List of Archived Posts

2022 Newsgroup Postings (01/01 - 02/08)

Internet
LLMPS, MPIO, DEBE
Capitol rioters' tears, remorse don't spare them from jail
GML/SGML/HTML/Mosaic
GML/SGML/HTML/Mosaic
360 IPL
360/65, 360/67, 360/75
DEC VAX, VAX/Cluster and HA/CMP
DEC VAX, VAX/Cluster and HA/CMP
Capitol rioters' tears, remorse don't spare them from jail
360/65, 360/67, 360/75
Home Computers
Programming Skills
Mainframe I/O
Mainframe I/O
Mainframe I/O
IBM Clone Controllers
Mainframe I/O
Supercomputers w/creditcard
FS: IBM PS/2 VGA Moni
Service Processor
Departmental/distributed 4300s
IBM IBU (Independent Business Unit)
Target Marketing
Departmental/distributed 4300s
CP67 and BPS Loader
Is this group only about older computers?
Mainframe System Meter
Capitol rioters' tears, remorse don't spare them from jail
IBM HONE
CP67 and BPS Loader
370/195
KPMG auditors forged documents to avoid criticism, tribunal heard
138/148
1443 printer
Error Handling
Error Handling
Error Handling
IBM CICS
Mainframe I/O
Mythical Man Month
370/195
Automated Benchmarking
Automated Benchmarking
Automated Benchmarking
Automated Benchmarking
Automated Benchmarking
IBM Conduct
Mainframe Career
Acoustic Coupler
Science Fiction is a Luddite Literature
Haiti, Smedley Butler, and the Rise of American Empire
Acoustic Coupler
Automated Benchmarking
Automated Benchmarking
Precursor to current virtual machines and containers
370 Architecture Redbook
Computer Security
Computer Security
370 Architecture Redbook
370/195
File Backup
File Backup
Calma, 3277GA, 2250-4
370/195
CMSBACK
HSDT, EARN, BITNET, Internet
HSDT, EARN, BITNET, Internet
Financialization of Housing in Europe Is Intensifying
IBM Bus&Tag Channels
165/168/3033 & 370 virtual memory
165/168/3033 & 370 virtual memory
165/168/3033 & 370 virtual memory
MVT storage management issues
165/168/3033 & 370 virtual memory
165/168/3033 & 370 virtual memory
165/168/3033 & 370 virtual memory
165/168/3033 & 370 virtual memory
HSDT, EARN, BITNET, Internet
165/168/3033 & 370 virtual memory
165/168/3033 & 370 virtual memory
165/168/3033 & 370 virtual memory
Virtual Machine SIE instruction
165/168/3033 & 370 virtual memory
Mainframe Benchmark
HSDT SFS (spool file rewrite)
Virtual Machine SIE instruction
370/195
Virtual Machine SIE instruction
165/168/3033 & 370 virtual memory
Navy confirms video and photo of F-35 that crashed in South China Sea are real
ECPS Microcode Assist
Processor, DASD, VTAM & TCP/IP performance
HSDT Pitches
VM/370 Interactive Response
Latency and Throughput
370/195
9/11 and the Road to War
Virtual Machine SIE instruction
Science Fiction is a Luddite Literature
IBM PLI
Online Computer Conferencing
Online Computer Conferencing
Online Computer Conferencing
Mainframe Performance
IBM PLI
The Cult of Trump is actually comprised of MANY other Christian cults
The Cult of Trump is actually comprised of MANY other Christian cults
Not counting dividends IBM delivered an annualized yearly loss of 2.27%
Not counting dividends IBM delivered an annualized yearly loss of 2.27%
Not counting dividends IBM delivered an annualized yearly loss of 2.27%
On the origin of the /text section/ for code
GM C4 and IBM HA/CMP
On the origin of the /text section/ for code
On the origin of the /text section/ for code
Newt Gingrich started us on the road to ruin. Now, he's back to finish the job
On the origin of the /text section/ for code
GM C4 and IBM HA/CMP
GM C4 and IBM HA/CMP
Amazon Just Poured Fuel on the Web3 Fire
Series/1 VTAM/NCP
HSDT & Clementi's Kinston E&S lab
SHARE LSRAD Report
SHARE LSRAD Report
TCP/IP and Mid-range market
TCP/IP and Mid-range market
On the origin of the /text section/ for code
On why it's CR+LF and not LF+CR [ASR33]
SHARE LSRAD Report
Dataprocessing Career

Internet

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Internet
Date: 01 Jan 2022
Blog: Facebook
In the early 80s, I had the HSDT project ... T1 and faster computer links (both terrestrial and satellite), was also working with the NSF director and supposed to get $20M to interconnect the NSF supercomputer centers ... then congress cuts the budget, some other things happen and finally an RFP was released.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

NSF was going to give UC $120M for a Berkeley supercomputer center and I was giving presentations up there on the interconnect. Presumably as a result, I was also asked to work with the Berkeley 10M (telescope) group ... which was also working on the transition from film to digital, and high-speed transmission would be part of it ... they were doing some testing up at Lick Observatory, and visits to look at how it was all going were part of it. At the time they had a 200x200 CCD (40K) and some talk about getting 400x400 (160K) ... but there were also rumors that Spielberg was working with a 2Kx3K CCD (6M) for the transition of Hollywood from film to digital. Eventually they get an $80M grant from the Keck Foundation and it becomes the Keck 10M & observatory.
https://www.keckobservatory.org/

Turns out UC Regents building plan had UCSD getting the next new building and the Berkeley supercomputer center becomes the UCSD supercomputer center.

Old post with Preliminary Announcement (28Mar1986)
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

Internal IBM politics prevent us from bidding. The NSF director tries to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies, but that just makes the internal politics worse. The winning bid doesn't even install the T1 links called for ... they are 440kbit/sec links ... but apparently to make it look like it's meeting the requirements, they install telco multiplexors with T1 trunks (running multiple links/trunk). We periodically ridicule them, asking why they don't call it a T5 network (because some of those T1 trunks would in turn be multiplexed over T3 or even T5 trunks). Also the telco resources contributed were over four times the bid ... which was to help promote the evolution of new bandwidth-hungry applications w/o impacting their existing revenue streams.

as regional networks connect in, it becomes the NSFNET backbone, precursor to modern internet
https://www.technologyreview.com/s/401444/grid-computing/

NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

after leaving IBM we were brought in as consultants to a small client/server startup by two former Oracle people (that we had worked with on cluster scale-up for our HA/CMP IBM product) who were responsible for something they called "commerce server" and wanted to do payment transactions ... the startup had also invented this technology called "SSL" they wanted to use; the result is now frequently called "electronic commerce".

Later in the 90s we were working with many of the large web hosting operations (on payment transactions). One large operation pointed out that they had at least ten PORN servers that all had higher monthly hits than the published #1 ranked webservers (based on monthly hits). They also pointed out that the software/game servers had something like 50% credit card fraud compared to nearly zero for the PORN servers (cynical claim that PORN users were significantly more ethical than software/game users).

internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

as aside, in Jan1999, I was asked to try and help stop the coming economic mess (we failed). They said that some investment bankers had walked away "clean" from the 80s S&L crisis, were then running Internet IPO mills (invest a few million, hype, IPO for a couple billion, which then needed to fail to leave the field clear for the next round of IPOs) and were predicted to get next into securitized mortgages (I was to improve the integrity of mortgage supporting documents as countermeasure, however they then found that they could pay the rating agencies for triple-A ... and could start doing liar, no-documentation mortgages ... no-documentation, no documentation integrity).

economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
S&L crisis posts
https://www.garlic.com/~lynn/submisc.html#s&l.crisis
(triple-A rated) toxic CDO posts
https://www.garlic.com/~lynn/submisc.html#toxic.cdo

some past posts mentioning Keck
https://www.garlic.com/~lynn/2021k.html#56 Lick Observatory
https://www.garlic.com/~lynn/2021g.html#61 IBM HSDT & HA/CMP
https://www.garlic.com/~lynn/2021c.html#60 IBM CEO
https://www.garlic.com/~lynn/2021c.html#25 Too much for one lifetime? :-)
https://www.garlic.com/~lynn/2021b.html#25 IBM Recruiting
https://www.garlic.com/~lynn/2018d.html#76 George Lucas reveals his plan for Star Wars 7 through 9--and it was awful
https://www.garlic.com/~lynn/2015.html#20 Spaceshot: 3,200-megapixel camera for powerful cosmos telescope moves forward
https://www.garlic.com/~lynn/2014h.html#56 Revamped PDP-11 in Brooklyn
https://www.garlic.com/~lynn/2014.html#76 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014.html#8 We're About to Lose Net Neutrality -- And the Internet as We Know It
https://www.garlic.com/~lynn/2012o.html#55 360/20, was 1132 printer history
https://www.garlic.com/~lynn/2012k.html#86 OT: Physics question and Star Trek
https://www.garlic.com/~lynn/2012k.html#10 Slackware
https://www.garlic.com/~lynn/2011d.html#9 Hawaii board OKs plan for giant telescope
https://www.garlic.com/~lynn/2011b.html#58 Other early NSFNET backbone
https://www.garlic.com/~lynn/2009m.html#85 ATMs by the Numbers
https://www.garlic.com/~lynn/2009m.html#82 ATMs by the Numbers
https://www.garlic.com/~lynn/2006t.html#12 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2004h.html#7 CCD technology

--
virtualization experience starting Jan1968, online at home since Mar1970

LLMPS, MPIO, DEBE

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: LLMPS, MPIO, DEBE
Date: 02 Jan 2022
Blog: Facebook
MIT Lincoln Labs did a DEBE-like utility, "LLMPS" (Lincoln Labs MultiProgramming Supervisor), that they contributed to the SHARE library (it was more like a generalized monitor preloaded with a lot of DEBE-like functions). It turns out that Univ. of Michigan used it as scaffolding for building their MTS virtual memory operating system for the 360/67. Some past (MTS LLMPS) references
https://web.archive.org/web/20221216212415/http://archive.michigan-terminal-system.org/discussions/anecdotes-comments-observations/8-1someinformationaboutllmps
https://web.archive.org/web/20221216212415/http://archive.michigan-terminal-system.org/discussions/anecdotes-comments-observations/8didanythingofllmpsremainaspartofummps

MTS archive reference mentioning my garlic.com archives
https://web.archive.org/web/20221216212415/http://archive.michigan-terminal-system.org/discussions/published-information/8mtsismentionedinlynnwheelersgarliccommailinglistarchive

trivia: at the end of the semester after taking the 2hr intro to fortran/computers, I was hired as a student programmer to do something similar (to some of LLMPS): reimplement the 1401 MPIO (tape<->unit record) on a 360/30 ... given 360 princ-ops, assembler, and a bunch of hardware manuals, I got to design my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc ... within a few weeks I had a 2000-card assembler program

some recent posts mentioning MPIO
https://www.garlic.com/~lynn/2021k.html#1 PCP, MFT, MVT OS/360, VS1, & VS2
https://www.garlic.com/~lynn/2021j.html#72 In U.S., Far More Support Than Oppose Separation of Church and State
https://www.garlic.com/~lynn/2021j.html#64 addressing and protection, was Paper about ISO C
https://www.garlic.com/~lynn/2021j.html#63 IBM 360s
https://www.garlic.com/~lynn/2021i.html#61 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021i.html#59 The Uproar Ovear the "Ultimate American Bible"
https://www.garlic.com/~lynn/2021i.html#36 We've Structured Our Economy to Redistribute a Massive Amount of Income Upward
https://www.garlic.com/~lynn/2021f.html#98 No, the Vikings Did Not Discover America
https://www.garlic.com/~lynn/2021f.html#79 Where Would We Be Without the Paper Punch Card?
https://www.garlic.com/~lynn/2021f.html#46 Under God
https://www.garlic.com/~lynn/2021f.html#43 IBM Mainframe
https://www.garlic.com/~lynn/2021f.html#20 1401 MPIO
https://www.garlic.com/~lynn/2021f.html#19 1401 MPIO

other posts mentioning LLMPS
https://www.garlic.com/~lynn/2021j.html#68 MTS, 360/67, FS, Internet, SNA
https://www.garlic.com/~lynn/2021h.html#65 CSC, Virtual Machines, Internet
https://www.garlic.com/~lynn/2021e.html#43 Blank 80-column punch cards up for grabs
https://www.garlic.com/~lynn/2021b.html#27 DEBE?
https://www.garlic.com/~lynn/2021b.html#26 DEBE?
https://www.garlic.com/~lynn/2018e.html#100 The (broken) economics of OSS
https://www.garlic.com/~lynn/2017g.html#30 Programmers Who Use Spaces Paid More
https://www.garlic.com/~lynn/2016c.html#6 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2015d.html#35 Remember 3277?
https://www.garlic.com/~lynn/2015c.html#92 DEBE?
https://www.garlic.com/~lynn/2014j.html#50 curly brace languages source code style quides
https://www.garlic.com/~lynn/2012g.html#25 VM370 40yr anniv, CP67 44yr anniv
https://www.garlic.com/~lynn/2009p.html#76 The 50th Anniversary of the Legendary IBM 1401
https://www.garlic.com/~lynn/2007u.html#85 IBM Floating-point myths
https://www.garlic.com/~lynn/2007u.html#23 T3 Sues IBM To Break its Mainframe Monopoly
https://www.garlic.com/~lynn/2007u.html#18 Folklore references to CP67 at Lincoln Labs
https://www.garlic.com/~lynn/2007t.html#54 new 40+ yr old, disruptive technology
https://www.garlic.com/~lynn/2006m.html#42 Why Didn't The Cent Sign or the Exclamation Mark Print?
https://www.garlic.com/~lynn/2006k.html#41 PDP-1
https://www.garlic.com/~lynn/2005g.html#56 Software for IBM 360/30
https://www.garlic.com/~lynn/2004o.html#20 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004l.html#16 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004d.html#31 someone looking to donate IBM magazines and stuff
https://www.garlic.com/~lynn/2003f.html#41 SLAC 370 Pascal compiler found
https://www.garlic.com/~lynn/2002n.html#64 PLX
https://www.garlic.com/~lynn/2002n.html#54 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2001n.html#89 TSS/360
https://www.garlic.com/~lynn/2001n.html#45 Valid reference on lunar mission data being unreadable?
https://www.garlic.com/~lynn/2001m.html#55 TSS/360
https://www.garlic.com/~lynn/2000g.html#0 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000.html#89 Ux's good points.
https://www.garlic.com/~lynn/98.html#15 S/360 operating systems geneaology
https://www.garlic.com/~lynn/93.html#26 MTS & LLMPS?
https://www.garlic.com/~lynn/93.html#25 MTS & LLMPS?
https://www.garlic.com/~lynn/93.html#23 MTS & LLMPS?
https://www.garlic.com/~lynn/93.html#15 unit record & other controllers

--
virtualization experience starting Jan1968, online at home since Mar1970

Capitol rioters' tears, remorse don't spare them from jail

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Capitol rioters' tears, remorse don't spare them from jail
Date: 02 Jan 2022
Blog: Facebook
Capitol rioters' tears, remorse don't spare them from jail
https://www.msn.com/en-us/news/us/capitol-rioters-tears-remorse-don-t-spare-them-from-jail/ar-AASlA1u?ocid=msedgdhp&pc=U531

there were multiple articles Jan2021 about Jan6 events being sedition ... then little or nothing

Federal 'Strike Force' Builds Sedition Cases Against Capitol Rioters. Will It Work?
https://www.npr.org/2021/01/13/956285582/federal-strike-force-builds-sedition-cases-against-capitol-rioters-will-it-work
U.S. pursuing seditious conspiracy cases in 'unprecedented' probe of Capitol assault
https://www.reuters.com/article/us-usa-election-capitol-arrests/u-s-pursuing-seditious-conspiracy-cases-in-unprecedented-probe-of-capitol-assault-idUSKBN29H23C

if Jan6 activity is established as seditious conspiracy
https://en.wikipedia.org/wiki/Sedition
https://en.wikipedia.org/wiki/Seditious_conspiracy
... then all members of the conspiracy are guilty of felony murder if deaths occur during the conspiracy events ... see it all the time on TV where the bank robbery getaway driver is convicted of felony murder for any deaths that occurred in the bank during the robbery
https://en.wikipedia.org/wiki/Felony_murder_rule

https://en.wikipedia.org/wiki/Sedition

Sedition is overt conduct, such as speech and organization, that tends toward rebellion against the established order. Sedition often includes subversion of a constitution and incitement of discontent toward, or insurrection against, established authority. Sedition may include any commotion, though not aimed at direct and open violence against the laws. Seditious words in writing are seditious libel. A seditionist is one who engages in or promotes the interest of sedition.

... snip ...

https://en.wikipedia.org/wiki/Seditious_conspiracy

For a seditious conspiracy charge to be effected, a crime need only be planned, it need not be actually attempted. According to Andres Torres and Jose E. Velazquez, the accusation of seditious conspiracy is of political nature and was used almost exclusively against Puerto Rican independentistas in the twentieth century.[1] However, the act was also used in the twentieth century against communists (United Freedom Front),[2] neo-Nazis,[3] and terrorists such as the Provisional IRA in Massachusetts and Omar Abdel-Rahman.[4]

... snip ...

https://en.wikipedia.org/wiki/Felony_murder_rule

The rule of felony murder is a legal doctrine in some common law jurisdictions that broadens the crime of murder: when an offender kills (regardless of intent to kill) in the commission of a dangerous or enumerated crime (called a felony in some jurisdictions), the offender, and also the offender's accomplices or co-conspirators, may be found guilty of murder.

... snip ...

felony murder, sedition, and/or jan 6th posts
https://www.garlic.com/~lynn/2021i.html#56 "We are on the way to a right-wing coup:" Milley secured Nuclear Codes, Allayed China fears of Trump Strike
https://www.garlic.com/~lynn/2021h.html#101 The War in Afghanistan Is What Happens When McKinsey Types Run Everything
https://www.garlic.com/~lynn/2021g.html#58 The Storm Is Upon Us
https://www.garlic.com/~lynn/2021.html#51 Sacking the Capital and Honor
https://www.garlic.com/~lynn/2021.html#32 Fascism

inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

GML/SGML/HTML/Mosaic

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: GML/SGML/HTML/Mosaic
Date: 03 Jan 2022
Blog: Facebook
In the mid-60s, CTSS RUNOFF was rewritten for CP67/CMS as SCRIPT. Then in 1969, GML was invented at the IBM science center and GML tag processing was added to SCRIPT.
https://web.archive.org/web/20230703135757/http://www.sgmlsource.com/history/sgmlhist.htm

After a decade, GML morphs into ISO standard SGML ... and after another decade SGML morphs into HTML at CERN

there is this periodic discussion about whether the original HTML was an SGML morph or an SGML application ... the explanation was that making it an SGML application would have taken much longer and required more resources. HTML tags were defined to have SGML-like definitions with none of the overhead and infrastructure ... making it much quicker to implement and deploy
http://infomesh.net/html/history/early
... from above, then 1992 & later:

"However, HTML suffered greatly from the lack of standardization, and the dodgy parsing techniques allowed by Mosaic (in 1993). If HTML had been precisely defined as having to have an SGML DTD, it may not have become as popular as fast, but it would have been a lot architecturally stronger."

... snip ...

also references
https://www.w3.org/MarkUp/html-spec/index.html

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML/SGML/HTML posts
https://www.garlic.com/~lynn/submain.html#sgml

other trivia, 1st webserver in the us was on the SLAC VM370/CMS system
http://www.slac.stanford.edu/history/earlyweb/history.shtml
http://www.slac.stanford.edu/history/earlyweb/firstpages.shtml

I had very little to do with the browser at mosaic/netscape (trivia: when NCSA complained about the use of "mosaic" ... which company provided them with "netscape"?).

mosaic/netscape trivia ... my last product at IBM was HA/CMP ... it started out as HA/6000 for NYTimes to move their newspaper system (ATEX) from DEC VAX/Cluster to IBM RS/6000 ... but I renamed it HA/CMP (High Availability Cluster Multi-Processing) when I started doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (oracle, sybase, informix, ingres). Then early 1992, cluster scale-up is transferred, announced as IBM supercomputer and we were told we couldn't work on anything with more than 4 processors. We leave IBM a few months later.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

Later we are brought into a small client/server startup as consultants; two of the former oracle people that we had worked with on cluster scale-up are there, responsible for something called "commerce server", and they want to do payment transactions on the server. The startup had also invented this technology they called "SSL" they want to use; it is now frequently called "electronic commerce". I had absolute authority over everything between the servers and the payment networks ... but could only make recommendations on the browser/server side (some of which were almost immediately violated, resulting in lots of exploits).

some recent posts mentioning ecommerce:
https://www.garlic.com/~lynn/2022.html#0 Internet
https://www.garlic.com/~lynn/2021k.html#128 The Network Nation
https://www.garlic.com/~lynn/2021j.html#55 ESnet
https://www.garlic.com/~lynn/2021j.html#42 IBM Business School Cases
https://www.garlic.com/~lynn/2021j.html#32 IBM HSDT & HA/CMP
https://www.garlic.com/~lynn/2021j.html#30 IBM HSDT & HA/CMP
https://www.garlic.com/~lynn/2021j.html#10 System Availability
https://www.garlic.com/~lynn/2021h.html#83 IBM Internal network
https://www.garlic.com/~lynn/2021e.html#75 WEB Security
https://www.garlic.com/~lynn/2021e.html#74 WEB Security
https://www.garlic.com/~lynn/2021e.html#56 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2021d.html#87 Bizarre Career Events
https://www.garlic.com/~lynn/2021d.html#46 Cloud Computing
https://www.garlic.com/~lynn/2021d.html#16 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#86 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2021c.html#68 Online History
https://www.garlic.com/~lynn/2021b.html#90 IBM Innovation
https://www.garlic.com/~lynn/2021.html#70 Life After IBM
https://www.garlic.com/~lynn/2021.html#43 Dialup Online Banking
https://www.garlic.com/~lynn/2020.html#19 What is a mainframe?

--
virtualization experience starting Jan1968, online at home since Mar1970

GML/SGML/HTML/Mosaic

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: GML/SGML/HTML/Mosaic
Date: 04 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#3 GML/SGML/HTML/Mosaic

Original HONE (world-wide, online sales&marketing support) was a result of the 23June1969 unbundling announcement ... starting to charge for maint., software, SE services, etc. SE training had included a sort of journeyman program ... part of a large group of SEs on site at the customer. However with unbundling, IBM couldn't figure out how *NOT* to charge for the trainee SEs at the customer account ... and thus was born HONE ... originally branch online access to CP67 systems for SEs running guest operating systems in virtual machines. The Science Center had also done a port of APL\360 to CMS for CMS\APL ... redid the APL\360 storage management to go from 16kbyte workspaces to large virtual machine address spaces ... and added an API to access system services (like file I/O) enabling real-world applications ... and APL-based applications started to appear on HONE for sales&marketing support (eventually the sales&marketing support applications start to dominate all HONE activity ... and the SE training with guest operating systems just withers away).

US HONE, even after growing to 16 168-3 processors, was still being overloaded with all these compute-intensive APL-based applications. Then some of the most compute-intensive APL-based applications were recoded in Fortran-H. However, one of the largest APL-based applications was SEQUOIA, which provided the online interface for the mostly computer-illiterate sales&marketing force. APL SEQUOIA was then responsible for invoking all the other HONE APL applications. They then needed a way for the APL\CMS application (SEQUOIA) to transparently invoke the FORTRAN-H applications ... and transparently return to SEQUOIA when done.

This contributed to another major HONE issue starting in the late 70s ... a branch manager would be promoted to DPD executive responsible for the group that included HONE ... and would be horrified to find out that HONE was VM370-based, not MVS-based (not understanding anything about the technology, but believing the IBM marketing) ... and figure his career would really be made if he was responsible for converting HONE to MVS. They would direct all HONE resources to the MVS conversion ... after a year it would be declared a success, the executive promoted (heads roll uphill) and things settle down to VM370. Then it would shortly be repeated ... going through 3-4 such cycles ... until somebody came up with the explanation that the reason HONE couldn't be moved off VM370 to MVS was because they ran my enhanced operating system (i.e. after joining IBM, one of my hobbies was enhanced operating systems for internal datacenters and HONE was a long-time customer). The HONE "problem" could be solved by directing them to move to the standard supported VM370 system (because what would happen if I was hit by a bus), which would then enable HONE to be moved to MVS systems.

23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
HONE and/or APL posts
https://www.garlic.com/~lynn/subtopic.html#hone

some specific posts mentioning SEQUOIA:
https://www.garlic.com/~lynn/2021k.html#34 APL
https://www.garlic.com/~lynn/2021d.html#43 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021b.html#33 HONE story/history
https://www.garlic.com/~lynn/2019b.html#26 This Paper Map Shows The Extent Of The Entire Internet In 1973
https://www.garlic.com/~lynn/2019b.html#14 Tandem Memo
https://www.garlic.com/~lynn/2012.html#14 HONE
https://www.garlic.com/~lynn/2011e.html#72 Collection of APL documents
https://www.garlic.com/~lynn/2011e.html#63 Collection of APL documents
https://www.garlic.com/~lynn/2011.html#28 Personal histories and IBM computing
https://www.garlic.com/~lynn/2010i.html#13 IBM 5100 First Portable Computer commercial 1977
https://www.garlic.com/~lynn/2009j.html#77 More named/shared systems
https://www.garlic.com/~lynn/2007h.html#62 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2006o.html#53 The Fate of VM - was: Re: Baby MVS???
https://www.garlic.com/~lynn/2006o.html#52 The Fate of VM - was: Re: Baby MVS???
https://www.garlic.com/~lynn/2006m.html#53 DCSS
https://www.garlic.com/~lynn/2005g.html#30 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005g.html#27 Moving assembler programs above the line
https://www.garlic.com/~lynn/2003f.html#21 "Super-Cheap" Supercomputing
https://www.garlic.com/~lynn/2002j.html#5 HONE, xxx#, misc
https://www.garlic.com/~lynn/2002j.html#3 HONE, Aid, misc
https://www.garlic.com/~lynn/2002j.html#0 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002i.html#76 HONE was .. Hercules and System/390 - do we need it?

--
virtualization experience starting Jan1968, online at home since Mar1970

360 IPL

From: Lynn Wheeler <lynn@garlic.com>
Subject: 360 IPL
Date: 04 Jan 2022
Blog: Facebook
I never figured out why some 360 system consoles were 009 and some 01f. I had a different problem with the system console where the system would just ring the bell and stop at IPL. Within a year of taking the two semester hr intro to computers/fortran, the univ hired me fulltime to be responsible for os/360 running on the 360/67, used as a 360/65 ... the datacenter would shutdown over the weekend and I would have the whole place dedicated to myself for 48hrs straight. At one point the system rang the bell and stopped dead ... I tried all sorts of things, including re-ipl several times ... but the system would just ring the bell and stop dead. I finally hit the 1052-7 system console hard and the paper fell out. Turns out the end of the fan-fold had run out past the "finger" that sensed out-of-paper (and the system was getting unit-check, intervention required) ... but there was still enough friction that the paper didn't drop out (so it looked like there was still paper). The hard slam on the console jarred it enough ... and it was evident that the console was out of paper.

Later, working for IBM at the science center ... found that the CE had spare 1052-7s ... because some at the center were in the habit of slamming the console with their fist and breaking the machine. CP67 was the first to have support for designated 2741s as alternate consoles. Then as part of 7x24, dark room operation, the system could automatically IPL and come up operational with no human present.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

specific past posts mentioning cp67 and dark room operation
https://www.garlic.com/~lynn/2021k.html#53 IBM Mainframe
https://www.garlic.com/~lynn/2021k.html#42 Clouds are service
https://www.garlic.com/~lynn/2021i.html#94 bootstrap, was What is the oldest computer that could be used today for real work?
https://www.garlic.com/~lynn/2021f.html#16 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021b.html#3 Will The Cloud Take Down The Mainframe?
https://www.garlic.com/~lynn/2019d.html#19 Moonshot - IBM 360 ?
https://www.garlic.com/~lynn/2019b.html#66 IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud Hits Air Pocket
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2018f.html#111 Online Timsharing
https://www.garlic.com/~lynn/2018f.html#16 IBM Z and cloud
https://www.garlic.com/~lynn/2018c.html#78 z/VM Live Guest Relocation
https://www.garlic.com/~lynn/2017g.html#46 Windows 10 Pro automatic update
https://www.garlic.com/~lynn/2016f.html#62 remote system support (i.e. the data center is 2 states away from you)
https://www.garlic.com/~lynn/2016b.html#86 Cloud Computing
https://www.garlic.com/~lynn/2015b.html#18 What were the complaints of binary code programmers that not accept Assembly?
https://www.garlic.com/~lynn/2014m.html#113 How Much Bandwidth do we have?
https://www.garlic.com/~lynn/2013j.html#38 1969 networked word processor "Astrotype"
https://www.garlic.com/~lynn/2013j.html#23 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013f.html#19 Where Does the Cloud Cover the Mainframe?
https://www.garlic.com/~lynn/2013c.html#91 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012l.html#47 I.B.M. Mainframe Evolves to Serve the Digital World
https://www.garlic.com/~lynn/2012k.html#41 Cloud Computing
https://www.garlic.com/~lynn/2012i.html#88 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012f.html#42 Oh hum, it's the 60s and 70's all over again
https://www.garlic.com/~lynn/2012f.html#27 Indirect Bit
https://www.garlic.com/~lynn/2012e.html#83 Why are organizations sticking with mainframes?
https://www.garlic.com/~lynn/2011f.html#6 New job for mainframes: Cloud platform
https://www.garlic.com/~lynn/2011e.html#84 New job for mainframes: Cloud platform
https://www.garlic.com/~lynn/2010n.html#42 Really dumb IPL question
https://www.garlic.com/~lynn/2007m.html#68 Operating systems are old and busted
https://www.garlic.com/~lynn/2007m.html#67 Operating systems are old and busted
https://www.garlic.com/~lynn/2006c.html#22 Military Time?
https://www.garlic.com/~lynn/2002l.html#62 Itanium2 performance data from SGI

--
virtualization experience starting Jan1968, online at home since Mar1970

360/65, 360/67, 360/75

From: Lynn Wheeler <lynn@garlic.com>
Subject: 360/65, 360/67, 360/75
Date: 05 Jan 2022
Blog: Facebook
originally announced as the 360/60 & 360/70 ... with 8byte, interleaved, 1usec memory; before ship, the memory was changed to 750nsec memory and the models changed to 360/65 & 360/75 .... then the 360/67 was added ... basically a 360/65 with virtual memory hardware (at least the single processor model). Originally the 360/67 was announced as up to four processors ... with completely multi-ported memory where all processors could address all channels (compared to the two-processor 360/65MP). Mostly only two-processor 360/67s shipped, except for a special three-processor 360/67 for the USAF/Lockheed MOL (manned orbital laboratory) project. The multiprocessor 360/67 multi-ported memory had a "channel director" and the control registers allowed sensing the switch settings on the channel director (as can be seen in the funcchar description, it was designed for up to a four-processor configuration) ... the special three-processor 360/67 also allowed changing the channel director switch settings by changing values in the control registers.

functional characteristics on bitsaver 360/65, 360/67, 360/75 (and others)
http://www.bitsavers.org/pdf/ibm/360/functional_characteristics/

funcchar also gives instruction timings: 2-byte instructions are charged an avg of 1/4th of the 8-byte instruction fetch time, plus instruction execution; 4-byte instructions an avg of 1/2 of the 8-byte instruction fetch time, plus instruction execution; 6-byte instructions an avg of 3/4th of the 8-byte instruction fetch time, plus instruction execution. The 360/67, when running in virtual memory mode, also has added address translation time for each memory access. The multiprocessor 360/67 multi-ported memory also has added processing time for each memory operation. Note a straight 360/67 "half-duplex" (multiprocessor hardware and multi-ported memory, but with only one processor installed) can show slower throughput (instruction timings) than a 360/67 "simplex" (or, when operating as a 360/65, than a 360/65) because of the added multi-ported memory latency. However, under heavy I/O load, a half-duplex could show higher throughput because of reduced interference between I/O and processor memory accesses (and a two-processor 360/67, running as a 360/65MP, higher throughput than a real 360/65MP, because of reduced memory contention).
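
As a rough back-of-the-envelope illustration of that timing rule (my own sketch, not taken from the functional characteristics manuals; the 750ns figure is the 8-byte memory fetch mentioned above and the execution times are made-up placeholders), the average instruction time works out to (instruction length / 8 bytes) times the 8-byte fetch time, plus execution:

# back-of-the-envelope sketch of the funcchar timing rule above:
# average time = (instruction length / 8) * 8-byte fetch time + execution.
# 750ns is the memory fetch mentioned above; execution times are placeholders.

FETCH_8_BYTES_NS = 750.0

def avg_instruction_time_ns(length_bytes, execution_ns):
    fetch_share = (length_bytes / 8.0) * FETCH_8_BYTES_NS
    return fetch_share + execution_ns

for length, exec_ns in [(2, 300.0), (4, 600.0), (6, 900.0)]:
    share = (length / 8.0) * FETCH_8_BYTES_NS
    total = avg_instruction_time_ns(length, exec_ns)
    print(f"{length}-byte instr: fetch share {share:.0f}ns "
          f"+ exec {exec_ns:.0f}ns = {total:.0f}ns")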

some past posts mentioning 360/67 functional characteristics
https://www.garlic.com/~lynn/2021j.html#0 IBM Lost Opportunities
https://www.garlic.com/~lynn/2019d.html#67 Facebook Knows More About You Than the CIA
https://www.garlic.com/~lynn/2019d.html#59 IBM 360/67
https://www.garlic.com/~lynn/2018e.html#20 Manned Orbiting Laboratory Declassified: Inside a US Military Space Station
https://www.garlic.com/~lynn/2017k.html#39 IBM etc I/O channels?
https://www.garlic.com/~lynn/2017k.html#13 Now Hear This-Prepare For The "To Be Or To Do" Moment
https://www.garlic.com/~lynn/2017j.html#85 Ferranti Atlas paging
https://www.garlic.com/~lynn/2017c.html#44 The ICL 2900
https://www.garlic.com/~lynn/2017.html#74 The ICL 2900
https://www.garlic.com/~lynn/2016d.html#103 Multithreaded output to stderr and stdout
https://www.garlic.com/~lynn/2014g.html#90 Fifty Years of nitpicking definitions, was BASIC,theProgrammingLanguageT
https://www.garlic.com/~lynn/2013m.html#42 US Naval History Conference
https://www.garlic.com/~lynn/2013b.html#14 what makes a computer architect great?
https://www.garlic.com/~lynn/2012f.html#37 Hard Disk Drive Construction
https://www.garlic.com/~lynn/2012d.html#65 FAA 9020 - S/360-65 or S/360-67?
https://www.garlic.com/~lynn/2012d.html#22 Hardware for linked lists
https://www.garlic.com/~lynn/2011k.html#86 'smttter IBMdroids
https://www.garlic.com/~lynn/2011k.html#62 Selectric Typewriter--50th Anniversary
https://www.garlic.com/~lynn/2011j.html#10 program coding pads
https://www.garlic.com/~lynn/2011g.html#4 What are the various alternate uses for the PC's LSB ?
https://www.garlic.com/~lynn/2011.html#73 Speed of Old Hard Disks - adcons
https://www.garlic.com/~lynn/2011.html#6 IBM 360 display and Stanford Big Iron
https://www.garlic.com/~lynn/2010l.html#2 TSS (Transaction Security System)
https://www.garlic.com/~lynn/2010i.html#6 45 years of Mainframe
https://www.garlic.com/~lynn/2010g.html#78 memory latency, old and new

--
virtualization experience starting Jan1968, online at home since Mar1970

DEC VAX, VAX/Cluster and HA/CMP

From: Lynn Wheeler <lynn@garlic.com>
Subject: DEC VAX, VAX/Cluster and HA/CMP
Date: 05 Jan 2022
Blog: Facebook
In the wake of the failed IBM FS project (in the 1st half of the 70s), there was a mad rush to get stuff back into the 370 product pipelines ... including kicking off the quick&dirty 3033&3081 efforts in parallel. The head of POK then managed to convince corporate to kill the vm370 project, shut down the development group and transfer all the people to work on MVS/XA (on the excuse that otherwise MVS/XA wouldn't ship on time). They weren't planning on telling the people until just before the move (to minimize the number that might escape the transfer), however it managed to leak and several people managed to escape. There is a joke that the head of IBM POK was one of the largest contributors to the infant DEC VAX project. There was also a witch hunt for who leaked the information (fortunately for me, nobody gave me up). Endicott manages to save the VM370 product mission, but had to reconstitute a development group from scratch.

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

IBM 4300s sold into the same mid-range market as VAX ... and in about the same numbers, at least for small unit orders. The big difference was large corporations with orders of several hundred vm/4300s for placing out in departmental areas (sort of the leading edge of the coming distributed computing tsunami). Old archived post with a decade of VAX sales, sliced&diced by model, year, US/non-US ... as can be seen, by the mid-80s, the mid-range market was starting to shift to workstations & large PC servers.
https://www.garlic.com/~lynn/2002f.html#0

Trivia: in 1989, we got the HA/6000 project, originally to move the NYTimes newspaper system (ATEX) off DEC VAX/Cluster to IBM RS/6000. I renamed it HA/CMP after starting work on technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (oracle, ingres, informix, sybase). The RDBMS vendors had vax/cluster and unix support in the same source base, and to ease their port to HA/CMP I had done an API with VAX/cluster semantics. Ingres especially also contributed a lot of suggestions on how to make throughput and efficiency improvements over the VAX/cluster implementation (since I was doing the implementation from scratch). Jan1992 had a meeting in (Oracle CEO) Ellison's conference room on cluster scale-up (16 processors by mid1992, 128 processors by ye1992) ... old archived post on the meeting
https://www.garlic.com/~lynn/95.html#13

within a few weeks of that meeting, cluster scale-up is transferred, announced as IBM supercomputer (for technical/scientific *ONLY*) and we were told we couldn't work on anything with more than four processors. We leave IBM a few months later.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

DEC VAX, VAX/Cluster and HA/CMP

From: Lynn Wheeler <lynn@garlic.com>
Subject: DEC VAX, VAX/Cluster and HA/CMP
Date: 05 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#7 DEC VAX, VAX/Cluster and HA/CMP

during FS, internal politics was killing off 370 efforts ... then as FS was imploding there was an ADTECH (advanced technology) conference ... the 801/risc group presented 801/risc and the CP.r system (which later evolves into the displaywriter follow-on ... when that was killed, it was redirected to the unix workstation market and they got the company that had done the AT&T unix port to IBM/PC for PC/IX to do one for the 801/risc ROMP, released as PC/RT & AIX) and we presented a 16-processor 370 (tightly-coupled) multiprocessor (everybody thot 16-way 370 was great until somebody told the head of POK that it could be decades before the POK favorite son operating system had effective 16-way support, e.g. ibm doesn't ship a 16-way until 2000 with z900). With the FS implosion, there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 projects in parallel. Another side-effect: many of the company's ADTECH groups were also thrown into the 370 development breach.

FS posts
https://www.garlic.com/~lynn/submain.html#futuresys

I've claimed that I held the first ADTECH conference (after the FS implosion) 4&5Mar1982 ... which included a presentation on the VM/IX effort ... old archived post with the agenda for that conference:
https://www.garlic.com/~lynn/96.html#4a

There was an effort to get IBM to make an offer to the person at Princeton that did a UNIX port to 370 ... it didn't happen and he was hired by AMDAHL when he graduated ... doing "GOLD" (i.e. Au, or Amdahl Unix). Also IBM Palo Alto was working on a UC Berkeley BSD Unix port to 370 (but then got redirected to do the port to the PC/RT instead, released as "AOS" ... as an alternative to AIX). IBM Palo Alto was also working with UCLA on a port of their "UNIX" (Locus) ... eventually released as AIX/370 & AIX/386 (no relationship to the AWD workstation AIX). Both Amdahl Unix and AIX/370 tended to run in VM370 virtual machines. The claimed issue was that field engineering wouldn't support a system that didn't have advanced EREP ... and adding advanced EREP to UNIX was many times the effort of a straight UNIX port to 370 ... field engineering would accept VM370 EREP (with UNIX running in a virtual machine).

Some old email to me about unix on 370
https://www.garlic.com/~lynn/2007c.html#email850108
some email from me
https://www.garlic.com/~lynn/2007c.html#email850712
and later email to me
https://www.garlic.com/~lynn/2007c.html#email861209
and response
https://www.garlic.com/~lynn/2007c.html#email861209b
... it mentions "PAM", paged-mapped CMS filesystem that I originally did for CP67/CMS ... and then ported to VM370/CMS.

CMS paged-mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
801/risc posts
https://www.garlic.com/~lynn/subtopic.html#801

... other trivia ... we had done some similar VM changes for the (original sql/relational) System/R implementation (shared segments between the System/R supervisor and worker address spaces). For the tech transfer of System/R to Endicott ... it was modified so SQL/DS didn't require the VM changes.

System/R posts
https://www.garlic.com/~lynn/submain.html#systemr

some recent posts mentioning 16-way 370 SMP
https://www.garlic.com/~lynn/2021h.html#51 OoO S/360 descendants
https://www.garlic.com/~lynn/2021h.html#44 OoO S/360 descendants
https://www.garlic.com/~lynn/2021.html#1 How an obscure British PC maker invented ARM and changed the world
https://www.garlic.com/~lynn/2019e.html#146 Water-cooled 360s?
https://www.garlic.com/~lynn/2019c.html#70 2301, 2303, 2305-1, 2305-2, paging, etc
https://www.garlic.com/~lynn/2019c.html#48 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018e.html#22 Manned Orbiting Laboratory Declassified: Inside a US Military Space Station
https://www.garlic.com/~lynn/2018c.html#61 Famous paper on security and source code from the '60s or '70s
https://www.garlic.com/~lynn/2018b.html#53 Think you know web browsers? Take this quiz and prove it
https://www.garlic.com/~lynn/2017f.html#99 IBM downfall
https://www.garlic.com/~lynn/2017e.html#35 Mainframe Family tree and chronology 2
https://www.garlic.com/~lynn/2017d.html#71 Software as a Replacement of Hardware
https://www.garlic.com/~lynn/2017c.html#50 Mainframes after Future System
https://www.garlic.com/~lynn/2017c.html#30 The ICL 2900

SMP/multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

Capitol rioters' tears, remorse don't spare them from jail

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Capitol rioters' tears, remorse don't spare them from jail
Date: 06 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#2 Capitol rioters' tears, remorse don't spare them from jail

Capitol Rioter Admits False Statements to FBI, but Prosecutors Haven't Charged Him With a Felony. The Justice Department frequently charges Muslims with felonies for making false statements to federal agents.
https://theintercept.com/2022/01/03/capitol-riot-january-6-proud-boys-fbi/
Trump 'needs to be in prison' for Jan. 6 riot, says partner of fallen Capitol Police officer Brian Sicknick
https://www.washingtonpost.com/national-security/2022/01/04/trump-capitol-riot-sicknick-garza-prison/

note even planning is prosecutable criminal sedition, it doesn't have to be executed ... but back in the 30s, congress claimed it as a loophole for influential friends.

Smedley Butler's "War Is A Racket"
https://en.wikipedia.org/wiki/War_Is_a_Racket
... and "perpetual war" is preferred over actually winning.
https://en.wikipedia.org/wiki/Perpetual_war

Smedley Butler, retired USMC major general and two-time Medal of Honor Recipient
https://en.wikipedia.org/wiki/Smedley_Butler

American fascists (wall street bankers, special interests, oligarchs) invited Smedley to lead military overthrow of the US Gov. ... and he blew the whistle
https://en.wikipedia.org/wiki/Business_Plot

In the last few weeks of the committee's official life it received evidence showing that certain persons had made an attempt to establish a fascist organization in this country. No evidence was presented and this committee had none to show a connection between this effort and any fascist activity of any European country. There is no question that these attempts were discussed, were planned, and might have been placed in execution when and if the financial backers deemed it expedient.

... snip ...

inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
racism posts
https://www.garlic.com/~lynn/submisc.html#racism
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

past posts specifically mentioning "business plot":
https://www.garlic.com/~lynn/2021j.html#104 Who Knew ?
https://www.garlic.com/~lynn/2021i.html#56 "We are on the way to a right-wing coup:" Milley secured Nuclear Codes, Allayed China fears of Trump Strike
https://www.garlic.com/~lynn/2021h.html#101 The War in Afghanistan Is What Happens When McKinsey Types Run Everything
https://www.garlic.com/~lynn/2021f.html#80 After WW2, US Antifa come home
https://www.garlic.com/~lynn/2021c.html#96 How Ike Led
https://www.garlic.com/~lynn/2021b.html#91 American Nazis Rally in New York City
https://www.garlic.com/~lynn/2021.html#66 Democracy is a threat to white supremacy--and that is the cause of America's crisis
https://www.garlic.com/~lynn/2021.html#32 Fascism
https://www.garlic.com/~lynn/2019e.html#145 The Plots Against the President
https://www.garlic.com/~lynn/2019e.html#112 When The Bankers Plotted To Overthrow FDR
https://www.garlic.com/~lynn/2019e.html#107 The Great Scandal: Christianity's Role in the Rise of the Nazis
https://www.garlic.com/~lynn/2019e.html#106 OT, "new" Heinlein book
https://www.garlic.com/~lynn/2019e.html#96 OT, "new" Heinlein book
https://www.garlic.com/~lynn/2019e.html#91 OT, "new" Heinlein book
https://www.garlic.com/~lynn/2019e.html#63 Profit propaganda ads witch-hunt era
https://www.garlic.com/~lynn/2019c.html#36 Is America A Christian Nation?
https://www.garlic.com/~lynn/2019c.html#17 Family of Secrets
https://www.garlic.com/~lynn/2016c.html#79 Qbasic
https://www.garlic.com/~lynn/2016.html#31 I Feel Old

--
virtualization experience starting Jan1968, online at home since Mar1970

360/65, 360/67, 360/75

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 360/65, 360/67, 360/75
Date: 06 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#6 360/65, 360/67, 360/75

a decade ago, I was asked by an IBM customer if I could track down the decision to move all 370s to virtual memory. Eventually found the assistant to the executive involved ... old archived post with pieces of his reply
https://www.garlic.com/~lynn/2011d.html#73

basically MVT storage management was so bad that regions had to be specified four times larger than actually used ... as a result a typical 1mbyte 370/165 would only support four concurrent regions ... not enough to keep the processor utilized (and/or justified). Moving to 16mbyte virtual memory would allow the number of regions to be increased by a factor of four with little or no paging.

During this period I would periodically drive down from cambridge to POK, and I would find Ludlow doing the VS2 prototype on a duplex 360/67 in 706. Most of VS2/SVS was adding a little bit of code to create a single 16mbyte virtual address space and running MVT in it (little different than running MVT in a CP67 16mbyte virtual machine). The biggest hack was slipping a copy of CP67 CCWTRANS into the side of SVC0/EXCP (i.e. creating a copy of the passed channel program ... converting virtual addresses to real).
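
A minimal conceptual sketch of what "copying the passed channel program and converting virtual addresses to real" amounts to (my own illustration, not the CP67 CCWTRANS or SVS code; the CCW fields, 4K pages and simple page-table lookup are assumptions, and real CCW translation also has to pin pages, split CCWs that cross page boundaries and handle data chaining):

# conceptual sketch: build a shadow copy of a channel program with the
# virtual data addresses replaced by real addresses (simplified; mine,
# not the CP67/SVS code)

PAGE_SIZE = 4096

class CCW:
    def __init__(self, opcode, data_address, flags, count):
        self.opcode = opcode
        self.data_address = data_address
        self.flags = flags
        self.count = count

def virt_to_real(vaddr, page_table):
    # page_table: virtual page number -> real page frame number
    page, offset = divmod(vaddr, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

def translate_channel_program(virtual_ccws, page_table):
    shadow = []
    for ccw in virtual_ccws:
        shadow.append(CCW(ccw.opcode,
                          virt_to_real(ccw.data_address, page_table),
                          ccw.flags,
                          ccw.count))
    return shadow   # the shadow copy is what actually gets started on the channel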

some other posts referring to the 2011d.html#73 post
https://www.garlic.com/~lynn/2021j.html#77 IBM 370 and Future System
https://www.garlic.com/~lynn/2021i.html#23 fast sort/merge, OoO S/360 descendants
https://www.garlic.com/~lynn/2021h.html#48 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021g.html#39 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2021g.html#25 Execute and IBM history, not Sequencer vs microcode
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021e.html#32 Univac 90/30 DIAG instruction
https://www.garlic.com/~lynn/2021d.html#39 IBM 370/155
https://www.garlic.com/~lynn/2021b.html#63 Early Computer Use
https://www.garlic.com/~lynn/2019d.html#120 IBM Acronyms
https://www.garlic.com/~lynn/2019c.html#25 virtual memory
https://www.garlic.com/~lynn/2019.html#78 370 virtual memory
https://www.garlic.com/~lynn/2019.html#18 IBM assembler
https://www.garlic.com/~lynn/2017g.html#56 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2017e.html#19 MVT doesn't boot in 16mbytes
https://www.garlic.com/~lynn/2016h.html#45 Resurrected! Paul Allen's tech team brings 50-year-old supercomputer back from the dead
https://www.garlic.com/~lynn/2015g.html#90 IBM Embraces Virtual Memory -- Finally
https://www.garlic.com/~lynn/2015c.html#69 A New Performance Model ?
https://www.garlic.com/~lynn/2015b.html#50 Connecting memory to 370/145 with only 36 bits

--
virtualization experience starting Jan1968, online at home since Mar1970

Home Computers

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Home Computers
Date: 06 Jan 2022
Blog: Facebook
convoluted set of circumstances: after leaving IBM, they let me have a RS6000/320 ... then the executive we had reported to (when we were doing IBM HA/CMP) had left and was president of MIPS (after SGI had bought MIPS) and he let me have his executive INDY ... so I had a big home desk with screens&keyboards for the 6000/320 (megapel), the Indy and a PC/486. Then for doing both PC & Unix device drivers for the pagesat satellite modem and a boardwatch magazine article ... I got a full usenet (satellite) feed.

HA/CMP had started out as HA/6000 for NYTimes to move their newspaper (ATEX) system off VAX/cluster to IBM. I renamed it HA/CMP (High Availability Cluster Multi-Processing) when I started working on technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (oracle, sybase, ingres, informix) ... they had vax/cluster and unix in the same source base, so to simplify the problems, I did an API that implemented the vax/cluster semantics .... they also (mostly Ingres) had many suggestions to improve vax/cluster shortcomings. Archived post mentioning the cluster scale-up meeting in (Oracle CEO) Ellison's conference room in Jan1992 (16 processors by mid1992 and 128 processors by ye1992)
https://www.garlic.com/~lynn/95.html#13

within a few weeks of the Ellison meeting, cluster scale-up was transferred, announced as IBM supercomputer (for technical/scientific *ONLY*) and we were told we couldn't work on anything with more than four processors. A few months later we leave IBM.

Part of the problem was the mainframe DB2 group complaining that if we were allowed to go ahead, it would be years ahead of them (and greatly exceed any throughput that they were capable of).

trivia: a decade earlier I had been doing some of the work on the original sql/relational, System/R (originally done on VM/370 on a 370/145). Then because the company was preoccupied with the next great DBMS "EAGLE" (follow-on to IMS), we were able to do the technology transfer to Endicott for SQL/DS. When "EAGLE" finally implodes, there was a request for how fast System/R could be ported to MVS ... which is finally released as DB2 (originally for decision support only).

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr

--
virtualization experience starting Jan1968, online at home since Mar1970

Programming Skills

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Programming Skills
Date: 07 Jan 2022
Blog: Facebook
In the late 70s and early 80s, I was blamed for online computer conferencing (precursor to social media) on the internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s) ... part of the folklore is that when the corporate executive committee was told about it, 5of6 wanted to fire me.

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

Part of the result was a researcher paid to sit in the back of my office for nine months, taking notes on how I communicated: face-to-face, phone calls, and going with me to meetings. They also got copies of all my incoming and outgoing email and logs of all my instant messages. The result was research reports, conference papers, books, and a Stanford Phd (joint with language and computer AI). The researcher had been an ESL teacher before going back to get the Phd, and one of the observations was that I didn't use English as a native speaker ... as if I had some other natural language (which I didn't). However, in various "hacker" get-togethers we've talked about thinking (& dreaming) in computer terms (aka analogous to people that are proficient in a natural language) ... seeing computers, computer languages, hardware ... as primary concepts ... and then constructing language statements to use them.

computer conferencing (computer mediated communication) posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

At the univ, I had taken the two semester hr intro to computers/fortran ... eventually had to produce a fortran program with 30-40 cards. The student keypunch room had 20 or so 026 keypunch machines ... with a 407 in the middle of the room with a plug-board setup to just produce a listing of a card deck ... sometimes there was a line waiting for an available keypunch. The univ. datacenter had a 709/1401 ... 709 tape->tape ... a standard student job took less than a second to compile and execute, with the 1401 as unit record front-end doing tape<->unit record. At the end of the semester I got a student programming job ... reimplementing 1401 MPIO (tape<->unit record) on a 360/30 ... given assembler manual, princ-of-ops, hardware manuals ... had to learn 360 from the manuals and got to design/implement monitor, device drivers, interrupt handlers, error handling, storage management, etc. After a few weeks I had a box (2000) of cards (big change from my job the previous summer where I was foreman on a construction job with three nine-man crews). The datacenter was shut down over the weekend and I had the whole place to myself, although 48hrs w/o sleep made monday morning classes a little hard.

The univ. had been sold a 360/67 for tss/360 to replace the 709/1401 ... the 360/30 was a temporary replacement for the 1401 until the 360/67 arrived. TSS/360 never came to production fruition and so the 360/67 ran as a 360/65 with os/360. Within a year of taking the intro class, I was hired fulltime to be responsible for os/360 (and continued to have the datacenter dedicated over the weekends, and monday morning classes never got any better). Those student fortran jobs that took <sec on the 709 were now taking over a minute on 360/65 os/360. I install HASP and it cuts the time in half. I then start with release 11, completely redoing STAGE2 SYSGEN to optimize the order and placement of datasets and PDS members for arm seek and multi-track search ... cutting student fortran by another 2/3rds to 12.9secs. 360/65 student fortran never beat the 709 until I installed Univ. of Waterloo WATFOR (one-step monitor, collect a whole tray of cards and run them all in one step).

... the 48hrs dedicated on weekends was somewhat akin to culture/language deep immersion ... you begin to think and process directly in computer components and languages.

Last week of Jan1968, three people from the Science Center came out to install CP67/CMS at the univ (3rd installation after the science center itself and MIT lincoln labs). It never got to the level of processing administration batch cobol ... and so was mostly restricted to me playing with it on weekends, getting to rewrite a lot of code and implement a lot of new function. Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities). I thot the Renton datacenter was possibly the largest in the world, a couple hundred million in IBM 360s, 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room. By comparison, the CFO had a small machine room up at Boeing field with a 360/30 for payroll ... and lots of politics between the CFO and the head of the Renton datacenter. They did enlarge the 360/30 machine room and installed a 360/67 for me to play with when I wasn't doing other stuff.

CP/67 & 360/67 trivia:
https://en.wikipedia.org/wiki/CP-67
https://en.wikipedia.org/wiki/CP/CMS

After I graduate, I join the IBM science center (instead of staying at Boeing); one of my hobbies after joining IBM was enhanced production operating systems for internal datacenters. The CP370 description in the above is somewhat garbled ... it actually started as a joint ("distributed" using early internal network links) project with Endicott to implement 370 virtual machines on CP/67 running on a 360/67 ... part of the effort included implementing the CMS multi-level source update support. Running on the real machine was one of my production CP67L systems, and in a 360/67 virtual machine would run CP67H, which had the updates to provide the option of 370 virtual machines (instead of 360). This added level was because the production CSC CP67 system had non-IBM users (students, staff, professors) from the Boston area ... and 370 virtual memory hadn't been announced yet. Then there were CP67I updates to CP67H where CP67 ran in a CP67H 370 virtual machine (rather than a 360/67). CP67I was in regular use a year before the first engineering 370 supporting virtual memory could IPL ... in fact IPL'ing CP67I was used as a test for that first engineering machine. Along the way, three people from San Jose came out and added 2305 and 3330 device support for what was CP67SJ ... which was widely used on a lot of internal 370 virtual memory machines (well before VM370 became available).

The VM370 wikipedia reference implies it was derived from CP370 ... but in fact VM370 was nearly a complete rewrite ... simplifying and/or dropping a lot of CP67 function/features (including multiprocessor support and a lot of stuff I had done as an undergraduate). Old email about spending much of 1974 adding CP67 features into VM370 for my production CSC/VM for internal datacenters.
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm

The implosion of FS and the mad rush to get stuff back into the 370 product pipelines ... contributed to decisions to start releasing some of my work to customers.

Future System Posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe I/O

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe I/O
Date: 08 Jan 2022
Blog: Facebook
There is big technology overlap between i86 server blades for supercomputers and big cloud megadatacenters ... and programming can be similar to what is used on windows PCs ... but they are almost all driven by linux systems (in part because they started out needing full source to adapt the systems for extremely efficient operation in clustered environments with hundreds of thousands of blade systems). Such blades will tend to have ten times the processing of a max configured mainframe and be driven at 100% utilization.

megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

In the early 90s, I was working on something similar for 128 processor rs/6000 and commercial cluster scale-up ... and mainframe people were complaining that if I was allowed to continue, it would be years ahead of them with many times the throughput. Then w/o warning, cluster scale-up is transferred, announced as IBM supercomputer (for technical/scientific *ONLY*), and we were told we weren't allowed to work on anything with more than four processors (we leave IBM a few months later).

ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

The supercomputer schedule for SP1 was initially based on our 128 processor high availability commercial cluster scale-up ... then they started changing all sorts of things (but by then we were long gone). Not long before, I had been asked to write a section for the IBM corporate continuous availability document ... but it got pulled when both Rochester (as/400) and POK (mainframe) complained they couldn't meet the objectives.

disaster survivable, geographic survivable and/or availability posts
https://www.garlic.com/~lynn/submain.html#available

Trivia: as an undergraduate in the 60s, within a year of taking a 2 semester hr intro to fortran/computers, the univ hires me fulltime to be responsible for OS/360 ... and I get to do a lot of ibm software. Along the way, the univ. library gets an ONR grant to do an online catalog, and part of the money goes for a 2321 datacell. The project is selected for IBM betatest of the original CICS product ... and debugging CICS is added to my duties (CICS still had some number of glitches).

posts mentioning CICS &/or BDAM
https://www.garlic.com/~lynn/submain.html#cics

The first place I saw the myth (about mainframe channel throughput) really appear was with 3090. 3090 had initially sized the number of channels for balanced system throughput based on the assumption that the 3880 controller was similar to the 3830 controller but with 3mbyte/sec transfer (& 3mbyte/sec 3380 disks). However, the 3880 controller had a really slow microprocessor (with a special hardware path for data transfer), which enormously drove up channel busy (compared to what a 3830 w/3mbyte would have been). When they realized how bad the 3880 really was, they had to significantly increase the number of channels (in order to meet throughput objectives), offsetting the significant 3880 channel busy ... the increased channels required an extra TCM ... and 3090 people joked that they would bill the 3880 group for the increase in 3090 manufacturing costs. Marketing respun the increase in channels as an extraordinary I/O machine (as opposed to being required to offset the 3880 controller problems and meet the original throughput objectives).

In 1980, STL was bursting at the seams and was moving 300 people from the IMS group to an offsite bldg. They had tried "remote 3270" and found the human factors unacceptable. I get con'ed into doing channel extender support so they could place channel-attached 3270 controllers at the offsite bldg ... with no perceptible human factors difference between offsite and in STL. The hardware vendor then tries to get IBM to release my support, but there is a group in POK playing with some serial stuff and they get it veto'ed (afraid that if it was in the market, it would make it harder to justify their stuff).

channel extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

In 1988, LLNL (lawrence livermore national laboratory) is playing with some serial stuff and I'm asked to help them get it standardized, which quickly becomes the fibre-channel standard (FCS, including some stuff I had done in 1980). The POK people finally get their stuff released in 1990 with ES/9000 as ESCON, when it is already obsolete (i.e. 17mbytes/sec; FCS started with 1gbit/sec links, full-duplex, 2gbit/sec aggregate, 200mbyte/sec). Then some POK people become involved in FCS and define a heavy weight protocol that drastically reduces the native throughput, which is eventually released as FICON.

The most recent published "peak I/O" benchmark I can find is for a max configured z196 getting 2M IOPS with 104 FICON (running over 104 FCS) ... using emulated CKD disks on industry standard fixed-block disks (no real CKD disks made for decades). About the same time there was an FCS announced for E5-2600 blades (standard in cloud megadatacenters at the time) claiming over a million IOPS (two such FCS having higher throughput than 104 FICON running over 104 FCS) using industry standard disks.
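
Back-of-the-envelope comparison of the two claims (a hedged sketch using only the figures quoted in the preceding paragraph; the variable names are mine):

    # per-link comparison of the quoted benchmark claims (not official figures)
    z196_iops = 2_000_000         # quoted peak I/O, max configured z196
    ficon_links = 104             # FICON channels (each running over an FCS)
    single_fcs_iops = 1_000_000   # claimed IOPS for one FCS on an E5-2600 blade

    per_ficon = z196_iops / ficon_links
    print(round(per_ficon))                    # ~19,231 IOPS per FICON
    print(round(single_fcs_iops / per_ficon))  # one native FCS ~52x one FICON
    print(2 * single_fcs_iops > z196_iops)     # True: two FCS out-run 104 FICON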

FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe I/O

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe I/O
Date: 08 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#13 Mainframe I/O

getting to play disk engineer topic drift: I transferred to IBM San Jose Research in the 2nd half of the 70s and got to wander around IBM and customer datacenters in silicon valley. One of the places was bldg 14 (disk engineering) and 15 (disk product test) across the street. At the time they were doing all mainframe disk testing prescheduled, stand-alone, 7x24. They mentioned that they had recently tried to run MVS, but found that in that environment it had 15min mean-time-between-failure (requiring manual re-ipl). I offered to do an I/O supervisor that was bullet proof and never failed, so they could do any amount of on-demand, concurrent testing. The downside was that when they had a problem, they got into the habit of calling me and I had to spend an increasing amount of time playing disk engineer.

Bldg 15 tended to get very early engineering new systems for disk i/o testing ... and got #3 (or possibly #4) engineering 3033. We found a couple of strings of 3330 drives and a 3830 controller and put up our own private online service (in part since channel I/O testing only involved a percent or two of the processing). One monday morning, I get a call from bldg 15 asking what did I do over the weekend to make interactive response go to pieces. I said "nothing", what did they do. It eventually came out that somebody had replaced our 3830 controller with an early 3880 and we started to see how slow the 3880 really was. There was then all sorts of microcode tweaking to try to mask the 3880's slow processing ... but they never could really eliminate it (eventually leading to 3090 having to significantly increase the number of channels to compensate for the increase in 3880 channel busy).

getting to play disk engineer posts:
https://www.garlic.com/~lynn/subtopic.html#disk

channel extender trivia: in STL the channel-attached 3270 controllers were spread around all the disk channels. For the channel extender, there was a new box directly attached to the channels that was enormously faster (and much less channel busy) than the 3270 controllers for the same operations ... the reduction in channel busy significantly improved disk I/O throughput ... and increased overall system throughput by 10-15%.

channel extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe I/O

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe I/O
Date: 09 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#13 Mainframe I/O
https://www.garlic.com/~lynn/2022.html#14 Mainframe I/O

One of my hobbies after joining IBM (cambridge science center) was enhanced operating systems for internal datacenters (the IBM world-wide, online sales&marketing HONE systems were a long time customer) ... but I still got to attend SHARE and visit customers. The director of one of the largest financial datacenters on the east coast liked me to stop in and talk technology. At one point the IBM branch manager horribly offended the customer and in retaliation they ordered an Amdahl machine (up until then Amdahl had been selling into technical/scientific and univ. markets but had yet to break into the true-blue commercial market and this would be the first; see above about 370 products being killed during FS, which gave the clone 370 makers their market foothold). I was then asked to go spend a year onsite at the customer (to help obfuscate why an Amdahl machine was being ordered). I talked it over with the customer and they said they would like to have me onsite, but it wouldn't make any difference about the order, and so I told IBM no. I was then told that the branch manager was a good sailing buddy of the IBM CEO, and if I refused, I could forget about having an IBM career, promotions, raises. Not long later, I transfer to IBM San Jose Research on the opposite coast (got to wander around most of silicon valley, ibm datacenters, customers, other computer makers).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
FS posts
https://www.garlic.com/~lynn/submain.html#futuresys

I mentioned increasingly getting to play disk engineer in bldgs 14&15 across the street. They get an early engineering 3033 for disk&channel i/o testing. They also get an early 4341 for testing. Jan1979, I'm con'ed into doing some benchmarks on the 4341 (before first customer ship) for a national lab that was looking at getting 70 of them for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami ... a decade before doing HA/CMP and working with national labs on cluster scale-up).

playing disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

4300s sold into the same mid-range DEC VAX market and in similar numbers for small unit orders; the difference was large corporations with orders of several hundred 4300s at a time for placing out in departmental areas (sort of the leading edge of the coming distributed computing tsunami). A decade of DEC VAX sales, sliced and diced by year, model, US/non-US ... shows that by the mid-80s, the mid-range market was starting to move to workstations and large server PCs:
https://www.garlic.com/~lynn/2002f.html#0

4300s (& VAX) price/performance dropped below some tipping point. Also, the 4300 footprint, reduced environmentals, and not needing datacenter provisioning made it possible for both cluster computing in datacenters as well as distributed computing in departmental areas (inside IBM the deployment in departmental areas created a scarcity of conference rooms). Folklore is that at one point POK 3033s were so threatened by vm/4341 clusters that the head of POK convinced corporate to cut the allocation of a critical 4341 manufacturing component in half.

trivia: the national lab benchmark, from a cdc 6600 a decade earlier, ran in 35.77secs; the engineering 4341 was 36.21secs (but it turns out the early engineering 4341 processor cycle was nearly 20% slower than the production machines later shipped to customers)
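
Hedged side-calculation (my arithmetic, not from the benchmark report, and it assumes the benchmark time scaled roughly linearly with processor cycle time):

    # estimate what a production 4341 would have run, given the ~20% slower
    # engineering cycle (a rough linear-scaling assumption, nothing more)
    cdc6600_secs = 35.77
    eng_4341_secs = 36.21
    cycle_penalty = 0.20

    est_production_4341 = eng_4341_secs / (1 + cycle_penalty)
    print(round(est_production_4341, 1))   # ~30.2 secs, vs 35.77 on the 6600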

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Clone Controllers

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Clone Controllers
Date: 09 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2021k.html#124 IBM Clone Controllers

trivia: Gore was involved in getting NII passed ... more akin to the original purpose for NSFnet (before it evolved into the NSFNET backbone & precursor to the modern internet). Old post with preliminary announcement (28Mar1986)
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

other trivia: In 1991 I was participating in the NII meetings at LLNL ... including working with (NSC VP) Gary Christensen
https://en.wikipedia.org/wiki/National_Information_Infrastructure

I was also doing the HA/CMP product and working with LLNL and other national labs on technical/scientific cluster scale-up, along with porting the LLNL filesystem to HA/CMP. Old email about not being able to make a LLNL NII meeting; Gary fills in for me and then comes by and updates me on what went on.
https://www.garlic.com/~lynn/2006x.html#email920129

within a matter of hours of that email, cluster scale-up is transferred, announced as IBM supercomputer, and we are told we can't work on anything having more than four processors (we leave IBM a few months later).

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

NSC
https://en.wikipedia.org/wiki/Network_Systems_Corporation
trivia: NSC was formed by Thornton, Gary and some other CDC people

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

News articles mentioning Gore in post about NSFNET RFP kickoff meeting
John Markoff, NY Times, 29 December 1988, page D1
Paving way for data 'highway' Carl M Cannon, San Jose Mercury News, 17 Sep 89, pg 1E
https://www.garlic.com/~lynn/2000e.html#10 Is Al Gore The Father of the Internet?

other trivia: vendors were being asked to "donate" equipment to be part of NII "testbed". Later, Singapore invited all the US NII "testbed" participants and their costs were completely covered (as opposed to the situation in the US).

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe I/O

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe I/O
Date: 10 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#13 Mainframe I/O
https://www.garlic.com/~lynn/2022.html#14 Mainframe I/O
https://www.garlic.com/~lynn/2022.html#15 Mainframe I/O

Some of the MIT 7094/CTSS (the following also includes a history list of IBM mainframe systems)
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
people went to the 5th flr to do multics
https://en.wikipedia.org/wiki/Multics
and others went to the IBM science center on the 4th flr and did virtual machines, the internal network, lots of online and performance apps (including what evolves into capacity planning), invented GML (precursor to SGML & HTML) in 1969, etc.
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center

... and there was some amount of friendly rivalry between 4th&5th flrs. Multics sites
https://multicians.org/sites.html
including AFDSC in the pentagon
https://multicians.org/mga.html#AFDSC

old email (I had already transferred from CSC to SJR on the west coast), spring 1979: AFDSC wanted to come out and talk about getting 20 VM/4341s
https://www.garlic.com/~lynn/2001m.html#email790404
https://www.garlic.com/~lynn/2001m.html#email790404b
by the time they got around to coming out in the fall of 1979, it had grown to 210

note it wasn't really fair to compare the number of MULTICS installations with the number of VM370 installations ... or even just the total number of internal VM370 installations ... so (before transferring to SJR), I would compare the number of my internal CSC/VM installations, which was still more than the total number of MULTICS installations.

After joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters, originally CP67. The VM370 development group rewrote the virtual machine system, simplifying and/or dropping lots of CP67 stuff. I then spent some amount of 1974 moving a lot of stuff from CP67 to VM370; old email about CSC/VM
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm

--
virtualization experience starting Jan1968, online at home since Mar1970

Supercomputers w/creditcard

From: Lynn Wheeler <lynn@garlic.com>
Subject: Supercomputers w/creditcard
Date: 11 Jan 2022
Blog: Facebook
supercomputers w/creditcard

AWS launches new EC2 instance type for high performance computing tasks. The new Hpc6a instances are "purpose-built" to provide a cost-effective option for customers seeking cloud-based access to high-performance computing's demanding hardware requirements.
https://www.zdnet.com/article/aws-launches-new-ec2-instance-type-for-high-performance-computing-tasks/

.... a decade ago, articles started appearing about how people could get a supercomputer spun up (that could rank in the top 100 in the world) with just a credit card ... and an online web interface

gone 404, dec2011: Amazon Builds World's Fastest Nonexistent Supercomputer
https://web.archive.org/web/20140621024243/https://www.wired.com/2011/12/nonexistent-supercomputer/all/1
$1,279/hr, (42nd largest supercomputer in world, 240TFLOP in 17,000 cores)
https://www.cnet.com/news/amazon-takes-supercomputing-to-the-cloud/
... 240TFLOP, 240,000BFLOP/17,000cores, 14BFLOP/core

A couple months later (Apr2012), $4,824/hr, 51,132 cores
http://arstechnica.com/business/2012/04/4829-per-hour-supercomputer-built-on-amazon-cloud-to-fuel-cancer-research/

megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

posts mentioning cloud supercomputer:
https://www.garlic.com/~lynn/2016h.html#55 Why Can't You Buy z Mainframe Services from Amazon Cloud Services?
https://www.garlic.com/~lynn/2013b.html#15 A Private life?
https://www.garlic.com/~lynn/2013b.html#10 FW: mainframe "selling" points -- Start up Costs
https://www.garlic.com/~lynn/2012l.html#51 Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012l.html#47 I.B.M. Mainframe Evolves to Serve the Digital World
https://www.garlic.com/~lynn/2012l.html#42 I.B.M. Mainframe Evolves to Serve the Digital World
https://www.garlic.com/~lynn/2012l.html#34 X86 server
https://www.garlic.com/~lynn/2012h.html#70 How many cost a cpu second?
https://www.garlic.com/~lynn/2012f.html#12 Can Mainframes Be Part Of Cloud Computing?
https://www.garlic.com/~lynn/2012d.html#2 NASA unplugs their last mainframe
https://www.garlic.com/~lynn/2012b.html#6 Cloud apps placed well in the economic cycle
https://www.garlic.com/~lynn/2012.html#80 Article on IBM's z196 Mainframe Architecture
https://www.garlic.com/~lynn/2012.html#78 Has anyone successfully migrated off mainframes?

--
virtualization experience starting Jan1968, online at home since Mar1970

FS: IBM PS/2 VGA Moni

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: FS: IBM PS/2 VGA Moni
Newsgroups: alt.folklore.computers
Date: Tue, 11 Jan 2022 14:28:45 -1000
"Kurt Weiske" <kurt.weiske@realitycheckbbs.org.remove-yfw-this> writes:

Aside: I worked at a company quite recently that had a model 60, 8514 monitor and IBM (not Lexmark) 4019 printer with a tape drive that they used to perform restores from old archival AS/400 tapes. Still working, 25 years later.

late 70s, we had lots of discussions after work about the majority of ibm management being computer illiterate ... and what could be done to improve the situation. Then at one point, there was a rapidly spreading rumor that members of the corporate executive committee were using email ... and all of a sudden there was a rash of 3270 terminals being diverted from development projects to managers' desks (at the time 3270s were part of the annual budget process and required VP level signoff ... justification for "real" development) ... and would sit all day on managers' desks with the VM logon being burned into the screen (later the PROFS menu) ... with an administrative assistant actually handling the email ... part of trying to create a facade that the manager was computer literate.

A decade later ... managers were diverting m80+8514 from development projects to their desks (viewed as status symbol & facade of computer literacy) ... with the majority of actual computer use still being done by administrative assistants ... and the same VM logon &/or PROFS menu being burned into the screen.

some past posts mentioning 8514 screens
https://www.garlic.com/~lynn/2018.html#20 IBM Profs
https://www.garlic.com/~lynn/2017d.html#70 IBM online systems
https://www.garlic.com/~lynn/2016g.html#89 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2014f.html#26 upcoming TV show, "Halt & Catch Fire"
https://www.garlic.com/~lynn/2013j.html#46 Feds indict indentity theft ring
https://www.garlic.com/~lynn/2013b.html#58 Dualcase vs monocase. Was: Article for the boss
https://www.garlic.com/~lynn/2012d.html#37 IBM cuts more than 1,000 U.S. Workers
https://www.garlic.com/~lynn/2011d.html#13 I actually miss working at IBM
https://www.garlic.com/~lynn/2010e.html#15 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010c.html#88 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2009l.html#41 another item related to ASCII vs. EBCDIC
https://www.garlic.com/~lynn/2009f.html#66 How did the monitor work under TOPS?
https://www.garlic.com/~lynn/2003h.html#53 Question about Unix "heritage"

--
virtualization experience starting Jan1968, online at home since Mar1970

Service Processor

From: Lynn Wheeler <lynn@garlic.com>
Subject: Service Processor
Date: 11 Jan 2022
Blog: Facebook
Claim is that service processors came with TCMs ... the FE service procedure had been a scoping bootstrap ... no longer able to scope circuits in a TCM ... so the service processor for 3081 was a UC microprocessor (with RYO system) ... with lots of probes into the TCMs. Service processor for 3090 (3092)
https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html

started out as a 4331 with a highly modified version of VM370/CMS release 6 and all the screens done in CMS IOS3270 ... changed to a pair of (redundant) 4361s ... trivia: the 3092 also references a pair of 3370 FBA disks (even for MVS installations, which never had FBA support).

3274 trivia: when they first appeared ... they frequently required a (manual) re-IML (reset) ... we fairly quickly figured out that if you quickly hit every 3274 subchannel address with HDV/CLRIO ... it would re-IML itself.

3090s eventually got PR/SM & LPAR (an Amdahl-like hypervisor, but several years later, '88 instead of early 80s) ... i.e. multiple "logical" machines in a single physical machine.
https://en.wikipedia.org/wiki/PR/SM

recent 3092 posts:
https://www.garlic.com/~lynn/2021k.html#104 DUMPRX
https://www.garlic.com/~lynn/2021j.html#84 Happy 50th Birthday, EMAIL!
https://www.garlic.com/~lynn/2021j.html#24 Programming Languages in IBM
https://www.garlic.com/~lynn/2021i.html#61 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021h.html#55 even an old mainframer can do it
https://www.garlic.com/~lynn/2021g.html#27 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021d.html#25 Field Support and PSRs
https://www.garlic.com/~lynn/2021d.html#2 What's Fortran?!?!
https://www.garlic.com/~lynn/2021c.html#58 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2019e.html#120 maps on Cadillac Seville trip computer from 1978
https://www.garlic.com/~lynn/2019e.html#0 IBM HONE
https://www.garlic.com/~lynn/2019c.html#49 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2019b.html#82 TCM
https://www.garlic.com/~lynn/2019b.html#80 TCM
https://www.garlic.com/~lynn/2019.html#76 How many years ago?
https://www.garlic.com/~lynn/2018e.html#22 Manned Orbiting Laboratory Declassified: Inside a US Military Space Station
https://www.garlic.com/~lynn/2018d.html#48 IPCS, DUMPRX, 3092, EREP
https://www.garlic.com/~lynn/2018b.html#6 S/360 addressing, not Honeywell 200
https://www.garlic.com/~lynn/2018.html#43 VSAM usage for ancient disk models
https://www.garlic.com/~lynn/2017g.html#56 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2017e.html#16 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017c.html#94 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2017c.html#89 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2017c.html#88 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2017c.html#81 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2017c.html#80 Great mainframe history(?)
https://www.garlic.com/~lynn/2017c.html#50 Mainframes after Future System
https://www.garlic.com/~lynn/2017b.html#37 IBM LinuxONE Rockhopper
https://www.garlic.com/~lynn/2017.html#88 The ICL 2900

--
virtualization experience starting Jan1968, online at home since Mar1970

Departmental/distributed 4300s

From: Lynn Wheeler <lynn@garlic.com>
Subject: Departmental/distributed 4300s
Date: 12 Jan 2022
Blog: Facebook
4300s came out in 1979 and had a big explosion in sales .... both in datacenter, datacenter cluster and in distributed computing (large corporations ordering hundreds at a time for putting out in departmental areas). 4300s sold into the same DEC/VAX mid-range market and in similar numbers for small unit orders; the big difference was big corporations ordering hundreds at a time. Old post with a decade of dec/vax sales, sliced&diced by year, model and US/non-US (ignore the microvax for this comparison); it can be seen that by the mid-80s, the mid-range market was moving to workstations and large PCs.
https://www.garlic.com/~lynn/2002f.html#0

... trivia, vtam side-note: doing MVS performance analysis, there was "capture-ratio" ... the difference between all the actual accounted-for CPU use and the elapsed time minus wait state (actual total CPU use) ... i.e. "accounted for cpu use" divided by ("elapsed time" minus "wait state") ... which could be as low as 40%. It seemed that the lower capture ratios were associated with higher VTAM use. Sometimes it came into play when management assumed "accounted for cpu use" actually was "total cpu use" ... when doing capacity planning and migration to different processors.
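
In code form, the capture ratio described above (a minimal sketch; the sample numbers are made up purely for illustration):

    # capture ratio: accounted-for CPU vs total CPU actually consumed
    def capture_ratio(accounted_cpu_secs, elapsed_secs, wait_state_secs):
        total_cpu_secs = elapsed_secs - wait_state_secs   # actual total CPU use
        return accounted_cpu_secs / total_cpu_secs

    # hypothetical interval: 3600 elapsed secs, 600 secs in wait state,
    # but only 1200 CPU secs showing up in the job accounting
    print(capture_ratio(1200, 3600, 600))   # 0.4 ... i.e. as low as 40%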

On cottle road, the big bldg26 datacenter was bursting at the seams with large mainframes and looking at offloading some of the MVS work to departmental MVS/4300s (instead of VM/4300s) ... one of the things they didn't account for was the significant (un-captured) VTAM cpu use ... the other was the enormous human resources to support MVS .... the typical distributed VM/4300 question was how many tens of VM/4300s could be supported by a single person (as opposed to how many tens of people were required to support an MVS system). A major GPD MVS application required more OS/360 services than provided by the 64kbyte OS/360 simulation in CMS ... however some work out in Los Gatos/bldg29 found that with 12kbytes more in OS/360 simulation code, they could transfer most of the remaining MVS applications that previously wouldn't port to CMS (joke that the CMS 64kbyte OS/360 simulation was more efficient than the MVS 8mbyte OS/360 simulation).

Later, UNIX folks did some analysis comparing TCP pathlengths with MVS VTAM pathlengths .... finding TCP had approx. 5k instructions to do an 8kbyte NFS operation ... compared to MVS VTAM having 160k instructions to do the approximately similar lu6.2 operation.
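
Back-of-the-envelope on those pathlength figures (just the arithmetic implied by the numbers quoted above):

    # per-byte instruction overhead implied by the quoted pathlengths
    nfs_bytes = 8 * 1024
    tcp_instructions = 5_000        # TCP path for an 8kbyte NFS operation
    vtam_instructions = 160_000     # MVS VTAM path for the similar LU6.2 operation

    print(round(tcp_instructions / nfs_bytes, 2))    # ~0.61 instructions/byte
    print(round(vtam_instructions / nfs_bytes, 2))   # ~19.53 instructions/byte
    print(vtam_instructions // tcp_instructions)     # 32x pathlength difference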

past posts mentioning capture ratio &/or VTAM 160k instruction pathlengths
https://www.garlic.com/~lynn/2021j.html#1 IBM Lost Opportunities
https://www.garlic.com/~lynn/2021e.html#55 SHARE (& GUIDE)
https://www.garlic.com/~lynn/2021c.html#88 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2017i.html#73 When Working From Home Doesn't Work
https://www.garlic.com/~lynn/2017d.html#51 CPU Timerons/Seconds vs Wall-clock Time
https://www.garlic.com/~lynn/2015f.html#68 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2014b.html#102 CPU time
https://www.garlic.com/~lynn/2014b.html#85 CPU time
https://www.garlic.com/~lynn/2014b.html#82 CPU time
https://www.garlic.com/~lynn/2014b.html#80 CPU time
https://www.garlic.com/~lynn/2014b.html#78 CPU time
https://www.garlic.com/~lynn/2013d.html#14 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013d.html#8 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012n.html#68 Should you support or abandon the 3270 as a User Interface?
https://www.garlic.com/~lynn/2012j.html#71 Help with elementary CPU speed question
https://www.garlic.com/~lynn/2012h.html#70 How many cost a cpu second?
https://www.garlic.com/~lynn/2010m.html#39 CPU time variance
https://www.garlic.com/~lynn/2010e.html#76 LPARs: More or Less?
https://www.garlic.com/~lynn/2010e.html#33 SHAREWARE at Its Finest
https://www.garlic.com/~lynn/2010d.html#66 LPARs: More or Less?
https://www.garlic.com/~lynn/2008d.html#72 Price of CPU seconds
https://www.garlic.com/~lynn/2008.html#42 Inaccurate CPU% reported by RMF and TMON
https://www.garlic.com/~lynn/2007t.html#23 SMF Under VM
https://www.garlic.com/~lynn/2007g.html#82 IBM to the PCM market
https://www.garlic.com/~lynn/2006v.html#19 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2005m.html#16 CPU time and system load
https://www.garlic.com/~lynn/2004o.html#60 JES2 NJE setup

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM IBU (Independent Business Unit)

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM IBU (Independent Business Unit)
Date: 12 Jan 2022
Blog: Facebook
801/risc ROMP was supposed to be the displaywriter follow-on ... when that got canceled, they decided to retarget to the unix workstation market ... the group in Austin was moved to a unix workstation "IBU" ... to be free of the heavy/onerous IBM bureaucracy. However, all the various Austin plant site bureaucracies would claim the IBU might be free from other bureaucracies, but not theirs (and IBUs weren't staffed to handle normal IBM bureaucracy). It went downhill from there; an example was that for PC/RT (at-bus), AWD did their own 4mbit token-ring card. For RS/6000 w/microchannel, AWD was directed (by a senior executive VP) to use the performance-kneecapped PS2 cards (and couldn't do high-performance workstation cards). Example: the PS2 16mbit T/R microchannel card had lower card throughput than the PC/RT 4mbit T/R card.

801/risc posts
https://www.garlic.com/~lynn/subtopic.html#801

posts mentioning Learson's Bureaucracy Management Briefing:
https://www.garlic.com/~lynn/2021g.html#51 Intel rumored to be in talks to buy chip manufacturer GlobalFoundries for $30B
https://www.garlic.com/~lynn/2021g.html#32 Big Blue's big email blues signal terminal decline - unless it learns to migrate itself
https://www.garlic.com/~lynn/2021e.html#62 IBM / How To Stuff A Wild Duck
https://www.garlic.com/~lynn/2021d.html#51 IBM Hardest Problem(s)
https://www.garlic.com/~lynn/2021.html#0 IBM "Wild Ducks"
https://www.garlic.com/~lynn/2017j.html#23 How to Stuff a Wild Duck
https://www.garlic.com/~lynn/2017f.html#109 IBM downfall
https://www.garlic.com/~lynn/2017b.html#56 Wild Ducks
https://www.garlic.com/~lynn/2015d.html#19 Where to Flatten the Officer Corps
https://www.garlic.com/~lynn/2013.html#11 How do we fight bureaucracy and bureaucrats in IBM?

IBU trivia: I took a two semester hr intro to fortran/computers and then within a year, the univ. hires me fulltime to be responsible for OS/360. Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing in an independent business unit to better monetize the investment, including offering services to non-Boeing entities). I thot Renton was possibly the largest datacenter in the world, a couple hundred million in 360s, 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room. Lots of politics between the director of the Renton datacenter and the CFO ... who just had a small machine room up at Boeing field for the 360/30 used for payroll (although they enlarged the machine room and installed a 360/67 for me to play with, when I wasn't doing other stuff). 747-3 was flying the skies of Seattle getting FAA flt certification. They had a disaster plan to replicate Renton up at the new 747 plant in Everett (Mt. Rainier heats up and the resulting mud slide takes out the Renton datacenter; some analysis found that the cost to Boeing of being w/o Renton for the recovery period would be more than replicating Renton). When I graduate, I join the IBM science center instead of staying at Boeing.

recent posts mentioning Boeing Computer Services IBU
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021k.html#55 System Availability
https://www.garlic.com/~lynn/2021j.html#63 IBM 360s
https://www.garlic.com/~lynn/2021i.html#89 IBM Downturn
https://www.garlic.com/~lynn/2021i.html#6 The Kill Chain: Defending America in the Future of High-Tech Warfare
https://www.garlic.com/~lynn/2021h.html#64 WWII Pilot Barrel Rolls Boeing 707
https://www.garlic.com/~lynn/2021h.html#46 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021g.html#39 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021f.html#78 The Long-Forgotten Flight That Sent Boeing Off Course
https://www.garlic.com/~lynn/2021f.html#57 "Hollywood model" for dealing with engineers
https://www.garlic.com/~lynn/2021e.html#80 Amdahl
https://www.garlic.com/~lynn/2021e.html#54 Learning PDP-11 in 2021
https://www.garlic.com/~lynn/2021b.html#62 Early Computer Use
https://www.garlic.com/~lynn/2021b.html#5 Availability
https://www.garlic.com/~lynn/2021.html#78 Interactive Computing
https://www.garlic.com/~lynn/2021.html#41 CADAM & Catia
https://www.garlic.com/~lynn/2020.html#32 IBM TSS
https://www.garlic.com/~lynn/2020.html#29 Online Computer Conferencing
https://www.garlic.com/~lynn/2020.html#10 "This Plane Was Designed By Clowns, Who Are Supervised By Monkeys"
https://www.garlic.com/~lynn/2019e.html#153 At Boeing, C.E.O.'s Stumbles Deepen a Crisis
https://www.garlic.com/~lynn/2019e.html#151 OT: Boeing to temporarily halt manufacturing of 737 MAX
https://www.garlic.com/~lynn/2019d.html#60 IBM 360/67
https://www.garlic.com/~lynn/2019b.html#80 TCM
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2019.html#54 IBM bureaucracy

--
virtualization experience starting Jan1968, online at home since Mar1970

Target Marketing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Target Marketing
Date: 13 Jan 2022
Blog: Facebook
After leaving IBM in the early 90s, we did a lot of work in the financial industry ... one project was a target marketing program that a financial outsourcing company wanted to offer as a service to its credit card merchants. The company handled about half of the credit-card-accepting (acquiring) merchants in the US ... as well as all processing for 500M consumer credit card accounts (issuing). The objective was to keep an 18 month summary of credit card purchases for consumer accounts ... and use it to make target marketing offers to credit card users (based on their past credit card purchases). When we got involved, the small pilot involving 16M accounts was floundering ... doing traditional RDBMS updates ... and we estimated that with even a small scale-up ... the nightly updates could take a week and the monthly offer evaluations could take months (the datacenter had >40 max configured IBM mainframes, none older than 18months, constant rolling upgrades, the number needed to finish financial settlement in the overnight batch window).

As an undergraduate, the univ. hired me fulltime to be responsible for OS/360 ... they were transitioning from 709 tape->tape, where student jobs took less than a second each, to a 360/65, where student jobs took over a minute each. I installed HASP and that cut it in half ... but there was still an enormous amount of random disk arm activity. I did highly customized SYSGENs to carefully place datasets and PDS members for optimized arm seek and multi-track search, which cut it another 2/3rds to 12.9secs. The 360/65 never beat the 709 until I installed Univ. of Waterloo WATFOR ... a single step monitor; feed it card trays of student jobs and it would run through them, getting the time per student job down to less than the 709 (with little random disk arm access).

Anyway, got a large multiprocessor Sequent machine (before IBM bought them and shut them down)
https://en.wikipedia.org/wiki/Sequent_Computer_Systems
with ESCON and 3590 tape drives.
https://en.wikipedia.org/wiki/IBM_3590
Evening processing would sort all the day's credit card transactions by account number and then serially read the input tape (the previous day's output tape), add any new transactions to each account record, and write to the (new) output tape ... easily handling even peak daily credit card transactions (which were seasonal, around the winter holidays). Monthly processing read the latest output tape and matched target marketing criteria to each account, outputting any results for that account (both daily & monthly processing were highly pipelined with lots of tape and processing activity going on concurrently; the objective was being able to run the tapes constantly at full speed: 9mbyte/sec, full 10gbyte tape, 18.5mins).
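
A minimal sketch of the style of daily run described above (hypothetical record layout and field names; the real system read and wrote sorted tapes, not Python lists), plus the tape-rate arithmetic:

    # sketch of the daily run: pure sequential merge of sorted data, no RDBMS
    def daily_merge(master_in, days_txns):
        """Both inputs sorted by account number; yields updated account records."""
        txns = iter(days_txns)
        txn = next(txns, None)
        for acct, history in master_in:              # previous day's output tape
            while txn is not None and txn[0] == acct:
                history.append(txn[1])               # add today's purchases
                txn = next(txns, None)
            yield acct, history                      # write to the new output tape

    master = [(1001, ["old purchase"]), (1002, [])]
    txns = [(1001, "grocery"), (1001, "gas")]
    print(list(daily_merge(master, txns)))
    # [(1001, ['old purchase', 'grocery', 'gas']), (1002, [])]

    # tape-rate arithmetic quoted above: a full 10gbyte tape at 9mbyte/sec
    print(round(10_000 / 9 / 60, 1))                 # ~18.5 minutes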

The target marketing matching application was outsourced to a software company on the Dulles access road, started by several IBMers that had been involved in doing the FAA ATC system
https://www.amazon.com/Brawl-IBM-1964-Joseph-Fox/dp/1456525514
https://www.amazon.com/Executive-Qualities-Joseph-M-Fox/dp/1453788794

Trivia: I had been involved in SCI (used for Sequent NUMA-Q) both before and after leaving IBM
https://en.wikipedia.org/wiki/Scalable_Coherent_Interconnect

posts mentioning datacenter with >40 max configured mainframes all running same 450K statement cobol application
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021k.html#58 Card Associations
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#87 UPS & PDUs
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021d.html#68 How Gerstner Rebuilt IBM
https://www.garlic.com/~lynn/2021c.html#61 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021c.html#49 IBM CEO
https://www.garlic.com/~lynn/2021b.html#4 Killer Micros
https://www.garlic.com/~lynn/2021.html#7 IBM CEOs
https://www.garlic.com/~lynn/2019e.html#155 Book on monopoly (IBM)
https://www.garlic.com/~lynn/2019c.html#80 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2019c.html#11 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2019b.html#62 Cobol
https://www.garlic.com/~lynn/2018f.html#13 IBM today
https://www.garlic.com/~lynn/2018d.html#43 How IBM Was Left Behind
https://www.garlic.com/~lynn/2018d.html#2 Has Microsoft commuted suicide
https://www.garlic.com/~lynn/2017k.html#57 When did the home computer die?
https://www.garlic.com/~lynn/2017d.html#43 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2015h.html#112 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2015c.html#65 A New Performance Model ?
https://www.garlic.com/~lynn/2014f.html#78 Over in the Mainframe Experts Network LinkedIn group
https://www.garlic.com/~lynn/2014f.html#69 Is end of mainframe near ?
https://www.garlic.com/~lynn/2014b.html#83 CPU time
https://www.garlic.com/~lynn/2013b.html#45 Article for the boss: COBOL will outlive us all
https://www.garlic.com/~lynn/2012i.html#25 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2011f.html#32 At least two decades back, some gurus predicted that mainframes would disappear
https://www.garlic.com/~lynn/2009g.html#20 IBM forecasts 'new world order' for financial services
https://www.garlic.com/~lynn/2009f.html#55 Cobol hits 50 and keeps counting
https://www.garlic.com/~lynn/2009e.html#76 Architectural Diversity
https://www.garlic.com/~lynn/2009d.html#5 Why do IBMers think disks are 'Direct Access'?
https://www.garlic.com/~lynn/2008l.html#81 Intel: an expensive many-core future is ahead of us
https://www.garlic.com/~lynn/2008d.html#73 Price of CPU seconds
https://www.garlic.com/~lynn/2008c.html#24 Job ad for z/OS systems programmer trainee
https://www.garlic.com/~lynn/2007u.html#21 Distributed Computing
https://www.garlic.com/~lynn/2006u.html#50 Where can you get a Minor in Mainframe?

sci posts
https://www.garlic.com/~lynn/2021i.html#16 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021h.html#45 OoO S/360 descendants
https://www.garlic.com/~lynn/2021b.html#64 Early Computer Use
https://www.garlic.com/~lynn/2021b.html#44 HA/CMP Marketing
https://www.garlic.com/~lynn/2019d.html#81 Where do byte orders come from, Nova vs PDP-11
https://www.garlic.com/~lynn/2019c.html#53 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2018e.html#100 The (broken) economics of OSS
https://www.garlic.com/~lynn/2018d.html#57 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2018b.html#53 Think you know web browsers? Take this quiz and prove it
https://www.garlic.com/~lynn/2017e.html#94 Migration off Mainframe to other platform
https://www.garlic.com/~lynn/2017d.html#36 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2017c.html#54 The ICL 2900
https://www.garlic.com/~lynn/2017c.html#49 The ICL 2900
https://www.garlic.com/~lynn/2017c.html#48 The ICL 2900
https://www.garlic.com/~lynn/2017b.html#73 The ICL 2900
https://www.garlic.com/~lynn/2016h.html#95 Retrieving data from old hard drives?
https://www.garlic.com/~lynn/2016f.html#86 3033
https://www.garlic.com/~lynn/2016e.html#45 How the internet was invented
https://www.garlic.com/~lynn/2016c.html#70 Microprocessor Optimization Primer
https://www.garlic.com/~lynn/2016b.html#74 Fibre Channel is still alive and kicking
https://www.garlic.com/~lynn/2016.html#19 Fibre Chanel Vs FICON
https://www.garlic.com/~lynn/2015g.html#74 100 boxes of computer books on the wall
https://www.garlic.com/~lynn/2015g.html#72 100 boxes of computer books on the wall
https://www.garlic.com/~lynn/2014m.html#176 IBM Continues To Crumble
https://www.garlic.com/~lynn/2014m.html#173 IBM Continues To Crumble
https://www.garlic.com/~lynn/2014m.html#142 IBM Continues To Crumble
https://www.garlic.com/~lynn/2014m.html#140 IBM Continues To Crumble
https://www.garlic.com/~lynn/2014d.html#18 IBM ACS
https://www.garlic.com/~lynn/2014.html#85 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2014.html#71 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013n.html#50 'Free Unix!': The world-changing proclamation made30yearsagotoday
https://www.garlic.com/~lynn/2013m.html#96 SHARE Blog: News Flash: The Mainframe (Still) Isn't Dead
https://www.garlic.com/~lynn/2013m.html#78 'Free Unix!': The world-changing proclamation made 30 years agotoday
https://www.garlic.com/~lynn/2013m.html#70 architectures, was Open source software
https://www.garlic.com/~lynn/2013h.html#6 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013g.html#49 A Complete History Of Mainframe Computing
https://www.garlic.com/~lynn/2013d.html#12 relative mainframe speeds, was What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012p.html#13 AMC proposes 1980s computer TV series Halt & Catch Fire
https://www.garlic.com/~lynn/2012f.html#94 Time to competency for new software language?
https://www.garlic.com/~lynn/2011p.html#122 Deja Cloud?
https://www.garlic.com/~lynn/2011l.html#11 segments and sharing, was 68000 assembly language programming
https://www.garlic.com/~lynn/2011k.html#50 The real reason IBM didn't want to dump more money into Blue Waters
https://www.garlic.com/~lynn/2011f.html#46 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011f.html#45 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011.html#59 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2010j.html#2 Significant Bits
https://www.garlic.com/~lynn/2010i.html#61 IBM to announce new MF's this year
https://www.garlic.com/~lynn/2010h.html#19 How many mainframes are there?
https://www.garlic.com/~lynn/2010f.html#50 Handling multicore CPUs; what the competition is thinking
https://www.garlic.com/~lynn/2010f.html#49 Nonlinear systems and nonlocal supercomputing
https://www.garlic.com/~lynn/2010f.html#48 Nonlinear systems and nonlocal supercomputing
https://www.garlic.com/~lynn/2010f.html#47 Nonlinear systems and nonlocal supercomputing
https://www.garlic.com/~lynn/2010.html#92 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2010.html#44 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2010.html#41 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2010.html#31 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#59 Problem with XP scheduler?
https://www.garlic.com/~lynn/2009s.html#20 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#5 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2009p.html#71 Blast from the Past: 40 years of Multics, 1969-2009
https://www.garlic.com/~lynn/2009o.html#58 Rudd bucks boost IBM mainframe business
https://www.garlic.com/~lynn/2009o.html#29 Justice Department probing allegations of abuse by IBM in mainframe computer market
https://www.garlic.com/~lynn/2009h.html#80 64 Cores -- IBM is showing a prototype already
https://www.garlic.com/~lynn/2009e.html#7 IBM in Talks to Buy Sun
https://www.garlic.com/~lynn/2009.html#5 Is SUN going to become x86'ed ??
https://www.garlic.com/~lynn/2008r.html#1 What is better faster CPU speed or wider bus?
https://www.garlic.com/~lynn/2008p.html#68 "The Register" article on HP replacing z
https://www.garlic.com/~lynn/2008p.html#52 Serial vs. Parallel
https://www.garlic.com/~lynn/2008p.html#33 Making tea
https://www.garlic.com/~lynn/2008i.html#5 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008i.html#3 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008i.html#2 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008h.html#91 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008e.html#40 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2008e.html#24 Berkeley researcher describes parallel path
https://www.garlic.com/~lynn/2008c.html#81 Random thoughts
https://www.garlic.com/~lynn/2007m.html#72 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM
https://www.garlic.com/~lynn/2007m.html#13 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007l.html#55 Scholars needed to build a computer history bibliography
https://www.garlic.com/~lynn/2007i.html#78 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007g.html#3 University rank of Computer Architecture
https://www.garlic.com/~lynn/2006y.html#38 Wanted: info on old Unisys boxen
https://www.garlic.com/~lynn/2006x.html#11 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006w.html#2 IBM sues maker of Intel-based Mainframe clones
https://www.garlic.com/~lynn/2006u.html#33 Assembler question
https://www.garlic.com/~lynn/2006q.html#24 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006q.html#9 Is no one reading the article?
https://www.garlic.com/~lynn/2006q.html#8 Is no one reading the article?
https://www.garlic.com/~lynn/2006p.html#55 PowerPC or PARISC?
https://www.garlic.com/~lynn/2006p.html#46 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006m.html#52 TCP/IP and connecting z to alternate platforms
https://www.garlic.com/~lynn/2006l.html#43 One or two CPUs - the pros & cons
https://www.garlic.com/~lynn/2006c.html#41 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006c.html#40 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006c.html#7 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006c.html#1 Multiple address spaces
https://www.garlic.com/~lynn/2006b.html#14 Expanded Storage
https://www.garlic.com/~lynn/2005v.html#0 DMV systems?
https://www.garlic.com/~lynn/2005r.html#43 Numa-Q Information
https://www.garlic.com/~lynn/2005n.html#38 What was new&important in computer architecture 10 years ago ?
https://www.garlic.com/~lynn/2005n.html#37 What was new&important in computer architecture 10 years ago ?
https://www.garlic.com/~lynn/2005n.html#6 Cache coherency protocols: Write-update versus write-invalidate
https://www.garlic.com/~lynn/2005n.html#4 54 Processors?
https://www.garlic.com/~lynn/2005m.html#55 54 Processors?
https://www.garlic.com/~lynn/2005m.html#46 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005j.html#13 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005f.html#18 Is Supercomputing Possible?
https://www.garlic.com/~lynn/2005e.html#19 Device and channel
https://www.garlic.com/~lynn/2005e.html#12 Device and channel
https://www.garlic.com/~lynn/2005d.html#20 shared memory programming on distributed memory model?
https://www.garlic.com/~lynn/2005.html#50 something like a CTC on a PC
https://www.garlic.com/~lynn/2005.html#40 clusters vs shared-memory (was: Re: CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE))
https://www.garlic.com/~lynn/2004d.html#68 bits, bytes, half-duplex, dual-simplex, etc
https://www.garlic.com/~lynn/2004d.html#6 Memory Affinity
https://www.garlic.com/~lynn/2004.html#1 Saturation Design Point
https://www.garlic.com/~lynn/2003d.html#57 Another light on the map going out
https://www.garlic.com/~lynn/2003.html#6 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#0 Clustering ( was Re: Interconnect speeds )
https://www.garlic.com/~lynn/2002l.html#52 Itanium2 performance data from SGI
https://www.garlic.com/~lynn/2002j.html#45 M$ SMP and old time IBM's LCMP
https://www.garlic.com/~lynn/2002i.html#83 HONE
https://www.garlic.com/~lynn/2002h.html#78 Q: Is there any interest for vintage Byte Magazines from 1983
https://www.garlic.com/~lynn/2002g.html#10 "Soul of a New Machine" Computer?
https://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
https://www.garlic.com/~lynn/2001j.html#12 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001f.html#11 Climate, US, Japan & supers query
https://www.garlic.com/~lynn/2001b.html#85 what makes a cpu fast
https://www.garlic.com/~lynn/2001b.html#39 John Mashey's greatest hits
https://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP?
https://www.garlic.com/~lynn/96.html#25 SGI O2 and Origin system announcements
https://www.garlic.com/~lynn/96.html#8 Why Do Mainframes Exist ???

--
virtualization experience starting Jan1968, online at home since Mar1970

Departmental/distributed 4300s

From: Lynn Wheeler <lynn@garlic.com>
Subject: Departmental/distributed 4300s
Date: 14 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#21 Departmental/distributed 4300s

From SNA is not a SYSTEM, not a NETWORK, and not an ARCHITECTURE

The communication group had a fierce battle attempting to prevent mainframe TCP/IP support from being released. When they lost, they changed tactics and said that since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be released through them; what shipped got 44kbytes/sec aggregate throughput using nearly a whole 3090 processor. I did RFC1044 support and in some tuning tests at Cray Research between a 4341 and a Cray, got sustained 4341 channel throughput using only a modest amount of the 4341 processor.

posts mentioning RFC1044
https://www.garlic.com/~lynn/subnetwork.html#1044

Later the communication group hired a silicon valley contractor to implement TCP/IP support in VTAM ... what he initially demo'ed had TCP/IP running much faster than LU6.2. He was then told that EVERYBODY KNOWS that a PROPER TCP/IP implementation is much slower than LU6.2 ... and they would only be paying for a "PROPER" implementation.

--
virtualization experience starting Jan1968, online at home since Mar1970

CP67 and BPS Loader

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: CP67 and BPS Loader
Date: 14 Jan 2022
Blog: Facebook
Three people from the science center came out the last week of Jan1968 to install CP67 (virtual machine precursor to VM370) ... it never really was production at the univ. but I got to play with it in my 48hr window when the univ. shut down the datacenter on the weekend and I had it all to myself. At that time, all the CP67 source was kept as files on OS/360 (and assembled on OS/360). The assembled text decks for the system were arranged in a card tray with the BPS loader in the front. The BPS loader would be IPLed and the CP67 initializing routine (CPINIT) would write the memory image to disk for "system" IPL. A few months later, they shipped a version where all the source files were on CMS and assembled on CMS.

I started by rewriting a lot of CP67 pathlengths, significantly cutting the time for running OS/360 in a virtual machine. Old post with part of a 1968 SHARE presentation on the reduction in CP67 pathlength:
https://www.garlic.com/~lynn/94.html#18

OS jobstream on bare machine: 323 secs; originally running in a CP67 virtual machine: 856 secs; CP67 overhead 533 CPU secs. After some of the pathlength rewrite, runtime: 435 secs, CP67 overhead 112 CPU secs ... reducing CP67 CPU overhead from 533 to 112 CPU secs, a reduction of 421 CPU secs.

I continued to do pathlength optimization along with a lot of other stuff: dynamic adaptive resource management & scheduling algorithm, new page replacement algorithms, optimized I/O for arm seek ordering and disk&drum rotation, etc. At some point, I also started reorganizing a lot of the fixed kernel to make parts of it pageable (to reduce the fixed memory requirements) and ran into a problem with the BPS loader. Part of making pieces of the kernel pageable was splitting some modules up into 4k segments, which increased the number of TXT ESD entry symbols. The BPS loader had a table limit of 255 ESD entry symbols ... and I had all sorts of difficulty keeping the pageable kernel reorganization under 256 ESD symbols. Later, after graduating and joining the IBM Science Center, I was going through a card cabinet in the attic storage area ... and ran across the source for the BPS loader ... which I immediately collected and modified to support more than 255 ESD symbols.
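
A minimal sketch of the kind of fixed-size symbol-table limit involved (the data structures and names here are illustrative, not the actual BPS loader internals): the loader collects external symbols from the ESD entries of each text deck, and a 255-entry table overflows once the pageable-kernel split pushes the count past that.

  # toy model of a loader symbol table with a fixed entry limit
  # (illustrative only -- not the actual BPS loader data structures)
  MAX_ESD_ENTRIES = 255

  class LoaderTableFull(Exception):
      pass

  def collect_esd_symbols(text_decks, max_entries=MAX_ESD_ENTRIES):
      """Gather external symbols from a list of decks; each deck is a list
      of (symbol, address) ESD entries.  Raises when the table is full."""
      table = {}
      for deck in text_decks:
          for symbol, address in deck:
              if symbol not in table:
                  if len(table) >= max_entries:
                      raise LoaderTableFull(f"more than {max_entries} ESD symbols")
                  table[symbol] = address
      return table

  # splitting modules into 4k pageable segments adds ESD symbols per module,
  # so the fix was raising/removing max_entries in the loader source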

Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
page replacement posts
https://www.garlic.com/~lynn/subtopic.html#clock

--
virtualization experience starting Jan1968, online at home since Mar1970

Is this group only about older computers?

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is this group only about older computers?
Newsgroups: alt.folklore.computers
Date: Fri, 14 Jan 2022 17:42:29 -1000
Peter Flass <peter_flass@yahoo.com> writes:

We did a lot of COBOL on a 32K 360/30. One big report program I had to code overlays. It was my first experience with overlays and I didn't do a very good job of structuring it. I'd love to take what I know now and rewrite it. PL/I was more problematic.

trivia: at the end of the semester after taking a 2 semester hr intro fortran/computers class, I was hired as a student programmer to reimplement 1401 MPIO (tape<->unit record) on a 360/30 (64k, running os/360 PCP) .. given 360 princ-ops, assembler, a bunch of hardware manuals, and got to design my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc ... within a few weeks had a 2000 card assembler program that ran "stand-alone", loaded with the BPS loader. I then added an assembly option that would generate a version using os/360 GET/PUT/DCB macros. The stand-alone version assembled (360/30, os/360 PCP) in 30mins ... the GET/PUT/DCB version assembled in an hour ... most of the added time was the DCB macros ... could watch it in the console lights when it was doing the DCB macros.

the univ. shut down the datacenter over the weekend and I had the place all to myself, although monday morning classes could be difficult after 48hrs w/o sleep. the univ. had been sold a 360/67 (for tss/360) to replace the 709/1401; the 360/30 temporarily replaced the 1401 in the transition to the 360/67. TSS/360 never came to production fruition and so the 360/67 ran as a 360/65 with os/360. Within a year of taking the intro class, I was hired fulltime responsible for os/360.

Three people from the cambridge science center came out the last week of Jan1968 to install CP67 (virtual machine precursor to VM370) ... it never really was production at the univ. but I got to play with it in my 48hr window when the univ. shut down the datacenter on the weekend and I had it all to myself. At that time, all the CP67 source was kept as files on OS/360 (and assembled on OS/360). The assembled text decks for the system were arranged in a card tray with the BPS loader in the front. The BPS loader would be IPLed and the CP67 initializing routine (CPINIT) would write the memory image to disk for "system" IPL. A few months later, they shipped a version where all the source files were on CMS and assembled on CMS.

I started by rewriting a lot of CP67 pathlengths, significantly cutting the time for running OS/360 in a virtual machine. Old (1994 afc) post with part of a 1968 SHARE presentation on the reduction in CP67 pathlength:
https://www.garlic.com/~lynn/94.html#18

OS jobstream on bare machine: 323 secs; originally running in a CP67 virtual machine: 856 secs; CP67 overhead 533 CPU secs. After some of the pathlength rewrite, runtime: 435 secs, CP67 overhead 112 CPU secs ... reducing CP67 CPU overhead from 533 to 112 CPU secs, a reduction of 421 CPU secs.

I continued to do pathlength optimization along with a lot of other stuff, dynamic adaptive resource management & scheduling algorithm, new page replacement algorithms, optimized I/O for arm seek ordering and disk&drum rotation, etc.

At some point, I also started reorganizing a lot of the fixed kernel to make parts of it pageable (to reduce the fixed memory requirements) and ran into a problem with the BPS loader. Part of making pieces of the kernel pageable was splitting some modules up into 4k segments, which increased the number of TXT ESD entry symbols. The BPS loader had a table limit of 255 ESD entry symbols ... and I had all sorts of difficulty keeping the pageable kernel reorganization under 256 ESD symbols. Later, after graduating and joining the IBM Science Center, I was going through a card cabinet in the attic storage area ... and ran across the source for the BPS loader ... which I immediately collected and modified to support more than 255 ESD symbols.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm

trivia: in the morph of CP67->VM370 ... they greatly simplified and/or dropped lots of CP67 features, including SMP multiprocessor support and much of the stuff I had done as an undergraduate in the 60s. When I joined IBM, one of my hobbies was advanced production operating systems for internal datacenters ... the datacenters were then moving off of 360/67 to increasing numbers of 370s w/vm370. I spent part of 1974 putting a lot of the dropped CP67 stuff back into VM370 until I was ready to start shipping my CSC/VM for internal datacenters in 1975.

some old email
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

SMP posts
https://www.garlic.com/~lynn/subtopic.html#smp
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
page replacement posts
https://www.garlic.com/~lynn/subtopic.html#clock

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe System Meter

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe System Meter
Date: 15 Jan 2022
Blog: Facebook
360s started out rented/leased ... charges were based on system meter readings, and the meter ran whenever the processor and/or any channel was busy ... even internal machines had funny money charges.

There was lots of work on CP67 by the science center and some of the commercial online spinoffs from the science center to make machines available 7x24, dark room unattended ... as well as letting the system meter stop when things were idle ... this included some special channel programs that let channels go idle ... but still be able to immediately wake up whenever terminal characters were arriving.

Note: all processors and channels had to be idle for at least 400ms before the system meter would stop. trivia: years after IBM transitioned to selling machines ... MVS still had a timer task that woke up every 400ms ... making sure that the system meter would never stop.
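
A minimal sketch of that metering rule (the tick-by-tick timeline and workload below are made up for illustration): the meter accumulates whenever the CPU or any channel is busy and only stops after everything has been idle for a full 400ms, so a timer task that wakes every 400ms keeps it running continuously.

  # toy simulation: meter runs while CPU/any channel is busy, and keeps
  # running until 400ms of continuous idle has accumulated
  IDLE_THRESHOLD_MS = 400

  def metered_ms(busy, idle_threshold=IDLE_THRESHOLD_MS):
      """busy: list of booleans, one per millisecond, True when the CPU or
      any channel was busy during that millisecond."""
      charged = 0
      idle_run = idle_threshold          # start with the meter stopped
      for tick_busy in busy:
          if tick_busy:
              idle_run = 0
          else:
              idle_run += 1
          if idle_run < idle_threshold:  # meter runs while busy and until a
              charged += 1               # full 400ms of idle has accumulated
      return charged

  # an MVS-style timer task that touches the CPU every 400ms keeps idle_run
  # from ever reaching the threshold, so the meter never stops
  workload = ([True] + [False] * 399) * 10   # one busy ms every 400ms
  print(metered_ms(workload))                # charges all 4000 ms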

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
online commercial virtual machine service
https://www.garlic.com/~lynn/submain.html#timeshare

some recent posts mentioning "system meter"
https://www.garlic.com/~lynn/2021k.html#53 IBM Mainframe
https://www.garlic.com/~lynn/2021k.html#42 Clouds are service
https://www.garlic.com/~lynn/2021j.html#12 Home Computing
https://www.garlic.com/~lynn/2021i.html#94 bootstrap, was What is the oldest computer that could be used today for real work?
https://www.garlic.com/~lynn/2021f.html#16 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021b.html#3 Will The Cloud Take Down The Mainframe?
https://www.garlic.com/~lynn/2019d.html#19 Moonshot - IBM 360 ?
https://www.garlic.com/~lynn/2019b.html#66 IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud Hits Air Pocket
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2018f.html#111 Online Timsharing
https://www.garlic.com/~lynn/2018f.html#16 IBM Z and cloud
https://www.garlic.com/~lynn/2018c.html#78 z/VM Live Guest Relocation
https://www.garlic.com/~lynn/2018.html#4 upgrade
https://www.garlic.com/~lynn/2017i.html#65 When Working From Home Doesn't Work
https://www.garlic.com/~lynn/2017g.html#46 Windows 10 Pro automatic update
https://www.garlic.com/~lynn/2017.html#21 History of Mainframe Cloud

--
virtualization experience starting Jan1968, online at home since Mar1970

Capitol rioters' tears, remorse don't spare them from jail

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Capitol rioters' tears, remorse don't spare them from jail
Date: 15 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#2 Capitol rioters' tears, remorse don't spare them from jail
https://www.garlic.com/~lynn/2022.html#9 Capitol rioters' tears, remorse don't spare them from jail

Why is so little known about the 1930s coup attempt against FDR?
https://www.theguardian.com/commentisfree/2022/jan/11/trump-fdr-roosevelt-coup-attempt-1930s

Business leaders like JP Morgan and Irénée du Pont were accused by a retired major general of plotting to install a fascist dictator

.... snip ...

Business Plot & Prescott Bush
https://en.wikipedia.org/wiki/Business_Plot#Prescott_Bush

In July 2007, Scott Horton wrote an article in Harper's Magazine claiming that Prescott Bush, father of US President George H. W. Bush and grandfather of then-president George W. Bush, was to have been a "key liaison" between the 1933 Business Plotters and the newly emerged Nazi regime in Germany.[50]

... snip ...

John Foster Dulles played a major role in rebuilding Germany's economy, industry, and military from the 20s up through the early 40s
https://www.amazon.com/Brothers-Foster-Dulles-Allen-Secret-ebook/dp/B00BY5QX1K/
loc865-68:

In mid-1931 a consortium of American banks, eager to safeguard their investments in Germany, persuaded the German government to accept a loan of nearly $500 million to prevent default. Foster was their agent. His ties to the German government tightened after Hitler took power at the beginning of 1933 and appointed Foster's old friend Hjalmar Schacht as minister of economics.

loc905-7:

Foster was stunned by his brother's suggestion that Sullivan & Cromwell quit Germany. Many of his clients with interests there, including not just banks but corporations like Standard Oil and General Electric, wished Sullivan & Cromwell to remain active regardless of political conditions.

loc938-40:

At least one other senior partner at Sullivan & Cromwell, Eustace Seligman, was equally disturbed. In October 1939, six weeks after the Nazi invasion of Poland, he took the extraordinary step of sending Foster a formal memorandum disavowing what his old friend was saying about Nazism

... snip ...

From the law of unintended consequences: when the US 1943 Strategic Bombing program needed targets in Germany, they got plans and coordinates from Wall Street.

inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM HONE

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM HONE
Date: 16 Jan 2022
Blog: Facebook
One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters, and HONE was a long time customer. HONE started with CP67 and multiple datacenters (Ron Hubb, 1133), then transitioned to VM370 and in the mid-70s was consolidated in Palo Alto, with enhancements for single-system image, loosely-coupled operation (load balancing and fall-over across the complex). SEQUOIA was a large APL application that provided the screen interface for sales&marketing (sort of a super PROFS menu) and invoked all the other apps.

Then there was a series of former branch office managers promoted to hdqtrs positions that included HONE, who were horrified to find that HONE was VM370 based ... and believed their career would be made by mandating that HONE be migrated to MVS (believing all the IBM sales&marketing) ... the whole staff would be assigned to the effort ... after a year it would be declared a success, the person promoted (heads roll uphill), and things would settle back to VM370 until it was repeated. Then in the first half of the 80s, somebody decided that the reason HONE couldn't be converted to MVS was that HONE was running my enhanced VM370 systems. HONE was then directed to move to a vanilla, standard supported VM370 product ... because what would they do if I was hit by a bus (assuming that once HONE was running the standard product ... then it would be possible to convert it to MVS). I didn't have a lot to do with HONE after that.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

In the morph from CP67->VM370 they simplified and/or dropped a lot of CP67 features ... including multiprocessor support and lots of stuff I had done as an undergraduate. I then spent some of 1974 migrating a lot of CP67 stuff to VM370 before being able to start shipping (enhanced production) CSC/VM internally. Some old (CSC/VM) email:
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

CSC/VM included SPM (a superset of the later combination of SMSG, VMCF, and IUCV), the special message facility originally done by the Pisa Scientific Center for CP67 ... including catching anything that would otherwise be sent to the terminal. CSC/VM also included the autolog command that I originally did for automated benchmarking ... but which was quickly used (combined with SPM) for automated operator and automated (service) virtual machines (trivia: the internal RSCS/VNET network support, done by a co-worker at CSC and shipped to customers in 1976, included SPM support, even tho VM370 never shipped SPM support).

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

CSC/VM (VM/370 Release 2 base) didn't start with SMP multiprocessor support ... I then added SMP multiprocessor support to CSC/VM Release 3 base ... originally for HONE ... so they could upgrade all their POK machines to multiprocessor (SMP multiprocessor support then ships in VM370 Release 4). After transferring to SJR, CSC/VM becomes SJR/VM.

During the first part of the 70s was the Future System effort, which was completely different from 370 and was going to completely replace it (internal politics was killing off 370 efforts ... the lack of new 370 stuff during and after FS is credited with giving clone system makers their market foothold). When FS imploded there was a mad rush to reconstitute 370 efforts, including kicking off the quick&dirty 3033&3081 efforts in parallel. The head of POK also managed to convince corporate to kill the VM370 product and transfer all the people to POK for MVS/XA (supposedly so MVS/XA would ship on time); Endicott managed to save the VM370 product mission, but had to reconstitute a VM370 development group from scratch. I have an email exchange from Ron Hubb about a POK executive coming out to Palo Alto and telling HONE that VM370 would no longer be available on POK processors ... this caused such an uproar that he had to explain that HONE had misunderstood what he said (this was before the cycles with former branch managers believing they could have HONE moved to MVS).

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

I had a series of (usenet) postings in 2009 about features just announced for z/VM that had been running at HONE in 1979 ("From The Annals of Release No Software Before Its Time").
https://www.garlic.com/~lynn/2009p.html#43 From The Annals of Release No Software Before Its Time
https://www.garlic.com/~lynn/2009p.html#46 From The Annals of Release No Software Before Its Time

trivia: the CP67 autolog command & automated benchmarking were among the first features moved to VM370 ... I had scripts (originally CP67) for synthetic benchmarks for several kinds of workload profiles ... including various stress testing. Originally even moderate stress tests were guaranteed to crash VM370. So one of the next things to do was move the CP67 kernel serialization mechanism to VM370 to clean up lots of VM370 problems, including system failures and hung users (which required re-ipl to recover).

automated benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark

Note: the 23Jun1969 unbundling announcement started charging for SE services, maint, and software (although they made the case that kernel software should still be free). Part of SE training had been a sort of journeyman program as part of a large SE group at the customer site. After unbundling, they couldn't figure out how NOT to charge for trainee SEs at the customer. Thus was born HONE ... several CP67 HONE datacenters with branch office online access for SEs to practice running guest operating systems in virtual machines. The science center also ported apl\360 to cp67/cms for cms\apl, redid storage management from 16kbyte workspaces to large (demand page) virtual memory, and added an API for system services (like file i/o), enabling lots of real world applications. HONE started using it for online sales&marketing support applications ... which came to dominate all HONE activity (and guest operating system use withered away).

unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle

the mad rush to get stuff back into the 370 product pipeline contributed to the decision to pick up a little of the stuff from my R2-based CSC/VM for shipping in release 3 vm/370 (like the autolog command and misc. other VM370 & CMS changes). Then, because FS gave rise to the market foothold of the clone 370 makers, the decision was made to transition to charging for kernel software, starting with new kernel add-ons (that didn't directly involve hardware support) ... and my dynamic adaptive resource manager (that I originally did as an undergraduate in the 60s) was selected as the guinea pig (and I had to spend some amount of time with business people and lawyers about kernel software charging) ... which was first released for VM370 R3PLC4. I included misc. other things in that release ... including the kernel reorganization needed for multiprocessor support (but not the actual hardware support). Then came a quandary for VM370 Release 4 ... wanting to (finally) ship multiprocessor support ... which had to be free (direct hardware support), but was dependent on the kernel reorg shipping in the charged-for resource manager (violating the rule that free software couldn't have a charged-for prereq). The resolution was to move about 90% of the (charged-for) resource manager code into the base release 4 ... w/o changing the monthly fee for the resource manager.

SMP multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
resource manager posts
https://www.garlic.com/~lynn/subtopic.html#fairshare

Eventually, in the early 80s, the transition went from charged-for kernel software add-ons ... to the whole kernel (operating system) release being charged for.

... trivia: when facebook 1st moved to silicon valley, it was into a new bldg. built next door to the former consolidated US HONE datacenter.

--
virtualization experience starting Jan1968, online at home since Mar1970

CP67 and BPS Loader

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: CP67 and BPS Loader
Date: 16 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#25 CP67 and BPS Loader
recent BCS/IBU post
https://www.garlic.com/~lynn/2022.html#22 IBM IBU (Independent Business Unit)

Before I graduated, I had been hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services, consolidating all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities. I thought Renton was possibly the largest datacenter in the world, a couple hundred million in IBM 360s, 360/65s arriving faster than they could be installed, boxes constantly staged in the halls around the machine room. There was a disaster plan to replicate Renton up at the new 747 plant in Everett (if Mt. Rainier heats up, the resulting mud slide takes out Renton; the analysis was that it would cost Boeing more to be w/o Renton during the recovery period than the cost to replicate Renton). Lots of politics between the Renton director and the CFO ... the CFO just had a 360/30 in a small machine room up at Boeing Field (for payroll) ... they did enlarge the machine room and install a 360/67 for me to play with (when I'm not doing other stuff). Trivia: 747-3 was flying the skies of Seattle getting FAA flt. certification. When I graduated, I joined IBM rather than staying at Boeing.

In the 80s, I was introduced to John Boyd and sponsored his briefings at IBM (trivia: in 89/90 the Commandant of the Marine Corps leveraged Boyd for a make-over of the Corps ... at a time when IBM was also desperately in need of a make-over). Boyd would tell lots of stories; one was about being vocal that the electronics across the trail wouldn't work. Possibly as punishment, he was put in command of "spook base" about the time I was at Boeing (he would claim that it had the largest air conditioned bldg in that part of the world). One of his biographies talks about spook base being a $2.5B "windfall" for IBM (ten times Renton), but that must have included a lot of stuff besides 360 computers. Details talk about spook base having two 360/65s (while Renton had a whole sea of 360/65s). ref gone 404, but lives on at the wayback machine
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
also
https://en.wikipedia.org/wiki/Operation_Igloo_White

we continued to have Boyd meetings at Marine Corps Univ. in Quantico ... even after he passed in 1997.

Boyd URLs & posts
https://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

370/195

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 370/195
Date: 16 Jan 2022
Blog: Facebook
Before the decision was made to make all 370s virtual memory, I got con'ed into helping the 370/195 group look at adding dual i-stream support (i.e. it looked like a multiprocessor from the system standpoint). This references the death of ACS/360 (IBM execs were afraid that it would advance the state of the art too fast and they would lose control of the market)
https://people.cs.clemson.edu/~mark/acs_end.html
it also has a reference to the dual i-stream patents (aka "Sidebar: Multithreading")

The 195 had a 64-instruction pipeline but no branch prediction (or similar), so conditional branches drained the pipeline ... otherwise the 195 was a 10MIP machine ... but lots of conventional code (with conditional branches draining the pipeline) only ran at 5MIPS. An emulated two-processor machine, each i-stream running at 5MIPS, would then be capable of keeping the 10MIPS pipeline running at full speed.
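
A minimal throughput sketch of that argument (the 50% utilization figure is just the 5-of-10 MIPS implied above, not a measurement): a single i-stream with branch drains keeps the pipeline only about half full, while two independent i-streams interleaved into the same pipeline can approach the full rate.

  # toy model: effective MIPS = peak pipeline rate * fraction of slots filled
  PEAK_MIPS = 10.0

  def effective_mips(n_istreams, per_stream_utilization):
      """each i-stream fills a fraction of pipeline slots; the total is capped
      at the peak rate because the i-streams share one set of execution units."""
      return min(PEAK_MIPS, PEAK_MIPS * n_istreams * per_stream_utilization)

  print(effective_mips(1, 0.5))   # one i-stream, branch drains -> ~5 MIPS
  print(effective_mips(2, 0.5))   # two interleaved i-streams   -> ~10 MIPS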

They mentioned that the primary difference between the 360/195 and 370/195 was that instruction retry was added for the 370 (something about the 195 having so many circuits and the probability that any circuit would have a transient glitch). However, before it got very far along, the decision was made that all 370s would get virtual memory, and it was decided that the whole 195 might have to be redone to retrofit virtual memory.

other trivia: one of the final nails in the FS coffin ... FS ref:
http://www.jfsowa.com/computer/memo125.htm
was that if 370/195 apps were redone for FS and run on an FS machine made from the fastest available technology, they would have the throughput of a 370/145 ... about a factor of 30 times slowdown.

more trivia: a decade ago, a customer asked me if I could track down the decision to make all 370s virtual memory. It turns out that MVT storage management was so bad that regions had to be specified four times larger than nominally used ... as a result a typical 370/165 with 1mbyte of memory would only support four concurrent regions ... not sufficient to keep the 370/165 busy. Moving to 16mbyte virtual memory would allow an increase in the number of regions by a factor of four with little or no paging ("solving" the MVT storage management problem and keeping the 370/165 busy).
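
Worked out as simple arithmetic, a minimal sketch using the figures above (the per-region working-storage number is a hypothetical example, not from the original analysis):

  # MVT regions had to be declared ~4x the storage actually touched, so on a
  # 1mbyte 370/165 only ~4 declared regions fit; with a 16mbyte virtual
  # address space the declared size no longer has to fit real memory --
  # only the storage actually touched does
  REAL_MEMORY_KB = 1024        # typical 370/165
  OVERSTATEMENT  = 4           # declared region size / storage actually touched
  WORKING_SET_KB = 64          # hypothetical storage actually touched per region

  declared_kb     = WORKING_SET_KB * OVERSTATEMENT       # 256KB declared per region
  regions_real    = REAL_MEMORY_KB // declared_kb        # 4: declared sizes must fit real memory
  regions_virtual = REAL_MEMORY_KB // WORKING_SET_KB     # 16: only touched pages need real memory

  print(regions_real, regions_virtual, regions_virtual // regions_real)   # 4 16 4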

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP Multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

Tracking down decision for adding virtual memory to all 370s
https://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory

posts mentioning Boeing Huntsville modified MVT R13 to run in virtual memory mode on 360/67 (for similar reasons)
https://www.garlic.com/~lynn/2021k.html#106 IBM Future System
https://www.garlic.com/~lynn/2015c.html#47 The Stack Depth
https://www.garlic.com/~lynn/2014j.html#33 Univac 90 series info posted on bitsavers
https://www.garlic.com/~lynn/2013e.html#63 The Atlas 2 and its Slave Store
https://www.garlic.com/~lynn/2012k.html#55 Simulated PDP-11 Blinkenlight front panel for SimH
https://www.garlic.com/~lynn/2010b.html#61 Source code for s/360 [PUBLIC]
https://www.garlic.com/~lynn/2007v.html#11 IBM mainframe history, was Floating-point myths
https://www.garlic.com/~lynn/2007g.html#33 Wylbur and Paging
https://www.garlic.com/~lynn/2007f.html#6 IBM S/360 series operating systems history
https://www.garlic.com/~lynn/2006m.html#29 Mainframe Limericks
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2001h.html#26 TECO Critique
https://www.garlic.com/~lynn/2001h.html#14 Installing Fortran

--
virtualization experience starting Jan1968, online at home since Mar1970

KPMG auditors forged documents to avoid criticism, tribunal heard

From: Lynn Wheeler <lynn@garlic.com>
Subject: KPMG auditors forged documents to avoid criticism, tribunal heard
Date: 16 Jan 2022
Blog: Facebook
KPMG auditors forged documents to avoid criticism, tribunal heard. Big Four accounting firm tries to distance itself from actions of former staff who were auditing Carillion
https://www.ft.com/content/69f69d0e-6055-4301-9900-a0f4b0db55df

After ENRON, the rhetoric in congress was that Sarbanes-Oxley would prevent future ENRONs and guarantee that executives and auditors did jailtime ... however it required the SEC to do something. Possibly because even the GAO didn't believe the SEC was doing anything, the GAO started doing reports of fraudulent financial reporting, even showing that reporting fraud increased after SOX went into effect (and nobody doing jailtime). There were even jokes that SOX wouldn't actually improve things except to increase auditing requirements ... which was a gift to the audit industry, because politicians felt so badly that the ENRON fraud took down one of the major accounting firms.

ENRON scandal
https://en.wikipedia.org/wiki/Enron_scandal
SOX
https://en.wikipedia.org/wiki/Sarbanes%E2%80%93Oxley_Act

Economic Consequences of Auditor Reputation Loss: Evidence from the Audit Inspection Scandal
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3715005

ENRON Posts
https://www.garlic.com/~lynn/submisc.html#enron
SOX posts
https://www.garlic.com/~lynn/submisc.html#sarbanes-oxley

--
virtualization experience starting Jan1968, online at home since Mar1970

138/148

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 138/148
Date: 16 Jan 2022
Blog: Facebook
May1975, Endicott cons me into helping with virgil/tully (i.e. 138/148) ... they want some unique features to help fight the clone 370 makers. I do analysis to identify the 6kbytes of highest-executed operating system paths for dropping into (6kbytes of) microcode (with a 10:1 speedup); see the sketch after the link below. Old post with the analysis used for the selection:
https://www.garlic.com/~lynn/94.html#21
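
The linked post has the actual analysis; a minimal sketch of the general approach (the routine names, sizes and CPU percentages below are invented for illustration): rank kernel routines by CPU time per byte of code and take the highest-value routines until the 6kbyte microcode budget is used up.

  # toy greedy selection of kernel paths to drop into microcode
  # (routine names/sizes/percentages are made up -- see the linked
  # post for the real 1975 analysis)
  BUDGET  = 6 * 1024         # bytes of available microcode space
  SPEEDUP = 10.0             # ~10:1 for code moved into microcode

  routines = [               # (name, code bytes, fraction of total CPU)
      ("dispatch",  1200, 0.18),
      ("freestor",   900, 0.12),
      ("pagefault", 1500, 0.11),
      ("vio_ccw",   2200, 0.10),
      ("untrans",   1800, 0.06),
  ]

  chosen, used, covered = [], 0, 0.0
  for name, size, cpu in sorted(routines, key=lambda r: r[2] / r[1], reverse=True):
      if used + size <= BUDGET:
          chosen.append(name)
          used += size
          covered += cpu

  # Amdahl-style estimate of the overall speedup from the moved paths
  overall = 1.0 / ((1.0 - covered) + covered / SPEEDUP)
  print(chosen, used, f"{covered:.0%} of CPU moved, ~{overall:.2f}x overall")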

then they tried to get corporate approval to ship vm370 embedded in every machine made (sort of like the current PRSM/LPAR) ... but with POK lobbying corporate hard to kill the vm370 product, shut down the development group, and transfer all the people to POK to work on MVS/XA (claiming that otherwise MVS/XA wouldn't ship on time), corporate veto'ed the request. I did get con'ed into running around the world with the 138/148 business people presenting the 138/148 case to business people and product forecasters in the US regions and world trade. One thing I learned from the experience was that in world trade, country forecasts turned into firm orders to the manufacturing plants and countries had to "eat" bad forecasts/orders (and forecasters could lose their jobs). In the US, bad forecasts were "eaten" by the manufacturing plants ... and US forecasters tended to get promoted based on forecasting whatever corporate told them was "strategic" (regardless of whether it made any business sense). As a result, manufacturing plants tended to redo their own forecasts for the US regions.

360/370 mcode posts
https://www.garlic.com/~lynn/submain.html#mcode

--
virtualization experience starting Jan1968, online at home since Mar1970

1443 printer

From: Lynn Wheeler <lynn@garlic.com>
Subject: 1443 printer
Date: 17 Jan 2022
Blog: Facebook
The 1052-7 (selectric) used for the system operator console supposedly had a much higher duty cycle (amount of typing) than the 2741. As workloads increased on the univ os/360 (360/67 running as 360/65) ... the univ got a 1443 printer for the full volume of console output ... with more important messages (also) printed on the 1052-7.
https://en.wikipedia.org/wiki/IBM_1443

I had a 2741 at home from Mar1970 until Jun1977 (replaced by a CDI miniterm ... similar to a TI Silent 700).

1052-7 and/or 1443 posts
https://www.garlic.com/~lynn/2018b.html#106 Why so many 6s and 8s in the 70s?
https://www.garlic.com/~lynn/2017.html#38 Paper tape (was Re: Hidden Figures)
https://www.garlic.com/~lynn/2011f.html#28 US military spending has increased 81% since 2001
https://www.garlic.com/~lynn/2010i.html#54 Favourite computer history books?
https://www.garlic.com/~lynn/2010i.html#32 Death by Powerpoint
https://www.garlic.com/~lynn/2010h.html#55 IBM 029 service manual
https://www.garlic.com/~lynn/2010h.html#20 How many mainframes are there?
https://www.garlic.com/~lynn/2007q.html#34 what does xp do when system is copying
https://www.garlic.com/~lynn/2006k.html#54 Hey! Keep Your Hands Out Of My Abstraction Layer!
https://www.garlic.com/~lynn/2005s.html#21 MVCIN instruction
https://www.garlic.com/~lynn/2005i.html#5 Blinky lights WAS: The SR-71 Blackbird was designed ENTIRELY
https://www.garlic.com/~lynn/2004d.html#44 who were the original fortran installations?
https://www.garlic.com/~lynn/2004.html#42 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2001c.html#15 OS/360 (was LINUS for S/390)

--
virtualization experience starting Jan1968, online at home since Mar1970

Error Handling

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Error Handling
Date: 17 Jan 2022
Blog: Facebook
trivia: at the end of the semester after taking a 2 semester hr intro fortran/computers class, I was hired as a student programmer to reimplement 1401 MPIO (tape<->unit record) on a 360/30 .. given 360 princ-ops, assembler, a bunch of hardware manuals, and got to design my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc ... within a few weeks had a 2000 card assembler program. The univ. shut down the datacenter over the weekend and I had the place all to myself, although monday morning classes could be difficult after 48hrs w/o sleep. The univ. had been sold a 360/67 (for tss/360) to replace the 709/1401; the 360/30 temporarily replaced the 1401 in the transition to the 360/67. TSS/360 never came to production fruition and so the 360/67 ran as a 360/65 with os/360. Within a year of taking the intro class, I was hired fulltime responsible for os/360.

A decade later, I transferred from the IBM cambridge science center to IBM san jose research and got to wander around most of silicon valley (ibm and non-ibm). One of the places was bldg14 (disk engineering) and bldg15 (disk product test) across the street. At the time bldg14/15 were running stand-alone mainframe testing, prescheduled, 7x24, around the clock. They said that they had recently tried MVS but it had a 15min mean-time-between-failure (requiring manual re-ipl) in that environment. I offered to rewrite the I/O supervisor to make it bullet proof and never fail ... allowing any amount of ondemand, concurrent testing (greatly improving productivity). The downside was that they would start calling me whenever they had a problem, so I had to increasingly play disk engineer.

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

I wrote up a research report on the work and happened to mention the MVS 15min MTBF ... which brought down the wrath of the MVS group on my head (including trying to have me separated from the IBM company) ... so it didn't bother me when, as the 3880/3380s were about to ship, FE had a regression test of 57 errors that were likely to happen ... MVS was failing in all 57 cases (requiring manual re-ipl) and in 2/3rds of the cases there was no indication of what caused the failure. old email:
https://www.garlic.com/~lynn/2007.html#email801015

--
virtualization experience starting Jan1968, online at home since Mar1970

Error Handling

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Error Handling
Date: 17 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#35 Error Handling

Early on, when REX was first created, I wanted to demonstrate that it (before it was renamed REXX and released to customers) wasn't just another pretty scripting language ... so I chose the large assembler IPCS dump analyzer to redo in REX ... the objective was to take 3 months elapsed time working less than half time, resulting in ten times the function and running ten times faster (sleight of hand to make the interpreted REX implementation run faster than the assembler version). I finished early, so I developed a library of automated scripts that would search for common failure signatures.

I had expected the REX IPCS implementation, "DUMPRX", would be released to customers, in part since nearly every internal datacenter and IBM PSR made use of it. However, for various reasons it wasn't ... but I did get IBM permission to do user group presentations (at Baybunch and SHARE) on how I had done the implementation ... and within a few months, similar non-IBM versions began appearing.

Then in 1986, the 3090 service processor people wanted to include it in the 3092
https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html

The 3092 service processor started out as a highly modified VM370/CMS running on a 4331 with CMS/IOS3270 implementing all the service screens; this was then updated to a pair of 4361s. Some old email:
https://www.garlic.com/~lynn/2010e.html#email861031
https://www.garlic.com/~lynn/2010e.html#email861223

disclaimer: After joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters (the internal world-wide, online sales&marketing support "HONE" systems were long-time customers).

trivia: possibly part of the reason IBM wouldn't release it was that it was the beginning of the OCO wars (stop shipping source code) and I had done a primitive disassembler ... point at an address (possibly just a "symbolic" symbol) and it would provide a symbolic instruction display of the area. You could also provide a DSECT library ... point at an address and it would format the storage according to a specified DSECT. VMSHARE archives (TYMSHARE started offering their CMS-based computer conferencing free to SHARE in AUG1976 ... precursor to modern social media).
http://vm.marist.edu/~vmshare
search on "OCO wars"
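
Back to the DSECT idea above: a minimal sketch of the concept (the field layout, names and dump bytes are invented for illustration; this is not DUMPRX code): given a dump and a DSECT-style layout of named fields with offsets and lengths, point at an address and format the storage as those fields.

  # toy DSECT formatter (illustrative layout and field names only)
  SAMPLE_DSECT = [("QFPNT", 0, 4), ("QBPNT", 4, 4), ("PSTAT", 8, 1), ("OSTAT", 9, 1)]

  def format_dsect(dump: bytes, address: int, dsect):
      """format storage at 'address' in 'dump' according to the dsect layout."""
      lines = []
      for name, offset, length in dsect:
          raw = dump[address + offset : address + offset + length]
          value = int.from_bytes(raw, "big")
          lines.append(f"{name:<8} +{offset:02X} {raw.hex().upper():<8} ({value})")
      return "\n".join(lines)

  dump = bytes(range(64)) * 16            # stand-in for a real storage dump
  print(format_dsect(dump, 0x20, SAMPLE_DSECT))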

dumprx posts
https://www.garlic.com/~lynn/submain.html#dumprx
hone posts
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

Error Handling

From: Lynn Wheeler <lynn@garlic.com>
Subject: Error Handling
Date: 17 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#35 Error Handling
https://www.garlic.com/~lynn/2022.html#36 Error Handling

In the morph from CP67->VM370 they simplified and/or dropped a lot of CP67 features ... including multiprocessor support and lots of stuff I had done as an undergraduate. I then spent some of 1974 migrating a lot of CP67 stuff to VM370 before being able to start shipping (enhanced production) CSC/VM internally. Some old (CSC/VM) email:
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

part of the first changes moved to VM370 was the autolog command & other features originally done for CP67 automated benchmarks. It turns out that many of the stress test benchmarks were guaranteed to crash vm370 ... and the next set of features moved to vm370 were the CP67 kernel serialization mechanisms, which not only addressed system crashes but also "zombie/hung" users (that required re-ipl to clear).

cp67/vm370 automated benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM CICS

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM CICS
Date: 18 Jan 2022
Blog: Facebook
some from Yelavich pages (gone 404, but lives on at wayback machine):
https://web.archive.org/web/20050409124902/http://www.yelavich.com/cicshist.htm
https://web.archive.org/web/20071124013919/http://www.yelavich.com/history/toc.htm

Bob Yelavich also in mainframe hall of fame
https://www.enterprisesystemsmedia.com/mainframehalloffame
... I'm above him on list (alphabetic)

I took a two semester hr intro to computers/fortran and within a year the univ. hired me fulltime to be responsible for mainframe systems (360/67 running as 360/65 w/os360). Along the way, the univ. library got an ONR (office of naval research) grant to do an online catalog ... part of the money went for a 2321/datacell. The online catalog was also selected to be the betatest for the original CICS product ... and debugging CICS was added to my tasks (one of the library people was sent to a CICS class, but I had to do it w/o any class or source). The first problem was CICS wouldn't come up & there were no indicative error codes/messages ... it took me a couple days (especially w/o source): CICS had some hard coded (undocumented) BDAM options, the library had created the BDAM files with a different set of options, and open wasn't working. Had to patch the executable.

past CICS &/or BDAM posts
https://www.garlic.com/~lynn/submain.html#cics

Mid-90s, we had left IBM but were called into NIH to talk about NLM and indexing of articles. There were two guys there that had done the original implementation in the late 60s (they wrote their own monitor, not CICS) ... we got to shoot the bull about online library lookup and BDAM files (which they were still using nearly 30yrs later).

some past NIH/NLM posts
https://www.garlic.com/~lynn/2019c.html#28 CICS Turns 50 Monday, July 8
https://www.garlic.com/~lynn/2017g.html#57 Stopping the Internet of noise
https://www.garlic.com/~lynn/2015b.html#64 Do we really?
https://www.garlic.com/~lynn/2015b.html#63 Do we really?
https://www.garlic.com/~lynn/2009o.html#38 U.S. house decommissions its last mainframe, saves $730,000
https://www.garlic.com/~lynn/2005j.html#47 Where should the type information be?
https://www.garlic.com/~lynn/2005j.html#45 Where should the type information be?
https://www.garlic.com/~lynn/2005.html#23 Network databases

recent posts mentioning taking two semester hr intro computers/fortran
https://www.garlic.com/~lynn/2022.html#22 IBM IBU (Independent Business Unit)
https://www.garlic.com/~lynn/2022.html#13 Mainframe I/O
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2022.html#5 360 IPL
https://www.garlic.com/~lynn/2022.html#1 LLMPS, MPIO, DEBE
https://www.garlic.com/~lynn/2021k.html#124 IBM Clone Controllers
https://www.garlic.com/~lynn/2021k.html#81 IBM Fridays
https://www.garlic.com/~lynn/2021k.html#1 PCP, MFT, MVT OS/360, VS1, & VS2
https://www.garlic.com/~lynn/2021j.html#105 IBM CKD DASD and multi-track search
https://www.garlic.com/~lynn/2021j.html#77 IBM 370 and Future System
https://www.garlic.com/~lynn/2021j.html#68 MTS, 360/67, FS, Internet, SNA
https://www.garlic.com/~lynn/2021j.html#65 IBM DASD
https://www.garlic.com/~lynn/2021j.html#64 addressing and protection, was Paper about ISO C
https://www.garlic.com/~lynn/2021j.html#63 IBM 360s
https://www.garlic.com/~lynn/2021j.html#0 IBM Lost Opportunities
https://www.garlic.com/~lynn/2021i.html#89 IBM Downturn
https://www.garlic.com/~lynn/2021i.html#6 The Kill Chain: Defending America in the Future of High-Tech Warfare
https://www.garlic.com/~lynn/2021h.html#71 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021h.html#65 CSC, Virtual Machines, Internet
https://www.garlic.com/~lynn/2021h.html#64 WWII Pilot Barrel Rolls Boeing 707
https://www.garlic.com/~lynn/2021h.html#46 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021h.html#35 IBM/PC 12Aug1981
https://www.garlic.com/~lynn/2021g.html#17 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021f.html#79 Where Would We Be Without the Paper Punch Card?
https://www.garlic.com/~lynn/2021f.html#78 The Long-Forgotten Flight That Sent Boeing Off Course
https://www.garlic.com/~lynn/2021f.html#57 "Hollywood model" for dealing with engineers
https://www.garlic.com/~lynn/2021f.html#43 IBM Mainframe
https://www.garlic.com/~lynn/2021f.html#19 1401 MPIO
https://www.garlic.com/~lynn/2021f.html#16 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021e.html#55 SHARE (& GUIDE)
https://www.garlic.com/~lynn/2021e.html#47 Recode 1401 MPIO for 360/30
https://www.garlic.com/~lynn/2021e.html#43 Blank 80-column punch cards up for grabs
https://www.garlic.com/~lynn/2021e.html#38 Blank 80-column punch cards up for grabs
https://www.garlic.com/~lynn/2021e.html#19 Univac 90/30 DIAG instruction
https://www.garlic.com/~lynn/2021c.html#40 Teaching IBM class
https://www.garlic.com/~lynn/2021b.html#62 Early Computer Use
https://www.garlic.com/~lynn/2021b.html#27 DEBE?
https://www.garlic.com/~lynn/2021b.html#13 IBM Recruiting
https://www.garlic.com/~lynn/2021.html#81 Keypunch
https://www.garlic.com/~lynn/2021.html#80 CICS
https://www.garlic.com/~lynn/2021.html#78 Interactive Computing
https://www.garlic.com/~lynn/2021.html#67 IBM Education Classes
https://www.garlic.com/~lynn/2021.html#61 Mainframe IPL
https://www.garlic.com/~lynn/2021.html#48 IBM Quota
https://www.garlic.com/~lynn/2021.html#41 CADAM & Catia
https://www.garlic.com/~lynn/2020.html#45 Watch AI-controlled virtual fighters take on an Air Force pilot on August 18th
https://www.garlic.com/~lynn/2020.html#37 Early mainframe security
https://www.garlic.com/~lynn/2020.html#32 IBM TSS
https://www.garlic.com/~lynn/2020.html#30 Main memory as an I/O device
https://www.garlic.com/~lynn/2020.html#26 What's Fortran?!?!

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe I/O

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe I/O
Date: 19 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#13 Mainframe I/O
https://www.garlic.com/~lynn/2022.html#14 Mainframe I/O
https://www.garlic.com/~lynn/2022.html#15 Mainframe I/O
https://www.garlic.com/~lynn/2022.html#17 Mainframe I/O

Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books, 1993
http://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
.... reference to the "Future System" project 1st half of the 70s:

and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive.

... snip ...

... FS was completely different from 370 and was going to completely replace 370 ... internal politics was also shutting down 370 efforts ... and the lack of new 370 stuff is credited with giving clone makers their market foothold ... also IBM sales had to resort to an enormous amount of FUD to try and make up for the lack of new stuff. When FS finally imploded, there was a mad rush to get stuff back into the product pipelines ... kicking off the quick&dirty 3033&3081 efforts in parallel. periodically referenced FS info:
http://www.jfsowa.com/computer/memo125.htm

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Mythical Man Month

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mythical Man Month
Date: 19 Jan 2022
Blog: Facebook
The Mythical Man Month by Frederick Brooks of S/360 fame: "One good OS programmer is worth 100 mediocre ones"

Originally the (virtual memory) 360/67 was for TSS/360 ... but most of them were running CP67/CMS instead. Before TSS/360 was "decommitted", TSS/360 had 1200 people at a time when the science center CP67/CMS group had 12 people.

We would joke that the IBM mainstream projects cultivated a "success of failure" culture: the resulting problems led to throwing more people and money at them, increasing the size of the organizations ... and because the organization size was larger, the executives got larger compensation. The counter is that if you are able to anticipate problems and correct for them ... bureaucracies will start to believe they aren't difficult tasks and fail to appreciate all the extra effort.

In the late 70s I was drawing a comparison between hudson valley and "black holes" ... that they could get so large that nothing would ever be able to escape/ship. The analogy was missing something until I ran across a paper about how "black holes" could evaporate.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
success of failure posts
https://www.garlic.com/~lynn/submisc.html#success.of.failuree

some past posts referencing "black holes" and "bureaucracies"
https://www.garlic.com/~lynn/2021.html#64 SCIENTIST DISCOVERS NEW ELEMENT - ADMINISTRATIUM
https://www.garlic.com/~lynn/2017h.html#83 Bureaucracy
https://www.garlic.com/~lynn/2017g.html#33 Eliminating the systems programmer was Re: IBM cuts contractor billing by 15 percent (our else)
https://www.garlic.com/~lynn/2013n.html#11 50th anniversary S/360 coming up
https://www.garlic.com/~lynn/2011o.html#7 John R. Opel, RIP
https://www.garlic.com/~lynn/2011j.html#14 Innovation and iconoclasm
https://www.garlic.com/~lynn/2004o.html#53 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004b.html#29 The SOB that helped IT jobs move to India is dead!
https://www.garlic.com/~lynn/2001l.html#56 hammer
https://www.garlic.com/~lynn/99.html#162 What is "Firmware"

--
virtualization experience starting Jan1968, online at home since Mar1970

370/195

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 370/195
Date: 19 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#31 370/195

I would claim that FS picked up single-level-store from the TSS/360 implementation ... and would also claim that when I did a page-mapped filesystem for CMS, I learned what not to do from TSS/360 (and that is part of the reason I continued to work on 370 all through FS ... and periodically ridiculed what they were doing ... which wasn't exactly a career enhancing activity ... but that is true of lots of stuff I would do).

CMS paged-mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap

Part of S/38 was that all disks were part of a single filesystem (and files could be scatter allocated across multiple disks). As a result, the system was down for full system backup ... and any disk failure required disk replacement and then a full filesystem restore. No different for a single-disk filesystem ... but as the number of disks in a system increased, it quickly became disastrous.
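
A minimal sketch of why the single-filesystem-across-all-disks design scales badly (the per-drive failure rate is a hypothetical number, not an S/38 figure): the chance that some drive fails, forcing a full filesystem restore, grows quickly with the number of drives.

  # probability that at least one of N drives fails in a year:
  #   1 - (1 - p)**N    (p is a hypothetical annual per-drive failure rate)
  p = 0.05
  for n_drives in (1, 4, 8, 16, 32):
      p_any = 1 - (1 - p) ** n_drives
      print(f"{n_drives:2d} drives: {p_any:.0%} chance of a full-filesystem restore that year")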

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

At univ. in the 60s, playing with CP67/CMS on the 360/67 on weekends, I did a synthetic edit/compile/execute benchmark with the TSS/360 IBM SE. Early on, running with little or none of my (later) enhancements for CP67, CP67 emulated 35 CMS users and had better throughput and response time than TSS/360 on the same hardware with four emulated users.

... when I was playing disk engineer in bldg14&15, I would periodically run into Ken
https://en.wikipedia.org/wiki/RAID

In 1977, Norman Ken Ouchi at IBM filed a patent disclosing what was subsequently named RAID 4.[5]

... snip ...

playing disk enginneer posts
https://www.garlic.com/~lynn/subtopic.html#disk

S/38 was an early adopter later in the 80s. Late 80s/early 90s, doing HA/CMP (before leaving IBM), we did several "no single point of failure" audits ... and would periodically find little overlooked things:

In 1986, Clark et al. at IBM filed a patent disclosing what was subsequently named RAID 5.[7]

... snip ...

... previously posted from/about crypto museum
https://en.wikipedia.org/wiki/National_Cryptologic_Museum

IBM 7955 Tractor
https://en.wikipedia.org/wiki/File:HARVEST-tape.jpg
IBM 7550 Harvest
https://en.wikipedia.org/wiki/IBM_7950_Harvest

The TRACTOR tape system, part of the HARVEST system, was unique for its time. It included six tape drives, which handled 1.75-inch-wide (44 mm) tape in cartridges, along with a library mechanism that could fetch a cartridge from a library, mount it on a drive, and return it to the library. The transfer rates and library mechanism were balanced in performance such that the system could read two streams of data from tape, and write a third, for the entire capacity of the library, without any time wasted for tape handling.

... also

The Harvest-RYE system became an influential example for computer security; a 1972 review identified NSA's RYE as one of two "examples of early attempts at achieving 'multi-level' security."[5]

... snip ...

... one of the times I visited the museum they were playing a video about multi-level security ... I told them I wanted a copy of the tape to do a voice-over parody ... and somebody got me a copy.

misc. recent posts mentioning s/38
https://www.garlic.com/~lynn/2021k.html#132 IBM Clone Controllers
https://www.garlic.com/~lynn/2021k.html#45 Transaction Memory
https://www.garlic.com/~lynn/2021k.html#43 Transaction Memory
https://www.garlic.com/~lynn/2021j.html#49 IBM Downturn
https://www.garlic.com/~lynn/2021h.html#99 Why the IBM PC Used an Intel 8088
https://www.garlic.com/~lynn/2021h.html#48 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021d.html#47 Cloud Computing
https://www.garlic.com/~lynn/2021c.html#89 Silicon Valley
https://www.garlic.com/~lynn/2021c.html#16 IBM Wild Ducks
https://www.garlic.com/~lynn/2021b.html#68 IBM S/38
https://www.garlic.com/~lynn/2021b.html#49 Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2021b.html#48 Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2021b.html#7 IBM & Apple
https://www.garlic.com/~lynn/2021.html#50 does anyone recall any details about MVS/XA?
https://www.garlic.com/~lynn/2019d.html#10 IBM Midrange today?

--
virtualization experience starting Jan1968, online at home since Mar1970

Automated Benchmarking

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Automated Benchmarking
Date: 19 Jan 2022
Blog: Facebook
I had developed an automated benchmarking system that could run an arbitrary number of simulated users with lots of different workload profiles. It was part of one of my hobbies, providing enhanced production (CP67) operating systems for internal datacenters. In the morph from CP67->VM370, they dropped and/or simplified a lot of CP67 features (including hardware multiprocessor support and a bunch of stuff I did as an undergraduate in the 60s). In 1974, I started moving a lot of stuff to VM370 ... first was the support for automated benchmarking ... unfortunately many of the benchmarks were guaranteed to crash VM370 ... so next was moving the CP67 kernel serialization mechanism and other functions that were necessary to stop the system from constantly crashing (this also cleared up the hung/zombie user problems, which otherwise required a re-ipl to clear) ... eventually getting it up to production level for CSC/VM ... some old email
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

trivia: the 23Jun1969 unbundling announcement started charging for SE services, maint, and (application) software (but made the case that kernel software should still be free). Then, with Future System and the lack of newer 370s giving clone processor makers their market foothold, after FS imploded the decision was made to start transitioning to charging for kernel software (as one of the countermeasures to clone processors) ... first for kernel add-ons ... until most everything had been replaced by the early 80s and all kernel releases were charged for. Some of my CP67 stuff for dynamic adaptive resource management, which I had originally done as an undergraduate in the 60s and which was now part of my production CSC/VM (distribution for internal datacenters), was selected as the initial guinea pig (and I got to spend time with lawyers and business people on kernel charging policies). SHARE had had resolutions about adding the CP67 "wheeler scheduler" from the time VM370 appeared.

dynamic adaptive resource management & scheduler posts
https://www.garlic.com/~lynn/subtopic.html#fairshare

The science center had a significant amount of data on internal and customer system configurations and workload profiles ... for the kernel add-on's final product release, we defined an "envelope space" of number of users, kinds of users, kinds of workloads, I/O intensive, CPU intensive, paging intensive, etc (that represented all the known installation workload characteristics) ... and then defined 1000 benchmarks that were evenly distributed through the "envelope space". We also had an APL analytical model that would predict each result and then compare it with the actual results. After the first 1000 benchmarks, the analytical model was used to define the next 1000 benchmark characteristics (attempting to find anomalous combinations that weren't covered by the first 1000 benchmarks). The 2000 benchmarks for the initial product release took three months elapsed time.
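
A minimal sketch of that overall loop (the parameter names, ranges and the stand-in predict/run functions are placeholders, not the actual APL model or the real benchmark scripts): spread an initial set of benchmark configurations across the envelope, compare predicted against measured results, and aim the next round at the configurations where the model disagreed most.

  import itertools, random

  # hypothetical envelope dimensions (the real ones included numbers/kinds of
  # users and I/O-, CPU- and paging-intensive workload mixes)
  envelope = {
      "users":    [5, 20, 50, 100],
      "cpu_mix":  [0.1, 0.5, 0.9],
      "io_mix":   [0.1, 0.5, 0.9],
      "page_mix": [0.1, 0.5, 0.9],
  }

  def predict(cfg):            # stand-in for the APL analytical model
      return cfg["users"] * (1 + cfg["cpu_mix"] + cfg["io_mix"] + cfg["page_mix"])

  def run_benchmark(cfg):      # stand-in for actually running simulated users
      return predict(cfg) * random.uniform(0.8, 1.2)

  # round 1: configurations evenly distributed through the envelope
  round1 = [dict(zip(envelope, vals)) for vals in itertools.product(*envelope.values())]
  results = [(cfg, run_benchmark(cfg), predict(cfg)) for cfg in round1]

  # round 2: concentrate on the configurations where prediction error was worst
  worst = sorted(results, key=lambda r: abs(r[1] - r[2]) / r[2], reverse=True)[:10]
  round2 = [dict(cfg, users=cfg["users"] + random.choice([-2, 2])) for cfg, _, _ in worst]
  print(len(round1), "round-1 benchmarks;", len(round2), "follow-ups aimed at the worst errors")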

Automated Benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark
Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
23jun1969 unbundling announced posts
https://www.garlic.com/~lynn/submain.html#unbundle

later, after transferring to San Jose Research, I got to wander around lots of silicon valley .... including bldg14 (disk engineering) and bldg15 (disk product test) across the street. They were running prescheduled, stand-alone mainframe testing 7x24, around the clock ... they mentioned they had tried running with MVS ... but it had a 15min mean-time-between-failure in that environment (requiring manual re-ipl). I offered to rewrite the I/O supervisor to make it bullet proof and never fail ... enabling any amount of ondemand, concurrent testing ... greatly improving productivity. I later wrote an (internal) research report about what I needed to do and happened to mention the MVS 15min MTBF ... which brought down the wrath of the MVS organization on my head (informally I was told they tried to have me separated from the IBM company; when that didn't work, they tried other unpleasant things).

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

Automated Benchmarking

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Automated Benchmarking
Date: 19 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#42 Automated Benchmarking

more 23Jun1969 unbundling trivia ... it started charging for SE services; part of SE training had been a sort of journeyman program as part of a large SE group at the customer site. After unbundling, they couldn't figure out how NOT to charge for trainee SEs at the customer. Thus was born HONE ... several CP67 HONE datacenters with branch office online access for SEs to practice running guest operating systems in virtual machines. The science center also ported apl\360 to cp67/cms for cms\apl, redid storage management from 16kbyte workspaces to large (demand page) virtual memory and added an API for system services (like file i/o), enabling lots of real world applications. HONE started using it for online sales&marketing support applications ... which came to dominate all HONE activity (and guest operating system use withered away).

23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
HONE (&/or APL) posts
https://www.garlic.com/~lynn/subtopic.html#hone
Cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

In the mid-70s, HONE clones were starting to appear around the world and the US HONE datacenters were consolidated in Palo Alto ... when FACEBOOK 1st moved into silicon valley, it was into a new bldg built next door to the old consolidated US HONE datacenter

--
virtualization experience starting Jan1968, online at home since Mar1970

Automated Benchmarking

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Automated Benchmarking
Date: 19 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#42 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#43 Automated Benchmarking

This was the disk engineering and product test labs ... which had disks under test that threw all sorts of errors ... including errors that violated channel architecture (trivial example: control unit busy, then control unit end ... and the condition never actually clears; a solid loop unless you put in code to recognize it was in a loop and then reset the subchannel ... then there was a software trick to cause the controller to re-impl)
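
Roughly the kind of defensive logic involved, as a Python-ish sketch (illustrative only, not the actual I/O supervisor rewrite; all the names and the threshold are hypothetical):

# recognize a controller stuck presenting CU busy / CU end without the
# condition ever clearing, instead of looping forever
MAX_BUSY_END_CYCLES = 8            # hypothetical threshold for "solid loop"

def drive_io(start_io, reset_subchannel, reimpl_controller):
    cycles = 0
    while True:
        status = start_io()
        if status != "cu_busy_then_cu_end":
            return status          # normal completion (or a real error to log)
        cycles += 1
        if cycles > MAX_BUSY_END_CYCLES:
            reset_subchannel()     # break out of the solid loop
            reimpl_controller()    # the software trick to re-impl the controller
            return "controller_reimpled"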

Note a couple years later, when they were getting ready to release 3880/3380s, FE had a regression test of 57 errors that were likely to happen ... in all 57 cases, MVS would fail, requiring manual re-ipl ... and in 2/3rds of the cases, there was no indication of what caused the failure (since the MVS organization had brought down their wrath on my head, I wasn't exactly sorry).

Date: 10/15/80 13:29:38
From: wheeler

fyi; ref: I/O Reliability Enhancement; After running under VM for almost two years in the engineering labs, the 3380 hardware engineers recently did some live MVS testing.

They have a regression bucket of 57 hardware errors (hardware problems that are likely to occur & the FE must diagnose from the SCP error information provided).

It turns out that for 100% of the hardware errors, the MVS system hangs & must be re-IPL'ed. Also in 66% of the cases there is no indication of what the problem was that forced the re-IPL


... snip ... top of post, old email index

... after one scenario where I explained that they were doing something (on purpose) that violated channel architecture ... they argued ... and finally there was a conference call with the POK channel engineers, who explained that they were indeed violating channel architecture. After that they wanted me on call for channel architecture issues ... just part of them trying to make me increasingly spend my time playing disk engineer.

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

Automated Benchmarking

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Automated Benchmarking
Date: 19 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#42 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#43 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#44 Automated Benchmarking

... from long ago and far away

Date: 01/26/81 09:51:07
From: wheeler
To: <distribution>

re: vmdvbsy memo; xxxxxx has some nos. from benchmark done recently in POK where TPNS showed about 25-30% less terminal interactions than MONITOR showed Q1 interactions. TPNS showed longer average Q1 response times than the monitor data. The attention read sequence could easily account for the difference. Furthermore since the time between the attention interrupt and the termination of the terminal read is on the order of milliseconds, the Q1 interactive data is significantly skewed.


... snip ... top of post, old email index

the vmdvbsy memo was about a "bug" from the original cp67->vm370 morph that I hadn't earlier fixed in vm370. It really shows up in the early 80s ... where there was a VM370 fix to improve the throughput of ACP/TPF on multiprocessor machines (ACP/TPF didn't have hardware multiprocessor support and the "new" 308Xs were originally only going to be multiprocessor ... and there was concern that the whole airline business would move to the latest Amdahl single processor machine ... which already had approx. the same throughput as the two processor 3081) ... but it degraded throughput for nearly every other vm370 multiprocessor customer ... so as part of trying to help mask that degraded throughput they did something to tweak 3270 terminal response (which also had the indirect effect of masking the bug from the original cp67->vm370 morph). It was highlighted by a very large gov. (virtual machine) customer dating back to the CP67 days ... which was a 1200-baud ASCII terminal installation (no 3270s) ... so the 3270 terminal tweak (to mask multiprocessor degradation) had no effect at this customer.

The original cp67->vm370 bug involved how "long wait" from queue was decided (as part of dropping from queue) ... in cp67, it was based on the real device type ... in vm370, it was based on the virtual device type ... it wasn't a problem as long as the virtual and real types were the same ... but along came 3270s, where the virtual type was 3215 and the real type was 3270 ... and there were a whole bunch of queue drops that shouldn't have been occurring.
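
The bug, as a tiny illustrative sketch (not actual CP67/VM370 code; the "long wait" device set here is just for illustration):

# device types treated as "long wait" for queue-drop purposes -- illustrative
LONG_WAIT_TYPES = {"1052", "2741", "3215", "tty"}

def drop_from_queue_cp67(virtual_type, real_type):
    return real_type in LONG_WAIT_TYPES        # CP67: decided by the real device

def drop_from_queue_vm370(virtual_type, real_type):
    return virtual_type in LONG_WAIT_TYPES     # VM370: decided by the virtual device

# same answer while virtual and real types match; with 3270s the virtual type
# is "3215" and the real type is "3270", so the VM370 test produces queue
# drops that shouldn't be occurring
print(drop_from_queue_cp67("3215", "3270"))    # False
print(drop_from_queue_vm370("3215", "3270"))   # True  <-- the bug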

turns out I didn't know about the customer until after joining IBM ... back in the 60s, in retrospect, some of the CP67 changes IBM would suggest I should do (as undergraduate) possibly originated from them. Then after joining IBM, I was asked to teach computer&security classes at the agency (I've told the story that during a break, one of their people bragged that they knew where I was every day of my life back to birth, I guess they justified it because they ran so much of my software ... and it was before the "Church" committee) ... I never had clearance and never served/worked for the gov ... but they would sometimes treat me as if I did (this was about the time IBM got a new CSO, who had come from gov. service, at one time head of the presidential detail, and IBM asked me to run around with him talking about computer security).

some old email references
https://www.garlic.com/~lynn/2007.html#email801006b
https://www.garlic.com/~lynn/2007.html#email801008b
https://www.garlic.com/~lynn/2001f.html#email830420
https://www.garlic.com/~lynn/2006y.html#email860121

automated benchmarking
https://www.garlic.com/~lynn/submain.html#benchmark

some recent posts mentioning acp/tpf (& 3083)
https://www.garlic.com/~lynn/2021j.html#66 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#78 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#77 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#75 IBM ITPS
https://www.garlic.com/~lynn/2021g.html#70 the wonders of SABRE, was Magnetic Drum reservations 1952
https://www.garlic.com/~lynn/2021c.html#66 ACP/TPF 3083
https://www.garlic.com/~lynn/2021b.html#23 IBM Recruiting
https://www.garlic.com/~lynn/2021.html#74 Airline Reservation System
https://www.garlic.com/~lynn/2021.html#72 Airline Reservation System

--
virtualization experience starting Jan1968, online at home since Mar1970

Automated Benchmarking

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Automated Benchmarking
Date: 20 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#42 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#43 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#44 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#45 Automated Benchmarking

apl analytical model trivia: this had earlier been made available on HONE as the Performance Predictor ... SEs could enter configuration and workload profiles and ask "what-if" questions regarding changes (to configuration and/or workloads). A modified version was also used by the US consolidated HONE datacenter for (login) load balancing across the single-system-image, loosely-coupled (shared disk) operation.

For automated benchmarking ... predicting results and then comparing them with the data after the run ... it was used to confirm both the analytical model and my dynamic adaptive resource management & scheduling (for the first 1000 benchmarks, selected for a wide variety of configurations and workloads) ... and then a version was used to choose the next 1000 benchmarks, searching for possibly anomalous configurations and/or workloads.
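
Roughly the validation loop, as a Python sketch (illustrative only -- the real model was the APL analytical model; the threshold and structure here are assumptions):

# compare the model's prediction with the measured result for each benchmark
# and use the biggest disagreements to seed the next round of benchmarks
def relative_error(predicted, measured):
    return abs(predicted - measured) / measured

def next_round(benchmarks, predict, run, threshold=0.15):   # threshold is made up
    anomalies = []
    for bench in benchmarks:
        err = relative_error(predict(bench), run(bench))
        if err > threshold:                    # model and system disagree
            anomalies.append((err, bench))
    anomalies.sort(key=lambda pair: pair[0], reverse=True)
    # explore configurations/workloads near the worst disagreements next
    return [bench for _, bench in anomalies]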

automated benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark
Hone posts
https://www.garlic.com/~lynn/subtopic.html#hone

some more recent performance predictor posts
https://www.garlic.com/~lynn/2021k.html#121 Computer Performance
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021j.html#25 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021d.html#43 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021b.html#32 HONE story/history
https://www.garlic.com/~lynn/2019d.html#106 IBM HONE
https://www.garlic.com/~lynn/2019c.html#85 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2019c.html#80 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2019b.html#27 Online Computer Conferencing
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018d.html#2 Has Microsoft commuted suicide
https://www.garlic.com/~lynn/2018c.html#30 Bottlenecks and Capacity planning
https://www.garlic.com/~lynn/2017j.html#109 It's 1983: What computer would you buy?
https://www.garlic.com/~lynn/2017j.html#103 why VM, was thrashing
https://www.garlic.com/~lynn/2017h.html#68 Pareto efficiency
https://www.garlic.com/~lynn/2017d.html#43 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2017b.html#27 Virtualization's Past Helps Explain Its Current Importance
https://www.garlic.com/~lynn/2016c.html#5 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2016b.html#109 Bimodal Distribution
https://www.garlic.com/~lynn/2016b.html#54 CMS\APL
https://www.garlic.com/~lynn/2016b.html#36 Ransomware

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Conduct

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Conduct
Date: 20 Jan 2022
Blog: Facebook
... things started downhill well before Gerstner

I spent most of my time at IBM being told I didn't have a career, promotions, and/or raises (apparently for periodically offending IBM executives) ... so when I was asked to interview for assistant to the president of one of the 370 clone vendors (front for a clone maker on the other side of the pacific), I said "why not". During the interview they make a veiled reference to 811 documents (registered ibm confidential 370/xa documents, named for their Nov1978 publication date; I did have a whole drawer full ... double locks, surprise audits, etc) ... so I make a reference to recently submitting an update for the IBM Business Conduct Guidelines (required reading every year by employees), because I didn't think the ethical standards were strong enough. However, that wasn't the end of it. A couple years later, the federal gov. is suing the (parent) company for industrial espionage. Because I was on the visitor list, I get a 3hr interview with an FBI agent. I tell him my story and suggest somebody in plant site security may have been feeding names to the recruiter (since plant security has a list of everybody that has registered confidential documents ... for the surprise audits).

After joining IBM, I still got to attend user group meetings and drop by customers ... The director of one of the largest financial datacenters on the east coast liked me to stop in and talk technology. At one point the IBM branch manager horribly offended the customer and in retaliation they ordered an Amdahl machine (up until then Amdahl had been selling into technical/scientific and univ. markets but had yet to break into the true-blue commercial market, and this would be the first, lonely Amdahl in a vast sea of blue). I was then asked to go spend a year onsite at the customer (to help obfuscate why an Amdahl machine was being ordered). I talked it over with the customer and they said they would like to have me onsite, but it wouldn't make any difference about the order, and so I told IBM no. I was then told that the branch manager was a good sailing buddy of the IBM CEO, and if I refused, I could forget about having an IBM career, promotions, raises. Not long later, I transferred to IBM San Jose Research on the opposite coast (got to wander around most of silicon valley, ibm datacenters, customers, other computer makers)

... late70s/early80s, I was blamed for online computer conferencing on the internal network ... it really took off spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem ... something like 300 participated but there were claims that possibly 25,000 were reading it. Six copies of some 300 pages were printed, along with an Executive Summary and a Summary of Summary ... packaged in Tandem 3-ring binders and sent to the executive committee; a small piece of the Summary of Summary:

• The perception of many technical people in IBM is that the company is rapidly heading for disaster. Furthermore, people fear that this movement will not be appreciated until it begins more directly to affect revenue, at which point recovery may be impossible

• Many technical people are extremely frustrated with their management and with the way things are going in IBM. To an increasing extent, people are reacting to this by leaving IBM. Most of the contributors to the present discussion would prefer to stay with IBM and see the problems rectified. However, there is increasing skepticism that correction is possible or likely, given the apparent lack of commitment by management to take action

• There is a widespread perception that IBM management has failed to understand how to manage technical people and high-technology development in an extremely competitive environment.


... snip ...

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

... took another decade (1981-1992) ... IBM had gone into the red and was being reorganized into the 13 "baby blues" in preparation for breaking up the company .... reference gone behind paywall but mostly lives free at wayback machine
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM, but we get a call from the bowels of Armonk asking if we could help with the breakup of the company. Lots of business units were using supplier contracts in other units via MOUs. After the breakup, all of these contracts would be in different companies ... all of those MOUs would have to be cataloged and turned into their own contracts (however, before we get started, the board brings in Gerstner as a new CEO and reverses the breakup). Also we were hearing from former co-workers that top IBM executives were spending all their time shifting expenses from the following year to the current year. We asked our contact from the bowels of Armonk what was going on. He said that the current year had gone into the red and the executives wouldn't get a bonus. However, if they could shift enough expenses from the following year to the current year, putting the following year even just slightly into the black ... the way the executive bonus plan was written, they would get a bonus more than twice as large as any previous bonus (rewarded for taking the company into the red).

IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Career

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Career
Date: 21 Jan 2022
Blog: Facebook
At the end of a semester taking a two (semester) hr intro to fortran/computers, I was hired as a student programmer ... to reimplement 1401 MPIO on a 360/30 ... given lots of manuals and got to design&implement my own monitor, interrupt handlers, device drivers, error handling, storage management, etc. The univ. would shut down the datacenter over the weekend and I would have the whole place to myself ... although 48hrs w/o sleep would make monday morning classes hard. Then within a year of taking the intro class, I was hired fulltime responsible for os/360 (the 360/30 had replaced the 1401 as a transition step in replacing the 709/1401 with a 360/67 for tss/360; tss/360 never came to production fruition, so it ran as a 360/65 with os/360). Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services, consolidating all dataprocessing into an independent business unit to better monetize the investment. I thought the Renton datacenter was possibly the largest datacenter in the world, a couple hundred million in 360s, 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room. When I graduate, I join IBM (instead of staying at Boeing)

some recent posts mentioning BCS:
https://www.garlic.com/~lynn/2022.html#30 CP67 and BPS Loader
https://www.garlic.com/~lynn/2022.html#22 IBM IBU (Independent Business Unit)
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021k.html#70 'Flying Blind' Review: Downward Trajectory
https://www.garlic.com/~lynn/2021k.html#55 System Availability
https://www.garlic.com/~lynn/2021j.html#64 addressing and protection, was Paper about ISO C
https://www.garlic.com/~lynn/2021j.html#63 IBM 360s

In the late70s/early80s I was blamed for online computer conferencing (precursor to modern social media) on the internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s) ... when the corporate executive committee was told about it, 5of6 wanted to fire me.

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

At IBM I was periodically told I had no career, promotions, raises ... so when I was asked to interview for assistant to the president of one of the 370 clone vendors (sales & marketing front for a clone maker on the other side of the pacific), I said "why not". During the interview they make a veiled reference to 811 documents (registered ibm confidential 370/xa documents, named for their Nov1978 publication date; I did have a whole drawer full ... double locks, surprise audits, etc) ... so I make a reference to recently submitting an update for the IBM Business Conduct Guidelines (required reading every year by employees), because I didn't think the ethical standards were strong enough. However, that wasn't the end of it. A couple years later, the federal gov. is suing the (parent) company for industrial espionage. Because I was on a visitors list, I get a 3hr interview with an FBI agent. I tell him my story and suggest somebody in plant site security may have been feeding names to the recruiter (since plant security has a list of everybody that has registered confidential documents ... for the surprise audits).

some past posts about industrial espionage case
https://www.garlic.com/~lynn/2022.html#47 IBM Conduct
https://www.garlic.com/~lynn/2021k.html#125 IBM Clone Controllers
https://www.garlic.com/~lynn/2021b.html#12 IBM "811", 370/xa architecture
https://www.garlic.com/~lynn/2017f.html#35 Hitachi to Deliver New Mainframe Based on IBM z Systems in Japan
https://www.garlic.com/~lynn/2017e.html#63 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2014f.html#50 Beyond the EC12
https://www.garlic.com/~lynn/2013.html#61 Was MVS/SE designed to confound Amdahl?
https://www.garlic.com/~lynn/2011c.html#67 IBM Future System
https://www.garlic.com/~lynn/2010h.html#3 Far and near pointers on the 80286 and later

In the early 80s, I was (also) introduced to John Boyd and would sponsor his briefings at IBM. He had a lot of stories; one was being very vocal that the electronics across the trail wouldn't work ... so possibly as punishment, he is put in command of "spook base" (about the same time I'm at Boeing). One of Boyd's biographies claims "spook base" was a $2.5B windfall for IBM (ten times Renton, although descriptions say it only had two 360/65s, while Renton had a whole sea of them). ref gone 404, but lives on at wayback machine:
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
also
https://en.wikipedia.org/wiki/Operation_Igloo_White

The Commandant of the Marine Corps leverages Boyd in 89/90 for a corps make-over ... when IBM was also desperately in need of a make-over. Boyd passes in 1997, but it is the Marines at Arlington (the USAF had pretty much disowned Boyd). In the 50s, as instructor at Nellis, he was considered possibly the best fighter pilot in the world ... then he re-did the original F15 design (cutting weight in half), was behind the F16 & F18, and helped with the A10 design. Chuck's tribute
http://www.usni.org/magazines/proceedings/1997-07/genghis-john
for those w/o subscription
http://radio-weblogs.com/0107127/stories/2002/12/23/genghisJohnChuckSpinneysBioOfJohnBoyd.html
The tribute displays in the lobby of the Quantico library for various Marines include one for (USAF) Boyd, and we've continued to have Boyd conferences at Marine Corps University in Quantico.

I was surprised when the USAF dedicated "Boyd Hall" at Nellis (possibly because he was no longer around to harass them):

There are two career paths in front of you, and you have to choose which path you will follow. One path leads to promotions, titles, and positions of distinction.... The other path leads to doing things that are truly significant for the Air Force, but the rewards will quite often be a kick in the stomach because you may have to cross swords with the party line on occasion. You can't go down both paths, you have to choose. Do you want to be a man of distinction or do you want to do things that really influence the shape of the Air Force? To be or to do, that is the question.

... snip ...

Boyd posts & URLs
https://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

Acoustic Coupler

From: Lynn Wheeler <lynn@garlic.com>
Subject: Acoustic Coupler
Date: 22 Jan 2022
Blog: Facebook
acoustic (aka "audio") coupler box/lid to reduce external noise interference. note: three people from CSC came out to the univ the last week of jan1968 to install cp67 ... which had automagic terminal recognition and would use the controller SAD ccw to switch the terminal port scanner type between 1052 & 2741. The univ. got some number of ASCII/TTY terminals ... so I wrote ascii terminal support and extended the automagic terminal recognition to handle 1052, 2741, & TTY. trivia: when the box arrived for the IBM engineers to add an ASCII/TTY port scanner to the 360 terminal controller ... it was prominently labeled "HEATHKIT".

I then wanted to do a single dialup number (hunt group)
https://en.wikipedia.org/wiki/Line_hunting
for all dialup terminals ... however, it didn't quite work. While I could reset the correct port scanner type for each line, IBM had taken a shortcut and hardwired each port's line speed ... 1052&2741 were 134.5 baud, ascii/tty was 110 baud. Thus was born a univ. project to do our own clone controller, starting with building a channel interface board for an Interdata/3 programmed to emulate the IBM controller, with the addition that it could dynamically recognize terminal line speed. This was later updated with an Interdata/4 for the channel interface and a cluster of Interdata/3s to handle the port interfaces. Interdata (and later Perkin/Elmer) sold it commercially as an IBM clone controller. Four of us at the univ. get written up as responsible for (some part of the) clone controller business.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer
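
The "automagic" recognition, roughly, as a Python-ish sketch (illustrative only, not the actual CP67 terminal code; the scanner/probe operations are hypothetical stand-ins):

# try each port scanner type (via the controller SAD CCW) until the terminal
# answers; the clone controller additionally recognized line speed dynamically,
# which the IBM controller couldn't do (line speed was hardwired per port)
def identify_terminal(set_scanner_type, probe):
    for term_type in ("2741", "1052", "tty"):
        set_scanner_type(term_type)    # SAD CCW switches the port scanner type
        if probe():                    # short write/read to see if it responds
            return term_type
    return None

# on the IBM controller, 1052 & 2741 ports were hardwired at 134.5 baud and
# tty at 110 baud, so type recognition alone wasn't enough for a single
# dialup (hunt group) number covering all terminals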

I didn't get a dialup 2741 at home until after I graduated and joined IBM ... had that 2741 from March1970 until June1977, when it was replaced with an (ascii) 300 baud CDI miniterm (similar to a TI silent700).

When I got the 300 baud cdi miniterm at home, I also got an IBM tieline at home ... had the tieline first with the cdi miniterm, then a 1200 baud ibm 3101 glass teletype, then got an ibm 2400 baud encrypting modem card for my home/personal ibm/pc

trivia: from the law of unintended consequences, a major motivation for the (failed) future system project was as a countermeasure to clone controllers (make the interface so complex that competitors couldn't keep up), but because internal politics were shutting down 370 efforts, the lack of new 370 products during and after FS is credited with giving the clone 370 processor makers their market foothold .... aka the failed countermeasure for clone controllers (too complex even for IBM) was responsible for clone processors.

360 clone controllers (plug compatible maker) posts
https://www.garlic.com/~lynn/submain.html#360pcm
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

CSC had a 360/67 ... back when IBM leased/rented machines and even internal accounts were charged (funny) money. Datacenters then had to recover charges by billing their users (again funny money). Some time after joining IBM I was told that I was using more computer time than the whole rest of the organization and was asked could I do something about it; I said I could work less; it was never mentioned again.

cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

--
virtualization experience starting Jan1968, online at home since Mar1970

Science Fiction is a Luddite Literature

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Science Fiction is a Luddite Literature
Date: 22 Jan 2022
Blog: Facebook
Cory Doctorow: Science Fiction is a Luddite Literature
https://locusmag.com/2022/01/cory-doctorow-science-fiction-is-a-luddite-literature/

1980s, corporations with large staffs started reorganizing subsidiaries, moving profit into a subsidiary with very low staff (in part as a union negotiating tactic). Readily seen with commercial airlines: profit was moved into the computerized ticket subsidiary ... in the early 90s, fuel charges were driving airline operations to break even or into the red, while the computerized ticket subsidiary was making significant profit ... more than offsetting the "booked" losses in operations ... and the parent companies were still clearing significant profit ... other industries had their own variations on the tactic.

Around the turn of the century, this was extended to having the profit-making subsidiary incorporate in an offshore tax haven. The poster child was a large construction equipment maker that incorporated a distributor subsidiary in an offshore tax haven. The company had been making equipment in the US, selling to US customers, and shipping directly to the US customers. After incorporating the distributor subsidiary in the offshore tax haven, the manufacturing plant would "sell" the equipment to the distributor at cost, and the distributor would then sell it to US customers. Nothing changed about making in the US, selling to US customers, shipping directly to US customers; the products never left US shores and the money never actually left US shores, but the profit was being "booked" in the offshore tax haven.

tax evasion, tax fraud, tax avoidance, tax haven posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality

"financialization" posts
https://www.garlic.com/~lynn/2021k.html#11 General Electric Breaks Up
https://www.garlic.com/~lynn/2021g.html#64 Private Equity Now Buying Up Primary Care Practices
https://www.garlic.com/~lynn/2021g.html#63 Rising Rents Threaten to Prop Up Inflation
https://www.garlic.com/~lynn/2021e.html#81 How capitalism is reshaping cities
https://www.garlic.com/~lynn/2021.html#23 Best of Mankiw: Errors and Tangles in the World's Best-Selling Economics Textbooks
https://www.garlic.com/~lynn/2021.html#21 ESG Drives a Stake Through Friedman's Legacy
https://www.garlic.com/~lynn/2021.html#18 Trickle Down Economics Started it All
https://www.garlic.com/~lynn/2020.html#25 Huawei 5G networks
https://www.garlic.com/~lynn/2020.html#15 The Other 1 Percent": Morgan Stanley Spots A Market Ratio That Is "Unprecedented Even During The Tech Bubble"
https://www.garlic.com/~lynn/2020.html#2 Office jobs eroding
https://www.garlic.com/~lynn/2019e.html#99 Is America ready to tackle economic inequality?
https://www.garlic.com/~lynn/2019e.html#31 Milton Friedman's "Shareholder" Theory Was Wrong
https://www.garlic.com/~lynn/2019d.html#100 Destruction of Middle Class
https://www.garlic.com/~lynn/2019d.html#88 CEO compensation has grown 940% since 1978
https://www.garlic.com/~lynn/2019c.html#73 Wage Stagnation
https://www.garlic.com/~lynn/2019c.html#68 Wage Stagnation
https://www.garlic.com/~lynn/2019.html#0 How Harvard Business School Has Reshaped American Capitalism
https://www.garlic.com/~lynn/2018f.html#117 What Minimum-Wage Foes Got Wrong About Seattle
https://www.garlic.com/~lynn/2018f.html#108 Share Buybacks and the Contradictions of "Shareholder Capitalism"
https://www.garlic.com/~lynn/2018f.html#107 Politicians have caused a pay 'collapse' for the bottom 90 percent of workers, researchers say
https://www.garlic.com/~lynn/2018b.html#18 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2018b.html#10 Xerox company sold
https://www.garlic.com/~lynn/2018b.html#7 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2018b.html#5 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2018.html#104 Tax Cut for Stock Buybacks
https://www.garlic.com/~lynn/2018.html#52 How a Misfit Group of Computer Geeks and English Majors Transformed Wall Street
https://www.garlic.com/~lynn/2017i.html#67 Allied Radio catalog 1956
https://www.garlic.com/~lynn/2017i.html#60 When Working From Home Doesn't Work
https://www.garlic.com/~lynn/2017i.html#8 The Real Reason Wages Have Stagnated: Our Economy Is Optimized For Financialization
https://www.garlic.com/~lynn/2017i.html#7 The Real Reason Wages Have Stagnated: Our Economy Is Optimized For Financialization
https://www.garlic.com/~lynn/2017i.html#1 Any definitive reference for why the PDP-11 was little-endian?
https://www.garlic.com/~lynn/2017h.html#116 The Real Reason Wages Have Stagnated: Our Economy Is Optimized For Financialization
https://www.garlic.com/~lynn/2014h.html#3 The Decline and Fall of IBM
https://www.garlic.com/~lynn/2014g.html#111 The Decline and Fall of IBM
https://www.garlic.com/~lynn/2014g.html#94 Why Financialization Has Run Amok
https://www.garlic.com/~lynn/2014c.html#24 IBM sells Intel server business, company is doomed

--
virtualization experience starting Jan1968, online at home since Mar1970

Haiti, Smedley Butler, and the Rise of American Empire

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Haiti, Smedley Butler, and the Rise of American Empire
Date: 22 Jan 2022
Blog: Facebook
Haiti, Smedley Butler, and the Rise of American Empire. A new book sheds light on the life of the "Maverick Marine" who spearheaded U.S. interventions from Asia to Latin America.
https://theintercept.com/2022/01/22/deconstructed-haiti-smedley-butler-marine-book/

"I was a racketeer; a gangster for capitalism." So declared famed Marine Corps officer Smedley Butler in 1935, at the end of a long career spent blazing a path for American interests in Cuba, Nicaragua, China, the Philippines, Panama, and Haiti. In a new book on Butler's career, "Gangsters of Capitalism," Jonathan Katz details Butler's life and explains how it dovetails with the broader story of American empire at the turn of the century.

... snip ...

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

some recent posts mentioning Smedley
https://www.garlic.com/~lynn/2022.html#9 Capitol rioters' tears, remorse don't spare them from jail
https://www.garlic.com/~lynn/2021j.html#104 Who Knew ?
https://www.garlic.com/~lynn/2021i.html#56 "We are on the way to a right-wing coup:" Milley secured Nuclear Codes, Allayed China fears of Trump Strike
https://www.garlic.com/~lynn/2021i.html#54 The Kill Chain
https://www.garlic.com/~lynn/2021i.html#37 9/11 and the Saudi Connection. Mounting evidence supports allegations that Saudi Arabia helped fund the 9/11 attacks
https://www.garlic.com/~lynn/2021i.html#33 Afghanistan's Corruption Was Made in America
https://www.garlic.com/~lynn/2021h.html#101 The War in Afghanistan Is What Happens When McKinsey Types Run Everything
https://www.garlic.com/~lynn/2021h.html#96 The War in Afghanistan Is What Happens When McKinsey Types Run Everything
https://www.garlic.com/~lynn/2021h.html#38 $10,000 Invested in Defense Stocks When Afghanistan War Began Now Worth Almost $100,000
https://www.garlic.com/~lynn/2021g.html#67 Does America Like Losing Wars?
https://www.garlic.com/~lynn/2021g.html#50 Who Authorized America's Wars? And Why They Never End
https://www.garlic.com/~lynn/2021g.html#22 What America Didn't Understand About Its Longest War
https://www.garlic.com/~lynn/2021f.html#80 After WW2, US Antifa come home
https://www.garlic.com/~lynn/2021f.html#21 A People's Guide to the War Industry
https://www.garlic.com/~lynn/2021c.html#96 How Ike Led
https://www.garlic.com/~lynn/2021b.html#91 American Nazis Rally in New York City
https://www.garlic.com/~lynn/2021.html#66 Democracy is a threat to white supremacy--and that is the cause of America's crisis
https://www.garlic.com/~lynn/2021.html#32 Fascism

--
virtualization experience starting Jan1968, online at home since Mar1970

Acoustic Coupler

From: Lynn Wheeler <lynn@garlic.com>
Subject: Acoustic Coupler
Date: 22 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#49

The science center ported apl\360 to CP67/CMS for CMS\APL ... redoing the storage management used for 16kbyte (swapped) workspaces for large virtual memory (demand-paged) workspaces and also doing an API for using system services, like file I/O (enabling real-world apps). Among the early remote (dialup) CMS\APL users were the Armonk business planners. They sent tapes to cambridge of the most valuable IBM business information (detailed customer profiles, purchases, etc) and implemented APL business modeling using the data. In cambridge, we had to demonstrate extremely strong security ... in part because various professors, staff, and students from Boston/Cambridge area universities were also using the CSC system.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

Also with the 23jun1969 unbundling announcement, the company started charging for SE services, (application) software (it made the case that kernel software should still be free), maint, etc. SE training had included a sort of journeyman program with SE trainees part of a SE group at the customer site. After unbundling, the company couldn't figure out how *NOT* to charge for those trainee SEs at customers. As a result, HONE was "born" in the US ... IBM CP67 datacenters for SEs in branch offices to dialin (2741) and practice with guest operating systems running in virtual machines. DPD also started using CMS\APL to deploy APL-based sales&marketing support applications on CP67 (dialin 2741 with APL-ball) ... which eventually came to dominate all HONE activity (and the original purpose of SE guest operating system practice withered away).

23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

trivia: The US HONE datacenters were consolidated in silicon valley in the mid-70s. When facebook first moved into silicon valley, it was into a new bldg built next door to the (former) consolidated US HONE datacenter.

--
virtualization experience starting Jan1968, online at home since Mar1970

Automated Benchmarking

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Automated Benchmarking
Date: 22 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#42 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#43 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#44 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#45 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#46 Automated Benchmarking

Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books, 1993
http://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
.... reference to the "Future System" project 1st half of the 70s:

and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive.

... snip ...

... as I've periodically quoted before ... I continued to work on 360&370 all during the FS period and would periodically ridicule what they were doing, which wasn't a career enhancing activity ... "vigorous debate" (and "wild ducks") no longer tolerated ... and IBM on its way to going into the red and being reorganized into the "13 baby blues" in preparation for breaking up the company .... reference gone behind paywall but mostly lives free at wayback machine
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM, but we get a call from the bowels of Armonk asking if we could help with the breakup of the company. Lots of business units were using supplier contracts in other units via MOUs. After the breakup, all of these contracts would be in different companies ... all of those MOUs would have to be cataloged and turned into their own contracts (however, before we get started, the board brings in Gerstner as a new CEO and reverses the breakup).

Also we were hearing from former co-workers that top IBM executives were spending all their time shifting expenses from the following year to the current year. We asked our contact from the bowels of Armonk what was going on. He said that the current year had gone into the red and the executives wouldn't get a bonus. However, if they could shift enough expenses from the following year to the current year, putting the following year even just slightly into the black ... the way the executive bonus plan was written, they would get a bonus more than twice as large as any previous bonus (rewarded for taking the company into the red).

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Automated Benchmarking

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Automated Benchmarking
Date: 23 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#42 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#43 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#44 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#45 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#46 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#53 Automated Benchmarking

In the 70s&80s, mainframe hardware was a major part of the revenue ... that took a major hit in the late 80s and by 1992, IBM had gone into the red and was being reorganized into the 13 "baby blues" (the board eventually brought in a new CEO that reversed the breakup). By the turn of the century, mainframe hardware sales were something like 5% of revenue. In the EC12 timeframe there was analysis that mainframe hardware sales were 4% of revenue, but the mainframe organization was 25% of IBM revenue (software & services) and 40% of profit (the huge profit margin from software&services being the motivation to keep the mainframe market going).

IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

IBM has already been selling off much of its mainframe hardware business: disks are gone, chips for the mainframe are gone; the IBM logo on the mainframe processor is perceived as maintaining the mainframe software&services profit.

trivia: AMEX & KKR were in competition for the LBO/private-equity takeover of RJR and KKR wins. KKR runs into trouble with RJR and hires away the AMEX president to help. In 1992, when IBM had gone into the red and was reorging in preparation for breakup of the company ... AMEX spins off much of its dataprocessing & outsourcing business as "First Data" (in the largest IPO up until that time); part of it was that many of the payment card outsourcing customer banks felt that they were in competition with the AMEX card (and outsourcing to an AMEX company put them at a disadvantage).

We had left IBM and were brought in as consultants to a small client/server startup to help with what is frequently now called "electronic commerce". Having done electronic commerce, I was then involved in a lot of financial industry and financial standards bodies ... and also did a lot of work for FDC. Many of the FDC executives had previously reported to the new IBM CEO. I did a lot of mainframe work for FDC in various of the datacenters ... just one of FDC's datacenters (over 40 max-configured IBM mainframes, none older than 18months, constant rolling updates) by itself represented a significant percentage of IBM hardware revenue in the 1st part of this century ... and I viewed that the new IBM CEO thoroughly understood how dependent the financial industry was on IBM mainframes.

more trivia: 15yrs after FDC was the largest IPO (up until that time), KKR does an LBO/private-equity takeover of FDC in the largest LBO up until that time ... and the former AMEX president & IBM CEO had moved on to be CEO of a major KKR competitor.

private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity

In the late 70s & early 80s, I was blamed for online computer conferencing on the internal network ... folklore is that when the corporate executive committee was told, 5of6 wanted to fire me. One of the things discussed was that funding research&development is carried as an expense, reducing company/stock value, while buying something is carried as an asset.

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

This century, that was carried into stock buybacks ... which increase stock value ... improving executive bonuses ... Stockman on IBM as financial engineering company:
https://www.amazon.com/Great-Deformation-Corruption-Capitalism-America-ebook/dp/B00B3M3UK6/
pg464/loc9995-10000:

IBM was not the born-again growth machine trumpeted by the mob of Wall Street momo traders. It was actually a stock buyback contraption on steroids. During the five years ending in fiscal 2011, the company spent a staggering $67 billion repurchasing its own shares, a figure that was equal to 100 percent of its net income.

pg465/loc10014-17:

Total shareholder distributions, including dividends, amounted to $82 billion, or 122 percent, of net income over this five-year period. Likewise, during the last five years IBM spent less on capital investment than its depreciation and amortization charges, and also shrank its constant dollar spending for research and development by nearly 2 percent annually.

... snip ...

stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback

--
virtualization experience starting Jan1968, online at home since Mar1970

Precursor to current virtual machines and containers

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Precursor to current virtual machines and containers
Date: 23 Jan 2022
Blog: Facebook
One of the early uses for the IBM internal network was a distributed development project between the Cambridge Scientific Center and Endicott ... to implement a CP67 option for unannounced 370 "virtual memory" virtual machines (rather than just 360 virtual memory). One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters ... CSC ran "CP67L" on the real 360/67. There were "H" updates to add the 370 virtual machine option ... and CP67H would run in a 360/67 virtual machine under CP67L (in part because CSC also had non-IBM users, staff, professors, and students from boston/cambridge univs, so it was necessary to make sure none were exposed to any unannounced 370 "virtual memory" details). There were "I" updates where CP67 itself ran with the 370 virtual memory architecture (rather than 360/67 virtual memory) ... CP67I ran in a 370 virtual machine under CP67H, which ran in a 360/67 virtual machine under CP67L (running on the real 360/67). This was in regular operation a year before the first engineering 370 with operational virtual memory ... a 370/145 ... in fact CP67I was used for the initial test of the engineering 370/145. Three people came out from San Jose and added 3330 and 2305 device support to CP67I, resulting in "CP67SJ" ... which ran for a long time on internal 370s ... even after VM/370 became available.
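
i.e. the nesting was roughly:

real 360/67
  CP67L  (the production "L" system on the real hardware)
    360/67 virtual machine running CP67H  ("H" updates: adds the 370 virtual machine option)
      370 virtual machine running CP67I  ("I" updates: CP67 built to the 370 virtual memory architecture)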

In the morph of CP67->VM370, they dropped and/or simplified a lot of CP67 (including hardware multiprocessor support and lots of stuff I had done as undergraduate). In 1974, I started migrating things from CP67->VM370 ... initially the automated benchmarking done for CP67 ... however, the benchmarks were regularly guaranteed to crash VM370 ... so some of the next features migrated were the CP67 kernel serialization mechanisms ... to eliminate the crashes as well as the zombie/hung users (which required re-IPL to clear). Old email about bringing enhanced VM370 up to production quality for internal distributed CSC/VM
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

May1975, Endicott cons me into helping with a project for microcode enhancement of the 138/148 ... there were 6kbytes of microcode space to which instructions from VM370 kernel code could be moved (approx. byte-for-byte, i.e. 6kbytes of 370 instructions) ... initial analysis
https://www.garlic.com/~lynn/94.html#21
6kbytes of 370 kernel instructions accounted for 79.55% of kernel execution ... moved to microcode with a 10:1 speedup. Note: Endicott tried to ship VM370 preinstalled on every 138/148 (follow-on to 135/145 and precursor to 4331/4341) ... but POK was in the process of convincing corporate to kill the vm370 product and move all the people to POK to work on MVS/XA (endicott did manage to acquire the vm370 product mission, but had to reconstitute a development group from scratch) ... so corporate vetoed it.
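
The selection analysis, roughly, as a Python sketch (illustrative only, not the original study; the path data structure is made up):

# given measured kernel code paths (size in bytes, percent of kernel CPU),
# take the hottest paths until the 6kbyte microcode budget is used up --
# the initial analysis found 6kbytes covering 79.55% of kernel execution
def pick_for_microcode(paths, budget=6 * 1024):
    chosen, used, covered = [], 0, 0.0
    for name, size, pct in sorted(paths, key=lambda p: p[2], reverse=True):
        if used + size <= budget:
            chosen.append(name)
            used += size
            covered += pct
    return chosen, used, covered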

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
microcode posts
https://www.garlic.com/~lynn/submain.html#mcode

Early 80s, I got approval to give talks on how "ECPS" was done at user group meetings, including the monthly BAYBUNCH meetings hosted by SLAC. After the meetings, I got heavily grilled by the Amdahl folks, who were working on "MACROCODE" "hypervisor" support (done all in MACROCODE, not needing VM370). This was well before the 3090 announce ... IBM wasn't able to respond with PRSM/LPAR until 1988.
https://en.wikipedia.org/wiki/PR/SM

trivia: 3 people came out from CSC to install CP67 at the univ ... and I was allowed to play with it on the weekends. Within a few months I had rewritten a lot of code and presented results at a SHARE user group meeting about running OS/360 in a CP67 virtual machine. OS jobstream on the bare machine: 323 secs; originally running in CP67: 856 secs, CP67 overhead 533 CPU secs. After some CP67 pathlength rewrite, OS/360 runtime: 435 secs, CP67 overhead 112 CPU secs ... reducing CP67 CPU overhead from 533 to 112 CPU secs, a reduction of 421 CPU secs.

--
virtualization experience starting Jan1968, online at home since Mar1970

370 Architecture Redbook

From: Lynn Wheeler <lynn@garlic.com>
Subject: 370 Architecture Redbook
Date: 23 Jan 2022
Blog: Facebook
370 115/125 also violated the 370 architecture redbook (named for the red cover of its 3ring binder); the POO (Principles of Operation) was printed from a subset of the redbook using a CMS SCRIPT command line option ... the full redbook had lots of engineering and instruction justification, alternative considerations, etc. notes.

All the 360 instructions required that both the start and end storage addresses were prechecked for valid access. 370 introduced incrementally executed instructions ... MVCL & CLCL ... where each byte address was incrementally checked. 115/125 shipped MVCL&CLCL with the 360 rules. There was various code that put in the max 16mbyte length and was supposed to incrementally execute until the end of storage ... but the 115/125 would check the ending address and not execute.
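
The difference, as a rough sketch (illustrative only, not machine code; partial-completion details are simplified):

# 360-style: both the starting and ending storage addresses are prechecked
# before anything is moved; an invalid address means the instruction doesn't execute
def move_360_style(valid, dst, length):
    if not (valid(dst) and valid(dst + length - 1)):
        raise Exception("addressing/protection exception -- nothing moved")
    return length                    # ... all bytes moved ...

# 370 MVCL/CLCL: addresses are checked incrementally, byte by byte, so a long
# (up to 16mbyte) length can run until it hits the end of addressable storage,
# ending with a partial result (registers show how far it got)
def mvcl_370_style(valid, dst, length):
    moved = 0
    while moved < length and valid(dst + moved):
        moved += 1                   # ... move one byte ...
    return moved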

trivia: XA/370 architecture had lots of added features that were justified for specific MVS operational characteristics. The 4300s had their own added "E" architecture features specifically tailored for DOS&VS1 ... resulting in DOS/VSE

some past 370 architecture redbook posts
https://www.garlic.com/~lynn/2014.html#17 Literate JCL?
https://www.garlic.com/~lynn/2013c.html#37 PDP-10 byte instructions, was What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013b.html#20 New HD
https://www.garlic.com/~lynn/2012l.html#24 "execs" or "scripts"
https://www.garlic.com/~lynn/2012l.html#23 PDP-10 system calls, was 1132 printer history
https://www.garlic.com/~lynn/2012j.html#82 printer history Languages influenced by PL/1
https://www.garlic.com/~lynn/2012e.html#59 Word Length
https://www.garlic.com/~lynn/2012.html#64 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011p.html#89 Is there an SPF setting to turn CAPS ON like keyboard key?
https://www.garlic.com/~lynn/2011g.html#38 IBM Assembler manuals
https://www.garlic.com/~lynn/2011e.html#86 The first personal computer (PC)
https://www.garlic.com/~lynn/2010k.html#41 Unix systems and Serialization mechanism
https://www.garlic.com/~lynn/2008d.html#67 Throwaway cores
https://www.garlic.com/~lynn/2007v.html#21 It keeps getting uglier
https://www.garlic.com/~lynn/2007f.html#7 IBM S/360 series operating systems history
https://www.garlic.com/~lynn/2005n.html#48 Good System Architecture Sites?
https://www.garlic.com/~lynn/2005k.html#1 More on garbage
https://www.garlic.com/~lynn/2005j.html#43 A second look at memory access alignment
https://www.garlic.com/~lynn/2005j.html#39 A second look at memory access alignment
https://www.garlic.com/~lynn/2005i.html#40 Friday question: How far back is PLO instruction supported?
https://www.garlic.com/~lynn/2004k.html#45 August 23, 1957
https://www.garlic.com/~lynn/2003f.html#52 ECPS:VM DISPx instructions

--
virtualization experience starting Jan1968, online at home since Mar1970

Computer Security

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Computer Security
Date: 24 Jan 2022
Blog: Facebook
Turns out I didn't know about the gov. customer until after joining IBM ... however back in the 60s, in retrospect, some of the CP67 changes IBM would suggest I should do (as undergraduate) possibly originated from them. Then after joining IBM, I was asked to teach computer&security classes at the agency (I've told the story that during one break, one of their people bragged that they knew where I was every day of my life back to birth and challenged me to give any date; I guess they justified it because they ran so much of my software ... and it was before the "Church" committee) ... I never had clearance and never served/worked for the gov ... but they would sometimes treat me as if I did. The customer became very active at SHARE, which assigned 3letter installation codes; instead of selecting their agency letters, they selected "CAD" ... supposedly for cloak-and-dagger. They were also fairly active on VMSHARE ... you can periodically find their ("CAD") posts in the VMSHARE archives (Tymshare had started offering their CMS-based computer conferencing facility "free" to SHARE in Aug1976)
http://vm.marist.edu/~vmshare

The IBM science center was on the 4th flr, the IBM boston programming center was on the 3rd flr along with what was listed as lawyer offices in the lobby directory. However, the 3rd flr telco closet was on the IBM side, and the telco panels were clearly labeled "IBM" and the <3letter gov. agency>.

Not long later, IBM got a new CSO, who had come from gov. service, at one time head of the presidential detail, and IBM asked me to run around with him talking about computer security (hoping a little bit of the physical security would rub off).

The science center had ported apl\360 to CP67/CMS for CMS\APL ... redoing the storage management used for 16kbyte (swapped) workspaces for large virtual memory (demand-paged) workspaces and also doing an API for using system services, like file I/O (enabling real-world apps). Among the early remote (dialup) CMS\APL users were the Armonk business planners. They sent data to cambridge of the most valuable IBM business information (detailed customer profiles, purchases, etc) and implemented APL business modeling using the data. In cambridge, we had to demonstrate extremely strong security ... in part because various professors, staff, and students from Boston/Cambridge area universities were also using the CSC system.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

After transferring to SJR on the west coast ... got to wander around most IBM and non-IBM locations ... including TYMSHARE; on one visit, they demo'ed a game called "ADVENTURE" that they had found on the Stanford SAIL PDP10 system, copied to their PDP10 and then ported to VM370/CMS. I got a copy and made it available inside of IBM and started a "demo package" repository at SJR. We then had an audit by corporate security ... and got into a big battle when they directed that the "demo package" repository (games by any other name) had to be removed. It turns out most company login 3270 screens had "For Business Purposes Only" ... SJR screens had "For Management Approved Purposes Only". We had also placed 6670s in departmental areas around the building with colored paper in the alternate paper drawer for the separator page ... which was mostly blank, so we modified the 6670 driver to select random quotations for printing on the separator page, one of which was:

[Business Maxims:] Signs, real and imagined, which belong on the walls of the nation's offices:
1) Never Try to Teach a Pig to Sing; It Wastes Your Time and It Annoys the Pig.
2) Sometimes the Crowd IS Right.
3) Auditors Are the People Who Go in After the War Is Lost and Bayonet the Wounded.
4) To Err Is Human -- To Forgive Is Not Company Policy.


... the corporate auditors in their offshift search for unsecured classified material, happened to find a 6670 document with the above on a separator page ... and tried to make a big issue of it with executives ... claiming we had done it on purpose to ridicule them.

In the 80s, I had the HSDT project with T1 and faster speed links. Corporate had a requirement that all links leaving corporate locations had to be encrypted ... and I hated what I had to pay for T1 link encryptors, and faster encryptors were really hard to find. As a result I became involved in doing link encryptors that would handle at least 3mbyte/sec (i.e. mbyte not mbit) and cost no more than $100 to make. At first the corporate crypto group claimed it significantly compromised the DES standard. It took me three months to figure out how to explain to them that, rather than being significantly weaker than DES, it was actually much stronger (than the DES standard). It was a hollow victory. They said that there was only one organization in the world that was allowed to use such crypto ... I could make as many as I wanted ... but they all had to be sent somewhere. It was when I realized that there are three kinds of crypto: 1) the kind they don't care about, 2) the kind you can't do, 3) the kind you can only do for them (whoever they are).

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
Assurance posts
https://www.garlic.com/~lynn/subintegrity.html#assurance

some more recent posts mentioning 3kinds of crypto:
https://www.garlic.com/~lynn/2021e.html#75 WEB Security
https://www.garlic.com/~lynn/2021e.html#58 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2021d.html#17 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#70 IBM/BMI/MIB
https://www.garlic.com/~lynn/2021b.html#57 In the 1970s, Email Was Special
https://www.garlic.com/~lynn/2021b.html#22 IBM Recruiting
https://www.garlic.com/~lynn/2021b.html#8 IBM Travel
https://www.garlic.com/~lynn/2019e.html#86 5 milestones that created the internet, 50 years after the first network message
https://www.garlic.com/~lynn/2019b.html#23 Online Computer Conferencing
https://www.garlic.com/~lynn/2018d.html#33 Online History
https://www.garlic.com/~lynn/2018.html#10 Landline telephone service Disappearing in 20 States
https://www.garlic.com/~lynn/2017g.html#91 IBM Mainframe Ushers in New Era of Data Protection
https://www.garlic.com/~lynn/2017g.html#35 Eliminating the systems programmer was Re: IBM cuts contractor billing by 15 percent (our else)
https://www.garlic.com/~lynn/2017e.html#58 A flaw in the design; The Internet's founders saw its promise but didn't foresee users attacking one another
https://www.garlic.com/~lynn/2017b.html#44 More on Mannix and the computer
https://www.garlic.com/~lynn/2016h.html#0 Snowden
https://www.garlic.com/~lynn/2016f.html#106 How to Win the Cyberwar Against Russia
https://www.garlic.com/~lynn/2016e.html#31 How the internet was invented
https://www.garlic.com/~lynn/2016.html#101 Internal Network, NSFNET, Internet
https://www.garlic.com/~lynn/2015h.html#3 PROFS & GML
https://www.garlic.com/~lynn/2015f.html#39 GM to offer teen driver tracking to parents
https://www.garlic.com/~lynn/2015e.html#2 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2015c.html#85 On a lighter note, even the Holograms are demonstrating

--
virtualization experience starting Jan1968, online at home since Mar1970

Computer Security

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Computer Security
Date: 24 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#57 Computer Security

Early 70s, there was a leak of a classified document to an industry publication, describing the unannounced 370 virtual memory. One of the outcomes of the resulting investigation was that all IBM internal copying machines were retrofitted with an identification number that would appear on every page copied (to narrow the source of any future leaks). During Future System, as a countermeasure for leaked documents, they made them softcopy on specially secured VM370 systems (which could only be used from specially crippled CMS ids that could only view the documents from designated 3270s; no printing, etc).

During 1974, I was in the process of migrating lots of CP67 features/fixes/code to VM370 (that had been dropped in the morph of CP67->VM370) and had weekend test time in a datacenter with one of these specially secured (FS document) VM370 systems. I had gone in Friday afternoon to make sure everything was prepared for me being there that weekend. They started hassling me that I wouldn't be able to penetrate the specially secured VM370 system, even left alone in the machine room all weekend. It got so irritating that I eventually said "5mins", asking them to first disable all access other than in the machine room. From the front console, I then patched a byte in core ... that in effect crippled security. I then commented that the use of the front console would have to be changed to only allow authorized use (in order to counter such an attack ... they should also consider encrypting the files).

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

posts/comment in recent thread about migrating CP67 code/function/fixes to VM370
https://www.garlic.com/~lynn/2022.html#42 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#43 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#44 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#45 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#46 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#53 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#54 Automated Benchmarking

Scanned PDF file, example of the copying machine countermeasure (Gray's 1984 Fault Tolerance overview, "IBM SJ - 086" at bottom of each page):
https://www.garlic.com/~lynn/grayft84.pdf
also
https://web.archive.org/web/20080724051051/http://www.cs.berkeley.edu/~yelick/294-f00/papers/Gray85.txt

370 virtual memory trivia: a decade ago, a customer asked me if I could track down IBM's decision to make all 370s virtual memory ... I eventually found somebody involved ... basically MVT's storage management was so bad that regions had to be defined four times larger than normally used, so a typical 1mbyte 370/165 only supported four concurrently executing regions ... utilization not enough to justify the machine. Going to a large virtual address space would allow the number of regions to be increased by a factor of four with little or no paging. old post with pieces of their answer
https://www.garlic.com/~lynn/2011d.html#73

also mentions Ludlow working on the VS2 prototype offshift on a 360/67 (I would sometimes run into him working offshift on trips to POK). there actually wasn't a lot of code to get MVT up and running in virtual memory. The biggest amount of code was in SVC0/EXCP: now all the passed channel programs had virtual addresses, while channels required real addresses ... SVC0/EXCP had to make a copy of each passed channel program, replacing the virtual addresses with real addresses ... he borrowed CCWTRANS from CP67 (which implemented the same function for virtual machines) for the SVC0/EXCP code.
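
Conceptually, the translation walks the passed channel program, pins the referenced pages, and builds a shadow copy with real addresses. A minimal sketch (not the actual CCWTRANS/EXCP code; the flat toy page table and field layout below are assumptions for illustration):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE 4096u
#define NPAGES    16u

/* toy page table: virtual page -> real page (pages assumed already pinned) */
static uint32_t page_table[NPAGES];

typedef struct {            /* simplified S/360-style CCW */
    uint8_t  cmd;
    uint32_t addr;          /* data address (24-bit in practice) */
    uint8_t  flags;
    uint16_t count;
} ccw_t;

/* build a shadow channel program with real addresses from the virtual-address
   program the application passed; assumes each CCW's data area stays within
   one page (the real code had to split CCWs / build IDALs when it didn't) */
static void ccw_translate(const ccw_t *vprog, ccw_t *shadow, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        uint32_t vpage  = vprog[i].addr / PAGE_SIZE;
        uint32_t offset = vprog[i].addr % PAGE_SIZE;
        shadow[i]       = vprog[i];       /* copy cmd/flags/count unchanged */
        shadow[i].addr  = page_table[vpage] * PAGE_SIZE + offset;
    }
}

int main(void)
{
    for (uint32_t v = 0; v < NPAGES; v++)
        page_table[v] = NPAGES - 1 - v;   /* arbitrary toy virtual->real mapping */

    ccw_t vprog[1] = {{ 0x02 /* read */, 3 * PAGE_SIZE + 16, 0, 80 }};
    ccw_t shadow[1];
    ccw_translate(vprog, shadow, 1);
    printf("virtual 0x%05x -> real 0x%05x\n",
           (unsigned)vprog[0].addr, (unsigned)shadow[0].addr);
    return 0;
}

The channel is then started on the shadow copy, leaving the application's original channel program untouched.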

--
virtualization experience starting Jan1968, online at home since Mar1970

370 Architecture Redbook

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 370 Architecture Redbook
Date: 24 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#56 370 Architecture Redbook

4300s were Endicott low&mid range machines ... with 370 & "E-architecture" support. Large customers were then ordering hundreds of vm/4341s at a time for placing out in departmental areas (sort of the leading edge of the coming distributed computing tsunami). POK saw all the numbers and wanted to play in that market. Several problems ... 1) Endicott never intended to implement 370/xa on the 4341, 2) all the CKD disks were datacenter disks ... the only mid-range, non-datacenter disks were FBA, & MVS didn't have any FBA support (currently no CKD disks have been made for decades, everything being forced to simulate CKD on industry standard fixed-block disks), 3) customers were deploying tens of vm/4341s per support person, while MVS required tens of support people per system.

So for POK to play in that market: 1) Endicott had to really work to get 370/xa implemented on 4341, 2) come out with 3375 CKD disks simulated on 3370 FBA, 3) reduce number of MVS support people by factor of 100, from tens of people per system to tens of systems per person.
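
A minimal sketch of the kind of mapping CKD emulation implies (the geometry numbers and names below are illustrative assumptions, not the actual 3375-on-3370 implementation): an emulated (cylinder, head) track is flattened onto a run of fixed blocks.

#include <stdio.h>
#include <stdint.h>

/* illustrative geometry only -- not actual 3375/3370 values */
#define HEADS_PER_CYL    12u
#define BYTES_PER_TRACK  35616u   /* assumed emulated track capacity */
#define FBA_BLOCK_SIZE   512u
#define BLOCKS_PER_TRACK ((BYTES_PER_TRACK + FBA_BLOCK_SIZE - 1) / FBA_BLOCK_SIZE)

/* map an emulated CKD track address to the first fixed block backing it */
static uint32_t ckd_track_to_fba(uint32_t cyl, uint32_t head)
{
    uint32_t track = cyl * HEADS_PER_CYL + head;
    return track * BLOCKS_PER_TRACK;
}

int main(void)
{
    printf("cyl 100, head 3 -> starts at fixed block %u\n",
           (unsigned)ckd_track_to_fba(100, 3));
    return 0;
}

The emulation then has to fake the record-level CKD semantics (counts, keys, multi-track search) on top of that linear block range, which is where much of the real work is.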

DASD, CKD, FBA, multi-track search, etc posts
https://www.garlic.com/~lynn/submain.html#dasd

some recent posts mentioning departmental computers and/or distributed computing tsunami
https://www.garlic.com/~lynn/2021f.html#84 Mainframe mid-range computing market
https://www.garlic.com/~lynn/2021c.html#63 Distributed Computing
https://www.garlic.com/~lynn/2021c.html#47 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021b.html#55 In the 1970s, Email Was Special
https://www.garlic.com/~lynn/2021b.html#24 IBM Recruiting
https://www.garlic.com/~lynn/2020.html#38 Early mainframe security
https://www.garlic.com/~lynn/2019e.html#27 PC Market
https://www.garlic.com/~lynn/2019d.html#107 IBM HONE
https://www.garlic.com/~lynn/2019c.html#42 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2018f.html#93 ACS360 and FS
https://www.garlic.com/~lynn/2018e.html#100 The (broken) economics of OSS
https://www.garlic.com/~lynn/2018e.html#92 It's 1983: What computer would you buy?
https://www.garlic.com/~lynn/2018b.html#104 AW: mainframe distribution
https://www.garlic.com/~lynn/2018.html#41 VSAM usage for ancient disk models
https://www.garlic.com/~lynn/2018.html#24 1963 Timesharing: A Solution to Computer Bottlenecks
https://www.garlic.com/~lynn/2017j.html#88 Ferranti Atlas paging
https://www.garlic.com/~lynn/2017i.html#62 64 bit addressing into the future
https://www.garlic.com/~lynn/2017h.html#78 IBM Mag TAPE Selectric ad 1966
https://www.garlic.com/~lynn/2017h.html#26 The complete history of the IBM PC, part two: The DOS empire strikes; The real victor was Microsoft, which built an empire on the back of a shadily acquired MS-DOS
https://www.garlic.com/~lynn/2017c.html#94 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2017c.html#87 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2017c.html#50 Mainframes after Future System
https://www.garlic.com/~lynn/2017b.html#36 IBM LinuxONE Rockhopper
https://www.garlic.com/~lynn/2017.html#21 History of Mainframe Cloud

--
virtualization experience starting Jan1968, online at home since Mar1970

370/195

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 370/195
Date: 24 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#31 370/195
https://www.garlic.com/~lynn/2022.html#41 370/195

Some discussion of m91, m95, m195
http://www.quadibloc.com/comp/pan05.htm
360/195 console picture (along with discussion of 90, 91, 92, 95 and 195)
http://www.righto.com/2019/04/iconic-consoles-of-ibm-system360.html
370/195 console picture
https://www.computerhistory.org/collections/catalog/102682589

The 370/195 group had con'ed me into helping with hyperthreading ... simulating a two-processor multiprocessor, see the "Sidebar: Multithreading" here
https://people.cs.clemson.edu/~mark/acs_end.html

They said the primary difference between 360/195 and 370/195 was that instruction retry was added for 370 (to recover from intermittent errors).

The 370/195 pipeline fed execution units capable of 10MIPS ... but didn't have branch prediction or speculative execution ... so conditional branches drained the pipeline and as a result many codes ran at 5MIPS. The assumption was that two threads each running at 5MIPS would manage to keep the execution units operating at 10MIPS. However, once the decision was made to make all 370s virtual memory ... any new work for 370/195 got dropped (adding virtual memory to the 195 would have been a large, major effort)
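
A toy model of that reasoning (the branch rate and stall penalty below are assumptions for illustration, not 370/195 numbers): with a pipeline refill penalty after every conditional branch, a single stream keeps the issue logic busy only part of the time, while a second stream can issue during the other's stalls.

#include <stdio.h>

#define CYCLES       10000000L
#define BRANCH_RATE  4          /* assumed: 1 in 4 instructions is a branch */
#define BRANCH_STALL 4          /* assumed: pipeline refill penalty, cycles */

/* one stream: issue an instruction per cycle, stall after every branch */
static double one_thread(void)
{
    long issued = 0, cycle = 0;
    while (cycle < CYCLES) {
        issued++;
        cycle++;                              /* issue cycle */
        if (issued % BRANCH_RATE == 0)
            cycle += BRANCH_STALL;            /* pipeline drains, nothing issues */
    }
    return (double)issued / cycle;
}

/* two streams sharing one issue slot: while one waits out its branch
   penalty, the other can issue */
static double two_threads(void)
{
    long issued = 0, count[2] = {0, 0}, stall[2] = {0, 0};
    for (long cycle = 0; cycle < CYCLES; cycle++) {
        int used = 0;                         /* one issue slot per cycle */
        for (int t = 0; t < 2; t++) {
            if (stall[t] > 0)
                stall[t]--;                   /* waiting out its branch penalty */
            else if (!used) {
                issued++;
                if (++count[t] % BRANCH_RATE == 0)
                    stall[t] = BRANCH_STALL;  /* this thread stalls, other can run */
                used = 1;
            }
        }
    }
    return (double)issued / CYCLES;
}

int main(void)
{
    printf("one thread : %.2f of peak\n", one_thread());
    printf("two threads: %.2f of peak\n", two_threads());
    return 0;
}

With these parameters a single stream runs at about half of peak and two streams close to peak, mirroring the 5MIPS-each vs 10MIPS-combined assumption.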

multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

a few other 370/195 posts over the years
https://www.garlic.com/~lynn/2021d.html#28 IBM 370/195
https://www.garlic.com/~lynn/2019d.html#62 IBM 370/195
https://www.garlic.com/~lynn/2006o.html#4 How Many 360/195s and 370/195s were shipped?
https://www.garlic.com/~lynn/2006n.html#60 How Many 360/195s and 370/195s were shipped?
https://www.garlic.com/~lynn/94.html#39 IBM 370/195
https://www.garlic.com/~lynn/94.html#38 IBM 370/195

--
virtualization experience starting Jan1968, online at home since Mar1970

File Backup

From: Lynn Wheeler <lynn@garlic.com>
Subject: File Backup
Date: 25 Jan 2022
Blog: Facebook
In the 60s I started keeping archives of files ... fortunately tape capacity kept increasing, so in the mid-80s I had 20 years of archived files ... but to be on the safe side, I had the tape replicated with three copies .... but unfortunately all in the same tape library ... and Almaden Research went through a period where operations were mounting random tapes as scratch ... I lost a dozen tapes ... including all three copies of my 20yr archive. That was when I started keeping archive copies at multiple locations (including PC disks at home).

A few weeks before the "problem", a customer at Princeton asked me if I had a copy of the original cms multi-level source update implementation (originally done in execs, creating a temp file from the update and then applying subsequent updates creating a series of temp updated source files), and I managed to pull it off archive tape (before they all vanished) and send it off.

Note I did do CMSBACK in the late 70s for internal datacenters; a decade later, it was released to customers with workstation and PC clients as WSDF (workstation datasave facility), then picked up by GPD/ADSTAR and became ADSM (later renamed TSM).
https://www.ibm.com/support/pages/ibm-introduces-ibm-spectrum-protect-plus-vmware-and-hyper-v

one of my biggest internal clients was the consolidated IBM US HONE datacenter up in Palo Alto

I had done a special modification of CMS TAPE/VMFPLC to cut down on the interrecord gaps (appending the FST to the 1st file datablock instead of writing it as a separate record, and allowing a larger maximum record size for large files) ... it also included special processing for my page-mapped CMS filesystem to force I/O buffers to 4k page-aligned records.
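
Rough arithmetic on why fewer/larger physical records matter (the tape figures below are assumptions for illustration, not from the original work): interrecord gaps are fixed-length dead space, so capacity utilization rises quickly with block size.

#include <stdio.h>

/* assumed 6250bpi-class tape with a 0.3in interrecord gap -- illustrative only */
#define BPI        6250.0
#define GAP_INCHES 0.3

/* fraction of tape actually holding data for a given physical block size */
static double utilization(double block_bytes)
{
    double data_inches = block_bytes / BPI;
    return data_inches / (data_inches + GAP_INCHES);
}

int main(void)
{
    double sizes[] = {800, 4096, 65536};
    for (int i = 0; i < 3; i++)
        printf("%6.0f-byte blocks: %4.1f%% of the tape is data\n",
               sizes[i], 100.0 * utilization(sizes[i]));
    return 0;
}

Folding the FST into the first data block also saves a whole record (and its gap) per file, which adds up fast when archiving many small CMS files.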

Unfortunately when I lost my archive files ... they hadn't been backed up by CMSBACK ... only things on disk.

backup posts
https://www.garlic.com/~lynn/submain.html#backup
some old cmsback email
https://www.garlic.com/~lynn/lhwemail.html#cmsback
page-mapped cms filesystem
https://www.garlic.com/~lynn/submain.html#mmap
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

now I have multiple rotating USB terabyte drives with annual checkpoints for archive.

--
virtualization experience starting Jan1968, online at home since Mar1970

File Backup

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: File Backup
Date: 25 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#63 File Backup

some old exchange with princeton (melinda) ... and topic drift about HSDT for NSF supercomputer center interconnects and large processor cluster project

some old email exchange w/Melinda
https://www.garlic.com/~lynn/2006w.html#email850906
https://www.garlic.com/~lynn/2006w.html#email850906b
https://www.garlic.com/~lynn/2006w.html#email850908
other email with Melinda
https://www.garlic.com/~lynn/2007b.html#email860111
email with Melinda when we still thot "HSDT" would be interconnecting the NSF supercomputer centers
https://www.garlic.com/~lynn/2006t.html#email860407

... related from ACIS
https://www.garlic.com/~lynn/2011c.html#email851001
and regarding presentation to NSF director, Univ. Cal, NCAR, others
https://www.garlic.com/~lynn/2011c.html#email850425
more about HSDT/NSF
https://www.garlic.com/~lynn/2011c.html#email850425b
https://www.garlic.com/~lynn/2011c.html#email850426

above also mentions my project to do large processor clusters ... which eventually turns into HA/CMP.

Old post with NSF Preliminary Announcement (28Mar1986)
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

and before the RFP release ... with IBM internal politics not allowing us to bid, the NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid; RFP awarded 24Nov87)

backup posts
https://www.garlic.com/~lynn/submain.html#backup
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

Calma, 3277GA, 2250-4

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Calma, 3277GA, 2250-4
Date: 25 Jan 2022
Blog: Facebook
IBM had the 2250 in the 60s, then the 3277ga (tektronix tube wired into the side of a 3277) and then a logo'ed sanders for the 3250 in the 80s. IBM Los Gatos also had a room of GE Calma (which I believe had NSC A400s)
https://en.wikipedia.org/wiki/Calma

2250-1 had a controller that interfaced to an IBM channel. 2250-4 had an 1130 (2250-4 w/1130 was about the same price as a 2250-1) as its "controller" (at the science center, somebody had ported spacewar from the PDP1 to their 2250-4)

some past post mentioning Calma
https://www.garlic.com/~lynn/2016g.html#68 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016g.html#53 IBM Sales & Marketing
https://www.garlic.com/~lynn/2010c.html#91 Notes on two presentations by Gordon Bell ca. 1998
https://www.garlic.com/~lynn/2009.html#37 Graphics on a Text-Only Display
https://www.garlic.com/~lynn/2007m.html#58 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2006q.html#16 what's the difference between LF(Line Fee) and NL (New line) ?
https://www.garlic.com/~lynn/2005u.html#6 Fast action games on System/360+?
https://www.garlic.com/~lynn/2005r.html#24 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2002g.html#55 Multics hardware (was Re: "Soul of a New Machine" Computer?)

other posts mentioning 2250-4 and/or 3277ga
https://www.garlic.com/~lynn/2021k.html#47 IBM CSC, CMS\APL, IBM 2250, IBM 3277GA
https://www.garlic.com/~lynn/2021c.html#2 Colours on screen (mainframe history question)
https://www.garlic.com/~lynn/2021c.html#0 Colours on screen (mainframe history question) [EXTERNAL]
https://www.garlic.com/~lynn/2021b.html#62 Early Computer Use
https://www.garlic.com/~lynn/2017g.html#62 Play the Pentagon-Funded Video Game That Predates Pong
https://www.garlic.com/~lynn/2015d.html#35 Remember 3277?
https://www.garlic.com/~lynn/2014j.html#103 ? How programs in c language drew graphics directly to screen in old days without X or Framebuffer?
https://www.garlic.com/~lynn/2014g.html#77 Spacewar Oral History Research Project
https://www.garlic.com/~lynn/2013g.html#14 Tech Time Warp of the Week: The 50-Pound Portable PC, 1977
https://www.garlic.com/~lynn/2012l.html#77 zEC12, and previous generations, "why?" type question - GPU computing
https://www.garlic.com/~lynn/2012f.html#6 Burroughs B5000, B5500, B6500 videos
https://www.garlic.com/~lynn/2011o.html#21 The "IBM Displays" Memory Lane (Was: TSO SCREENSIZE)
https://www.garlic.com/~lynn/2011l.html#24 computer bootlaces
https://www.garlic.com/~lynn/2011j.html#4 Announcement of the disk drive (1956)
https://www.garlic.com/~lynn/2010l.html#12 Idiotic programming style edicts
https://www.garlic.com/~lynn/2009q.html#52 The 50th Anniversary of the Legendary IBM 1401
https://www.garlic.com/~lynn/2008q.html#41 TOPS-10
https://www.garlic.com/~lynn/2008o.html#77 PDP-1 Spacewar! program internals
https://www.garlic.com/~lynn/2008h.html#69 New test attempt
https://www.garlic.com/~lynn/2007r.html#8 IBM System/3 & 3277-1
https://www.garlic.com/~lynn/2007f.html#70 Is computer history taught now?
https://www.garlic.com/~lynn/2007.html#14 vm/sp1
https://www.garlic.com/~lynn/2006v.html#19 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006n.html#24 sorting was: The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2006e.html#28 MCTS
https://www.garlic.com/~lynn/2006e.html#9 terminals was: Caller ID "spoofing"
https://www.garlic.com/~lynn/2005f.html#56 1401-S, 1470 "last gasp" computers?
https://www.garlic.com/~lynn/2004m.html#8 Whatever happened to IBM's VM PC software?
https://www.garlic.com/~lynn/2004l.html#32 Shipwrecks
https://www.garlic.com/~lynn/2004l.html#27 Shipwrecks
https://www.garlic.com/~lynn/2003f.html#39 1130 Games WAS Re: Any DEC 340 Display System Doco ?
https://www.garlic.com/~lynn/2003d.html#38 The PDP-1 - games machine?
https://www.garlic.com/~lynn/2002p.html#29 Vector display systems
https://www.garlic.com/~lynn/2001i.html#51 DARPA was: Short Watson Biography
https://www.garlic.com/~lynn/2001f.html#49 any 70's era supercomputers that ran as slow as today's supercompu
https://www.garlic.com/~lynn/2001f.html#13 5-player Spacewar?

--
virtualization experience starting Jan1968, online at home since Mar1970

370/195

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 370/195
Date: 24 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#31 370/195
https://www.garlic.com/~lynn/2022.html#41 370/195
https://www.garlic.com/~lynn/2022.html#41 370/195
https://www.garlic.com/~lynn/2022.html#60 370/195

SJR had a 370/195 running MVT for most of the 70s ... but (for some jobs) the turnaround could be as much as 3 months. The person doing "air bearing" simulation for the (initially) 3370 "floating heads" design ... was getting a week or two turnaround (even with high priority designation).

I had done a rewrite of the I/O supervisor so bldg14 (disk engineering) and bldg15 (product test) could do any amount of concurrent ondemand mainframe testing (i periodically mentioned that they had tried MVS ... but it had 15min MTBF in that environment, requiring re-ipl). bldg15 tended to get very early engineering processors for disk I/O testing ... it got something like the #3 or #4 engineering 3033. Testing was only taking a percent or two of the 3033, so we found a 3830 controller and a couple strings of 3330 drives ... and put up our own private online service. Being good guys, we got the "air bearing" simulation set up on the 3033 ... and even tho the 3033 was 4.5MIPS (compared to the 370/195's 10MIPS), they could still get multiple turnarounds a day (compared to a couple turnarounds a month on the 370/195).

disk read/write head
https://en.wikipedia.org/wiki/Disk_read-and-write_head
thin-film (originally 3370 floating) heads
https://en.wikipedia.org/wiki/Disk_read-and-write_head#Thin-film_heads

posts about getting to play disk engineer in bldg14&15
https://www.garlic.com/~lynn/subtopic.html#disk

some more posts specifically mentioning "air bearing" simulation
https://www.garlic.com/~lynn/2021k.html#97 IBM Disks
https://www.garlic.com/~lynn/2021j.html#97 This chemist is reimagining the discovery of materials using AI and automation
https://www.garlic.com/~lynn/2021f.html#53 3380 disk capacity
https://www.garlic.com/~lynn/2021f.html#40 IBM Mainframe
https://www.garlic.com/~lynn/2021f.html#23 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021e.html#28 IBM Cottle Plant Site
https://www.garlic.com/~lynn/2021d.html#28 IBM 370/195
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380
https://www.garlic.com/~lynn/2019d.html#107 IBM HONE
https://www.garlic.com/~lynn/2019d.html#62 IBM 370/195
https://www.garlic.com/~lynn/2019c.html#70 2301, 2303, 2305-1, 2305-2, paging, etc
https://www.garlic.com/~lynn/2019b.html#52 S/360
https://www.garlic.com/~lynn/2018b.html#80 BYTE Magazine Pentomino Article
https://www.garlic.com/~lynn/2018.html#41 VSAM usage for ancient disk models
https://www.garlic.com/~lynn/2017g.html#95 Hard Drives Started Out as Massive Machines That Were Rented by the Month
https://www.garlic.com/~lynn/2017d.html#71 Software as a Replacement of Hardware
https://www.garlic.com/~lynn/2016f.html#39 what is 3380 E?
https://www.garlic.com/~lynn/2016c.html#3 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2015b.html#61 ou sont les VAXen d'antan, was Variable-Length Instructions that aren't
https://www.garlic.com/~lynn/2014l.html#78 Could this be the wrongest prediction of all time?
https://www.garlic.com/~lynn/2013g.html#23 Old data storage or data base
https://www.garlic.com/~lynn/2012o.html#70 bubble memory
https://www.garlic.com/~lynn/2012o.html#59 ISO documentation of IBM 3375, 3380 and 3390 track format

--
virtualization experience starting Jan1968, online at home since Mar1970

CMSBACK

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: CMSBACK
Date: 26 Jan 2022
Blog: Facebook
I did CMSBACK in the late 70s for internal datacenters; a decade later, workstation and PC clients were added and it was released to customers as WSDF (workstation datasave facility), then picked up by GPD/ADSTAR and became ADSM (later renamed TSM).
https://www.ibm.com/support/pages/ibm-introduces-ibm-spectrum-protect-plus-vmware-and-hyper-v

one of my biggest internal clients was the online, consolidated IBM US HONE datacenter up in Palo Alto (all the branch offices in the US, plus some number of other organizations, had access).

I had done a special modification of CMS TAPE/VMFPLC to cut down on the tape interrecord gaps (appending the FST to the 1st file datablock instead of a separate record, and allowing a larger maximum record size for large files) ... it also included special processing for my page-mapped CMS filesystem to force I/O buffers to be 4k page-aligned.

backup posts
https://www.garlic.com/~lynn/submain.html#backup
cms page-mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

Unfortunately, when I lost a dozen tapes in the Almaden tape library (they were having operational problems where random tapes were being mounted as scratch; the lost tapes included my three replicated archive tapes with stuff back to the 60s) ... the archive files hadn't been backed up by CMSBACK ... only things on disk

some old exchange with princeton (melinda) requesting something from my archive, shortly before the Almaden problem ... also including topic drift about HSDT for the NSF supercomputer center interconnects and the large processor cluster project

some old email exchange w/Melinda
https://www.garlic.com/~lynn/2006w.html#email850906
https://www.garlic.com/~lynn/2006w.html#email850906b
https://www.garlic.com/~lynn/2006w.html#email850908
other email with Melinda
https://www.garlic.com/~lynn/2007b.html#email860111
email with Melinda when we still thot "HSDT" would be interconnecting the NSF supercomputer centers
https://www.garlic.com/~lynn/2006t.html#email860407

... related from ACIS
https://www.garlic.com/~lynn/2011c.html#email851001
and regarding presentation to NSF director, Univ. Cal, NCAR, others
https://www.garlic.com/~lynn/2011c.html#email850425
more about HSDT/NSF
https://www.garlic.com/~lynn/2011c.html#email850425b
https://www.garlic.com/~lynn/2011c.html#email850426

above also mentions my project to do large processor clusters ... which eventually turns into HA/CMP.

Old post with NSF Preliminary Announcement (28Mar1986)
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

then the RFP was released ... and with IBM internal politics not allowing us to bid, the NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid; RFP awarded 24Nov87)

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

Melinda's VM History & other documents
http://www.leeandmelindavarian.com/Melinda#VMHist

there is also the VMSHARE archives (Tymshare started offering its CMS-based online computer conferencing system free to SHARE in Aug1976)
http://vm.marist.edu/~vmshare

--
virtualization experience starting Jan1968, online at home since Mar1970

HSDT, EARN, BITNET, Internet

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: HSDT, EARN, BITNET, Internet
Date: 27 Jan 2022
Blog: Facebook
... coworker at the IBM cambridge science center in the 70s was responsible for the technology used for the internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s) ... the same technology was also used for the corporate sponsored university BITNET (which was also larger than arpanet/internet for a time). we both transferred out to san jose research in 1977.

Trivia: My wife and I were doing HA/CMP (started out as HA/6000 for the nytimes to move their newspaper system ATEX from vax/cluster to ibm, but i changed the name to HA/CMP when I started doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors) and had CLaM under contract for some of the work. CSC had moved from 545tech sq to 101 Main street, and when IBM shut down all the science centers, CLaM took over the space at 101.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

In the late 70s & early 80s, I was blamed for online computer conferencing on the internal network; folklore is that when the corporate executive committee was told about it, 5of6 wanted to fire me. One of the people from IBM France had been on sabbatical at CSC in the early 70s ... and we stayed in touch ... even did a student (offspring) exchange one summer. He transferred from La Gaude to Paris to get EARN set up ... old archived email (previously posted to a.f.c.)
https://www.garlic.com/~lynn/2001h.html#email840320
another about presenting HSDT to EARN board (and our offspring exchange)
https://www.garlic.com/~lynn/2006w.html#email850607

bitnet/earn posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

some old email exchange w/Melinda (at Princeton)
https://www.garlic.com/~lynn/2006w.html#email850906
https://www.garlic.com/~lynn/2006w.html#email850906b
https://www.garlic.com/~lynn/2006w.html#email850908
other email with Melinda
https://www.garlic.com/~lynn/2007b.html#email860111
email with Melinda when we still thot "HSDT" would be interconnecting the NSF supercomputer centers
https://www.garlic.com/~lynn/2006t.html#email860407

... related from IBM ACIS
https://www.garlic.com/~lynn/2011c.html#email851001
and regarding presentations to NSF director, Univ. Cal, NCAR, others
https://www.garlic.com/~lynn/2011c.html#email850425
more about HSDT/NSF
https://www.garlic.com/~lynn/2011c.html#email850425b
https://www.garlic.com/~lynn/2011c.html#email850426

above also mentions my project to do large processor clusters ... which eventually turns into HA/CMP.

My HSDT (T1 and faster computer links) was supposed to get $20M to interconnect the NSF supercomputer centers; then congress cut the budget, some other things happened, and finally there was the preliminary announcement (and later an RFP). Old post with NSF preliminary announcement (28Mar1986)
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

then the RFP was released ... and with IBM internal politics not allowing us to bid, the NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid; RFP awarded 24Nov87).

as regional networks connect into NSFnet, it evolves into the NSFNET backbone, precursor to the modern internet. NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

Melinda's VM History & other documents
http://www.leeandmelindavarian.com/Melinda#VMHist

there is also the VMSHARE archives (Tymshare started offering its CMS-based online computer conferencing system free to SHARE in Aug1976)
http://vm.marist.edu/~vmshare

--
virtualization experience starting Jan1968, online at home since Mar1970

HSDT, EARN, BITNET, Internet

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: HSDT, EARN, BITNET, Internet
Date: 27 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#66 HSDT, EARN, BITNET, Internet

Other trivia ... making HSDT presentations at Berkeley in the early 80s ... got asked to help with the Berkeley 10M. They were also working on transitioning from film to CCD and wanted high-speed links for remote viewing. They were doing some testing at Lick (east of San Jose). Later they get $80m from the keck foundation and it becomes the keck 10m/observatory ... some posts with old archived email
https://www.garlic.com/~lynn/2004h.html#email830804
https://www.garlic.com/~lynn/2004h.html#email830822
https://www.garlic.com/~lynn/2004h.html#email830830
https://www.garlic.com/~lynn/2004h.html#email841121
https://www.garlic.com/~lynn/2004h.html#email860519

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

other past posts mentioning keck foundation and/or berkeley 10m
https://www.garlic.com/~lynn/2022.html#0 Internet
https://www.garlic.com/~lynn/2021k.html#56 Lick Observatory
https://www.garlic.com/~lynn/2021g.html#61 IBM HSDT & HA/CMP
https://www.garlic.com/~lynn/2021c.html#60 IBM CEO
https://www.garlic.com/~lynn/2021c.html#25 Too much for one lifetime? :-)
https://www.garlic.com/~lynn/2021b.html#25 IBM Recruiting
https://www.garlic.com/~lynn/2019e.html#88 5 milestones that created the internet, 50 years after the first network message
https://www.garlic.com/~lynn/2019c.html#50 Hawaii governor gives go ahead to build giant telescope on sacred Native volcano
https://www.garlic.com/~lynn/2019.html#47 Astronomy topic drift
https://www.garlic.com/~lynn/2019.html#33 Cluster Systems
https://www.garlic.com/~lynn/2018f.html#71 Is LINUX the inheritor of the Earth?
https://www.garlic.com/~lynn/2018f.html#22 A Tea Party Movement to Overhaul the Constitution Is Quietly Gaining
https://www.garlic.com/~lynn/2018d.html#76 George Lucas reveals his plan for Star Wars 7 through 9--and it was awful
https://www.garlic.com/~lynn/2018c.html#89 Earth's atmosphere just crossed another troubling climate change threshold
https://www.garlic.com/~lynn/2017g.html#51 Stopping the Internet of noise
https://www.garlic.com/~lynn/2016f.html#71 Under Hawaii's Starriest Skies, a Fight Over Sacred Ground
https://www.garlic.com/~lynn/2015.html#19 Spaceshot: 3,200-megapixel camera for powerful cosmos telescope moves forward
https://www.garlic.com/~lynn/2014g.html#50 Revamped PDP-11 in Honolulu or maybe Santa Fe
https://www.garlic.com/~lynn/2014.html#76 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014.html#8 We're About to Lose Net Neutrality -- And the Internet as We Know It
https://www.garlic.com/~lynn/2012k.html#86 OT: Physics question and Star Trek
https://www.garlic.com/~lynn/2012k.html#10 Slackware
https://www.garlic.com/~lynn/2011b.html#58 Other early NSFNET backbone
https://www.garlic.com/~lynn/2010i.html#24 Program Work Method Question
https://www.garlic.com/~lynn/2009o.html#55 TV Big Bang 10/12/09
https://www.garlic.com/~lynn/2009m.html#85 ATMs by the Numbers
https://www.garlic.com/~lynn/2009m.html#82 ATMs by the Numbers
https://www.garlic.com/~lynn/2008f.html#80 A Super-Efficient Light Bulb
https://www.garlic.com/~lynn/2007t.html#30 What do YOU call the # sign?
https://www.garlic.com/~lynn/2005l.html#9 Jack Kilby dead

--
virtualization experience starting Jan1968, online at home since Mar1970

Financialization of Housing in Europe Is Intensifying

From: Lynn Wheeler <lynn@garlic.com>
Subject: Financialization of Housing in Europe Is Intensifying
Date: 28 Jan 2022
Blog: Facebook
Financialization of Housing in Europe Is Intensifying, New Report Warns. Since the Global Financial Crisis residential property in European cities has become an attractive asset class for financial institutions, many in the U.S. The virus crisis has merely intensified this trend.
https://www.nakedcapitalism.com/2022/01/many-parts-of-eu-are-in-the-grip-of-an-insidious-housing-boom-new-report-warns.html

Blackstone owns at least 2,300 rental homes in Catalonia, according to the Tenants' Union. After "difficult" negotiations with the company over affordable rents for tenants and preventing evictions, the union's representatives say the fund has decided not to renew rental contracts unless the law forces it to. This decision could lead to hundreds or even thousands of "invisible evictions" -- i.e., tenants having to abandon apartments they have been living in for years because they are unable to renew their contracts.

... snip ...

How capitalism is reshaping cities (literally). Real estate investing has changed the look of buildings, cities, and the world. Author Matthew Soules explains how.
https://www.fastcompany.com/90637385/how-capitalism-is-reshaping-cities-literally

The very, very short explanation of the cause and effect of the 2008 financial crisis can be summarized in two words: real estate. Risky mortgages traded like stocks were blown into a bubble that popped, ravaging the finances and savings of people all around the world. In the aftermath, institutional investors bought up swathes of foreclosed properties, and pushed the financialization of housing into hyper-speed.

... snip ...

In Jan1999 I was asked to help try and stop the economic mess (we failed). I was told that some investment bankers had walked away "clean" from the S&L Crisis ... were then running Internet IPO Mills (invest a few million, hype, IPO for a few billion, needed to fail to leave the field open for the next round of IPOs), and were predicted to next get into securitized loans&mortgages (2001-2008 they sold more than $27T into the bond market).

A decade later, Jan2009, I'm asked to HTML'ize the Pecora Hearings (30s congressional hearings into the '29 crash that resulted in jail time and Glass-Steagall) with lots of internal HREFs and URLs between what happened this time and what happened then (comments that the new congress might have an appetite to do something about it). I work on it for awhile and then get a call that it won't be needed after all (references to the enormous mountains of wallstreet cash totally burying capitol hill).

economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
Pecora Hearings &/or Glass-Steagall posts
https://www.garlic.com/~lynn/submisc.html#Pecora&/orGlass-Steagall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Bus&Tag Channels

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Bus&Tag Channels
Date: 28 Jan 2022
Blog: Facebook
360/370 bus&tag channels were limited to a combination of 1.5mbyte/sec data rate and 200ft distance ... aka 2305-2 1.5mbyte/sec transfers weren't working at 200ft (the 2305-1 ran 3mbyte/sec with pairs of read/write heads and a special 2byte channel). Part of it was that the channel performed an end-to-end handshake for every byte transferred. Then "data streaming" channels did multiple byte transfers for every end-to-end handshake, increasing to 3mbyte/sec transfer and 400ft channel distance (later to 4.5mbytes/sec transfer).
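
A back-of-envelope sketch of why per-byte handshaking couples speed and distance (the ~1.5ns/ft propagation figure is an assumption for illustration, and controller electronics latency is ignored): every handshake costs a cable round trip, so the achievable rate falls with cable length unless more bytes move per handshake.

#include <stdio.h>

#define NS_PER_FT 1.5   /* assumed one-way propagation delay in the cable */

/* rough upper bound on transfer rate when each group of bytes_per_handshake
   bytes costs one round trip on the cable (electronics latency ignored) */
static double mbytes_per_sec(double cable_ft, double bytes_per_handshake)
{
    double round_trip_ns = 2.0 * cable_ft * NS_PER_FT;
    return bytes_per_handshake / round_trip_ns * 1000.0;   /* bytes/ns -> Mbyte/s */
}

int main(void)
{
    printf("200ft, 1 byte/handshake : %5.2f Mbyte/s\n", mbytes_per_sec(200, 1));
    printf("400ft, 1 byte/handshake : %5.2f Mbyte/s\n", mbytes_per_sec(400, 1));
    printf("400ft, 8 bytes/handshake: %5.2f Mbyte/s\n", mbytes_per_sec(400, 8));
    return 0;
}

The per-byte case at 200ft lands in the same ballpark as the 1.5mbyte/sec limit, and moving several bytes per handshake is what lets "data streaming" reach 3mbyte/sec at 400ft.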

In 1980, STL (since renamed SVL) was bursting at the seams and was moving 300 people from the IMS group to an offsite bldg. They had tried "remote 3270" and found the human factors unacceptable. I get con'ed into doing channel extender support so they could place channel-attached 3270 controllers at the offsite bldg ... with no perceptible human factors difference between offsite and in STL. The hardware vendor then tries to get IBM to release my support, but there is a group in POK playing with some serial stuff and they get it veto'ed (afraid that if it was in the market, it would make it harder to justify their stuff).

In 1988, I get asked to help LLNL (national lab) get some serial stuff they are playing with standardized, which quickly becomes the fibre channel standard (including some stuff I had done in 1980). Finally the POK group gets their stuff released in 1990 with ES/9000 as ESCON, when it is already obsolete: ESCON 17mbyte/sec, while the fibre channel standard started out 1gbit/sec, full-duplex, 2gbit/sec (200mbyte/sec) aggregate. Later some POK engineers become involved with the fibre channel standard and define a heavyweight protocol that radically reduces the standard throughput, which eventually is released as FICON.

The most recent published "peak I/O" benchmark I can find is for a max configured z196 getting 2M IOPS with 104 FICON (running over 104 FCS) ... using emulated CKD disks on industry standard fixed-block disks (no real CKD disks made for decades). About the same time there was an FCS announced for E5-2600 blades (standard in cloud megadatacenters at the time) claiming over a million IOPS (two such FCS having higher throughput than 104 FICON running over 104 FCS), using industry standard disks.
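
The per-channel arithmetic implied there, using only the figures quoted above:

#include <stdio.h>

int main(void)
{
    double z196_iops = 2e6;     /* max configured z196 "peak I/O" benchmark */
    int    ficon     = 104;     /* FICON channels, each running over an FCS */
    double fcs_iops  = 1e6;     /* single FCS claimed for E5-2600 blades */

    printf("per FICON: ~%.1fK IOPS\n", z196_iops / ficon / 1e3);
    printf("one FCS  : ~%.0fK IOPS (~%.0fx a FICON)\n",
           fcs_iops / 1e3, fcs_iops / (z196_iops / ficon));
    return 0;
}

i.e. a single native FCS claims roughly fifty times the per-channel throughput of a FICON layered on top of the same kind of link.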

channel extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON possts
https://www.garlic.com/~lynn/submisc.html#ficon
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

--
virtualization experience starting Jan1968, online at home since Mar1970

165/168/3033 & 370 virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 165/168/3033 & 370 virtual memory
Date: 28 Jan 2022
Blog: Facebook
A decade ago, a customer asked me if I could track down the decision to make all 370s virtual memory. It turns out that MVT storage management was so bad that regions had to be specified four times larger than nominally used ... as a result a typical 370/165 with 1mbyte of memory would only support four concurrent regions ... not sufficient to keep the 370/165 busy. Moving to 16mbyte virtual memory would allow an increase in the number of regions by a factor of four with little or no paging (helping solve the MVT storage management problems and providing multitasking sufficient to keep the 370/165 busy). Old archived a.f.c. post from a decade ago:
https://www.garlic.com/~lynn/2011d.html#73

above mentions Ludlow doing the initial prototype offshift on a 360/67. The page/swap tables & virtual memory didn't directly involve much code ... most of the code was hacking channel program fixes into SVC0/EXCP ... channel programs passed from applications now had virtual memory addresses while channels required "real" addresses ... needing to make a copy of the passed channel program, replacing virtual addresses with real. The implementation initially involved crafting CP67's (precursor to VM370) "CCWTRANS" (which performed a similar function for virtual machines) into SVC0/EXCP.

Initially VS2/SVS was very similar to running MVT in a CP67 16mbyte virtual machine ... then came the transition to VS2/MVS, which gave every application its own 16mbyte virtual address space (almost). The OS360 paradigm was a heavily pointer-passing API (so passed parameters have to be in the same address space). To handle this, an 8mbyte kernel image is mapped into every application address space, leaving 8mbytes for the application (aka an application SVC would invoke the kernel image running in the application address space). Then came the problem of application calls to sub-systems, which had also been moved into their own address spaces. To address this, the "common segment" was created, a 1mbyte area mapped at the same address in all virtual address spaces for parameters. Applications would allocate a parameter area in the "common segment" and pass that address to the called sub-system ... which could retrieve the parameters from the same address in its image of the "common segment". However, the need for parameter space is somewhat proportional to the number of concurrently executing applications plus the number of sub-systems ... it quickly exceeded 1mbyte, and the "common segment" quickly morphs into the "common system area" (or CSA). By the 3033 time-frame, customer CSAs were 5 and 6 mbytes (leaving 2-3mbytes for an application, aka 16mbyte total, minus the 8mbyte kernel image, minus a 6mbyte CSA) and threatening to become 8mbytes ... leaving no room at all for applications.
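
The squeeze is just arithmetic; a trivial sketch using the sizes mentioned above:

#include <stdio.h>

int main(void)
{
    const int total_mb = 16, kernel_mb = 8;   /* 16mbyte address space, 8mbyte kernel image */
    int csa_mb[] = {1, 5, 6, 8};              /* common segment/CSA growth over time */

    for (int i = 0; i < 4; i++)
        printf("CSA %dMB -> %2dMB left for the application's private area\n",
               csa_mb[i], total_mb - kernel_mb - csa_mb[i]);
    return 0;
}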

This was a major motivation for 370/xa "access registers", enabling sub-systems to directly address an application's parameter list in the application's (private) virtual memory (rather than in CSA). However, as mentioned, the need for CSA parameter space was getting so severe by 3033 time that a subset of "access registers" was retrofitted to the 3033 as "dual-address space" mode. Trivia: the person responsible left IBM in the early 80s and later was a major architect of Intel's "Itanium"
https://en.wikipedia.org/wiki/Itanium

165 was a cache machine with 2mic main memory and 370 microcode that averaged 2.1 machine cycles per 370 instruction. 168 was similar but main memory was 4-5 times faster and the 370 microcode was optimized, averaging 1.6 machine cycles per 370 instruction. 3033 was a quick&dirty effort (after Future System imploded) done in parallel with the 3081, starting out with 168 logic mapped to 20% faster chips ... along with microcode optimization getting to one machine cycle per 370 instruction. The 370/165 was about 2mips, the 370/168-1 2.5mips, the 370/168-3 (doubled cache size) 3mips, and the 3033 4.5mips. As the MIP rates increased ... it typically required more and more concurrently executing applications to keep the system busy (which was a driving factor for MVS's increasing CSA size).

Starting in the mid-70s, I started pontificating that processor throughput was increasing faster than disk throughput. In the early 80s, I wrote a memo that between the introduction of 360s and the 3081, the relative system throughput of disks had declined by a factor of ten (i.e. processors got 40-50 times faster, disks only got 3-5 times faster). A GPD/disk division executive took exception and assigned the division performance group to refute my claim. After a couple weeks, they came back and basically said I had slightly understated the "problem". That analysis was respun into a SHARE presentation on configuring disks for better system throughput (16Aug1984, SHARE 63, B874). This increasing throughput mismatch between processor and disk was increasing the requirement to have larger numbers of concurrently executing applications (and disks) to keep the CPU busy. Later, disk caching and electronic disks come into play.

more recently there have been several notes ... when main memory access latency is measured in count of processor cycles .... it is comparable to the 60s disk access latency when measured in count of 60s processor cycles (i.e. main memory is the new disk).

one of the reasons we started seeing technology like hyperthreading and out-of-order execution ... sort of modern hardware multitasking (something else to do when there has been a cache miss and the processor has to fetch from main memory).

dasd, ckd, fba, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

posts mentioning MVS "common segment"
https://www.garlic.com/~lynn/2021k.html#113 IBM Future System
https://www.garlic.com/~lynn/2021i.html#17 Versatile Cache from IBM
https://www.garlic.com/~lynn/2021h.html#70 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021b.html#63 Early Computer Use
https://www.garlic.com/~lynn/2020.html#36 IBM S/360 - 370
https://www.garlic.com/~lynn/2019d.html#115 Assembler :- PC Instruction
https://www.garlic.com/~lynn/2019b.html#94 MVS Boney Fingers
https://www.garlic.com/~lynn/2019b.html#25 Online Computer Conferencing
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2019.html#18 IBM assembler
https://www.garlic.com/~lynn/2018e.html#106 The (broken) economics of OSS
https://www.garlic.com/~lynn/2018c.html#23 VS History
https://www.garlic.com/~lynn/2018.html#92 S/360 addressing, not Honeywell 200
https://www.garlic.com/~lynn/2017i.html#48 64 bit addressing into the future
https://www.garlic.com/~lynn/2017e.html#40 Mainframe Family tree and chronology 2
https://www.garlic.com/~lynn/2017d.html#61 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2017b.html#8 BSAM vs QSAM
https://www.garlic.com/~lynn/2016h.html#111 Definition of "dense code"
https://www.garlic.com/~lynn/2016e.html#3 S/360 stacks, was self-modifying code, Is it a lost cause?
https://www.garlic.com/~lynn/2016.html#78 Mainframe Virtual Memory
https://www.garlic.com/~lynn/2015h.html#116 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2015g.html#90 IBM Embraces Virtual Memory -- Finally
https://www.garlic.com/~lynn/2015b.html#60 ou sont les VAXen d'antan, was Variable-Length Instructions that aren't
https://www.garlic.com/~lynn/2015b.html#40 OS/360
https://www.garlic.com/~lynn/2014k.html#82 Do we really need 64-bit DP or is 48-bit enough?
https://www.garlic.com/~lynn/2014k.html#78 Do we really need 64-bit DP or is 48-bit enough?
https://www.garlic.com/~lynn/2014k.html#39 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2014k.html#36 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2014i.html#86 z/OS physical memory usage with multiple copies of same load module at different virtual addresses
https://www.garlic.com/~lynn/2014g.html#83 Costs of core
https://www.garlic.com/~lynn/2014d.html#62 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2014d.html#54 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2013m.html#71 'Free Unix!': The world-changing proclamation made 30 years agotoday
https://www.garlic.com/~lynn/2013g.html#15 What Makes code storage management so cool?
https://www.garlic.com/~lynn/2013c.html#51 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012o.html#30 Regarding Time Sharing
https://www.garlic.com/~lynn/2012n.html#21 8-bit bytes and byte-addressed machines
https://www.garlic.com/~lynn/2012l.html#75 PDP-10 system calls, was 1132 printer history
https://www.garlic.com/~lynn/2012j.html#27 Simulated PDP-11 Blinkenlight front panel for SimH
https://www.garlic.com/~lynn/2012j.html#26 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012i.html#53 Operating System, what is it?
https://www.garlic.com/~lynn/2012h.html#57 How will mainframers retiring be different from Y2K?
https://www.garlic.com/~lynn/2012e.html#80 Word Length
https://www.garlic.com/~lynn/2012b.html#100 5 Byte Device Addresses?
https://www.garlic.com/~lynn/2012b.html#66 M68k add to memory is not a mistake any more
https://www.garlic.com/~lynn/2011l.html#45 segments and sharing, was 68000 assembly language programming
https://www.garlic.com/~lynn/2011k.html#11 Was there ever a DOS JCL reference like the Brown book?
https://www.garlic.com/~lynn/2011h.html#11 History of byte addressing
https://www.garlic.com/~lynn/2011f.html#17 New job for mainframes: Cloud platform
https://www.garlic.com/~lynn/2011d.html#72 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011b.html#20 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011.html#79 Speed of Old Hard Disks - adcons
https://www.garlic.com/~lynn/2010p.html#21 Dataspaces or 64 bit storage
https://www.garlic.com/~lynn/2010m.html#16 Region Size - Step or Jobcard
https://www.garlic.com/~lynn/2010g.html#83 Far and near pointers on the 80286 and later
https://www.garlic.com/~lynn/2010g.html#36 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2010e.html#76 LPARs: More or Less?
https://www.garlic.com/~lynn/2010e.html#75 LPARs: More or Less?
https://www.garlic.com/~lynn/2010d.html#81 LPARs: More or Less?
https://www.garlic.com/~lynn/2010c.html#41 Happy DEC-10 Day
https://www.garlic.com/~lynn/2009n.html#74 Best IEFACTRT (off topic)
https://www.garlic.com/~lynn/2009n.html#61 Evolution of Floating Point
https://www.garlic.com/~lynn/2009k.html#52 Hercules; more information requested
https://www.garlic.com/~lynn/2009h.html#33 My Vintage Dream PC
https://www.garlic.com/~lynn/2009c.html#59 Why do IBMers think disks are 'Direct Access'?
https://www.garlic.com/~lynn/2009.html#55 Graphics on a Text-Only Display
https://www.garlic.com/~lynn/2008r.html#32 What if the computers went back to the '70s too?
https://www.garlic.com/~lynn/2008p.html#40 Opsystems
https://www.garlic.com/~lynn/2008o.html#53 Old XDS Sigma stuff
https://www.garlic.com/~lynn/2008h.html#29 DB2 & z/OS Dissertation Research
https://www.garlic.com/~lynn/2008g.html#60 Different Implementations of VLIW
https://www.garlic.com/~lynn/2008e.html#33 IBM Preview of z/OS V1.10
https://www.garlic.com/~lynn/2008e.html#14 Kernels
https://www.garlic.com/~lynn/2008d.html#69 Regarding the virtual machines
https://www.garlic.com/~lynn/2008c.html#35 New Opcodes
https://www.garlic.com/~lynn/2007t.html#75 T3 Sues IBM To Break its Mainframe Monopoly
https://www.garlic.com/~lynn/2007t.html#16 segmentation or lack thereof
https://www.garlic.com/~lynn/2007r.html#69 CSA 'above the bar'
https://www.garlic.com/~lynn/2007r.html#56 CSA 'above the bar'
https://www.garlic.com/~lynn/2007q.html#68 Direction of Stack Growth
https://www.garlic.com/~lynn/2007q.html#26 Does software life begin at 40? IBM updates IMS database
https://www.garlic.com/~lynn/2007o.html#10 IBM 8000 series
https://www.garlic.com/~lynn/2007k.html#27 user level TCP implementation
https://www.garlic.com/~lynn/2007g.html#59 IBM to the PCM market(the sky is falling!!!the sky is falling!!)
https://www.garlic.com/~lynn/2006y.html#16 "The Elements of Programming Style"
https://www.garlic.com/~lynn/2006v.html#23 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006t.html#23 threads versus task
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006r.html#32 MIPS architecture question - Supervisor mode & who is using it?
https://www.garlic.com/~lynn/2006p.html#10 What part of z/OS is the OS?
https://www.garlic.com/~lynn/2006k.html#44 virtual memory
https://www.garlic.com/~lynn/2006j.html#38 The Pankian Metaphor
https://www.garlic.com/~lynn/2006i.html#33 virtual memory
https://www.garlic.com/~lynn/2006b.html#32 Multiple address spaces
https://www.garlic.com/~lynn/2006b.html#28 Multiple address spaces
https://www.garlic.com/~lynn/2006b.html#25 Multiple address spaces
https://www.garlic.com/~lynn/2005q.html#48 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005p.html#18 address space
https://www.garlic.com/~lynn/2005f.html#57 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005b.html#53 The mid-seventies SHARE survey
https://www.garlic.com/~lynn/2004o.html#18 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004n.html#54 CKD Disks?
https://www.garlic.com/~lynn/2004n.html#26 PCIe as a chip-to-chip interconnect
https://www.garlic.com/~lynn/2003g.html#13 Page Table - per OS/Process
https://www.garlic.com/~lynn/2002m.html#0 Handling variable page sizes?
https://www.garlic.com/~lynn/2002l.html#57 Handling variable page sizes?

--
virtualization experience starting Jan1968, online at home since Mar1970

165/168/3033 & 370 virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 165/168/3033 & 370 virtual memory
Date: 28 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory

The science center wanted a 360/50 for the blaauw box (actually "cambridge translator box") hardware mods, but had to settle for a 360/40 (all spare 360/50s were going to the faa atc project) ... and did cp40/cms ... which morphs into cp67 when the 360/67, standard with virtual memory, becomes available.

360/40 ... more details in Melinda's history
http://www.leeandmelindavarian.com/Melinda#VMHist
also referenced recent posts
https://www.garlic.com/~lynn/2022.html#65 CMSBACK
https://www.garlic.com/~lynn/2022.html#66 HSDT, EARN, BITNET, Internet

360/40 had 256kbytes memory ... or 64 4kbyte "pages". The virtual memory hardware mods had something similar to the storage protect(/fetch) box ... one entry for each real page that included a 4bit "virtual memory ident" and the virtual page number. Dispatching a virtual machine loaded a (4bit) address space number into a control register and, for each virtual address, a simultaneous lookup was done across all real page entries for a matching address space number and virtual page number (the index of a matching entry was then the real page number). Some claim that this could be done within the standard 360/40 processing with no additional overhead delay.
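a minimal sketch in C (my illustration of the description above, not the actual 360/40 microcode) of that per-real-page lookup: one entry per real page holds the 4bit ident plus the virtual page number, and the index of the matching entry is the real page number:

/* one entry per real page; translation is a parallel match over all
 * real-page entries (simulated here as a loop) */
#include <stdint.h>
#include <stdio.h>

#define REAL_PAGES 64          /* 256KB real storage / 4KB pages */

struct rpage_entry {
    uint8_t  valid;
    uint8_t  space_id;         /* 4bit "virtual memory ident" */
    uint16_t vpage;            /* virtual page currently mapped to this real page */
};

static struct rpage_entry box[REAL_PAGES];

/* returns real page number, or -1 on translation fault */
int translate(uint8_t space_id, uint16_t vpage)
{
    for (int r = 0; r < REAL_PAGES; r++)   /* the hardware did this in parallel */
        if (box[r].valid && box[r].space_id == space_id && box[r].vpage == vpage)
            return r;                      /* index of the match = real page */
    return -1;
}

int main(void)
{
    box[10] = (struct rpage_entry){1, 3, 0x42};     /* space 3, vpage 0x42 -> real 10 */
    printf("real page = %d\n", translate(3, 0x42)); /* 10 */
    printf("real page = %d\n", translate(5, 0x42)); /* -1: different address space */
    return 0;
}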

360/67 had segment&page tables similar to what showed up later in 370. The 360/67 had an eight-entry virtual memory associative array; dispatching loaded the address of the segment table into a control register (which also reset/cleared all entries). Virtual memory address lookup added 150ns to the standard 750ns 360/67 cycle ... it would search in parallel whether any of the eight entries contained the virtual page number and its mapping to a real page number. If not, it would then go to the segment/page tables (pointed to by the control register) and see if there was a valid real page number for the corresponding virtual page; if so, it would choose one of the eight entries and replace its value with the latest virtual&real page numbers. more detail in the 360/67 funcchar at bitsavers.
http://www.bitsavers.org/pdf/ibm/360/functional_characteristics/

165/168/3033 got a little fancier, similar to 360/67 but with a 128-entry TLB divided into 32 groups of 4 entries each. (five) bits from the virtual address were used to index one of the 32 groups ... and then a matching search was made of the four entries in that group. It also had a seven-entry stack of saved address spaces, each identified by 3 bits, and every (valid) TLB entry had an associated 3bit address space id. When a segment table was loaded into the control register, it was checked whether it was already one of the seven saved address spaces; if not, it would replace one of the seven entries (and invalidate all TLB entries associated with the replaced entry). In a sense the 3bit saved-address-space identifier for each TLB entry was slightly analogous to the 360/40 4bit identifier for each real page.
http://www.bitsavers.org/pdf/ibm/370/funcChar/
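a similar sketch in C (again just an illustration under the description above, not the actual hardware tables) of the 128-entry TLB as 32 groups of 4, with the seven saved address spaces and 3bit ids:

#include <stdint.h>
#include <stdio.h>

#define GROUPS 32
#define WAYS    4
#define STO_SLOTS 7

struct tlb_entry { uint8_t valid, sto_id; uint32_t vpage, rpage; };

static struct tlb_entry tlb[GROUPS][WAYS];
static uint32_t sto_stack[STO_SLOTS];     /* saved segment-table origins */
static uint8_t  sto_used[STO_SLOTS];
static int next_victim;

/* load a segment-table origin into the "control register": reuse a saved id
 * if present, else replace one slot and invalidate that id's TLB entries */
uint8_t load_sto(uint32_t sto)
{
    for (int i = 0; i < STO_SLOTS; i++)
        if (sto_used[i] && sto_stack[i] == sto)
            return (uint8_t)i;            /* already one of the seven */
    int v = next_victim; next_victim = (next_victim + 1) % STO_SLOTS;
    for (int g = 0; g < GROUPS; g++)      /* purge entries of the replaced space */
        for (int w = 0; w < WAYS; w++)
            if (tlb[g][w].valid && tlb[g][w].sto_id == v)
                tlb[g][w].valid = 0;
    sto_stack[v] = sto; sto_used[v] = 1;
    return (uint8_t)v;
}

/* look up a virtual page for the given 3bit id; returns 1 on TLB hit */
int tlb_lookup(uint8_t sto_id, uint32_t vpage, uint32_t *rpage)
{
    int g = vpage & (GROUPS - 1);         /* five bits index one of 32 groups */
    for (int w = 0; w < WAYS; w++)
        if (tlb[g][w].valid && tlb[g][w].sto_id == sto_id &&
            tlb[g][w].vpage == vpage) {
            *rpage = tlb[g][w].rpage;
            return 1;
        }
    return 0;                             /* miss: walk segment/page tables */
}

int main(void)
{
    uint8_t id = load_sto(0x1000);        /* dispatch: load a segment table */
    int g = 0x42 & (GROUPS - 1);
    tlb[g][0] = (struct tlb_entry){1, id, 0x42, 7}; /* filled in after a table walk */
    uint32_t r;
    int hit = tlb_lookup(id, 0x42, &r);
    printf("hit=%d real page=%u\n", hit, hit ? r : 0);
    return 0;
}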

Note the full 370 architecture "redbook" (for the red color of the 3ring binder) had a lot more details ... it was a CMS SCRIPT file and just the 370 principles of operation subset could be printed by specifying a CMS SCRIPT command line option. The full 370 virtual memory architecture had a lot more features than originally shipped. However, the hardware retrofit of virtual memory to the 370/165 got into a 6month schedule slip. They proposed eliminating a number of features to get back on schedule, which was eventually agreed to ... however, all the other 370s that had already implemented the full architecture had to remove the eliminated features ... and any software already written to use the eliminated features had to be redone.

some architecture redbook posts
https://www.garlic.com/~lynn/2022.html#59 370 Architecture Redbook
https://www.garlic.com/~lynn/2022.html#56 370 Architecture Redbook
https://www.garlic.com/~lynn/2014h.html#60 The Tragedy of Rapid Evolution?
https://www.garlic.com/~lynn/2014e.html#48 Before the Internet: The golden age of online service
https://www.garlic.com/~lynn/2014e.html#3 IBM PCjr STRIPPED BARE: We tear down the machine Big Blue wouldrather you f
https://www.garlic.com/~lynn/2014c.html#56 Computer Architecture Manuals - tools for writing and maintaining- state of the art?
https://www.garlic.com/~lynn/2014.html#17 Literate JCL?
https://www.garlic.com/~lynn/2013i.html#40 Reader Comment on SA22-7832-08 (PoPS), should I?
https://www.garlic.com/~lynn/2013c.html#37 PDP-10 byte instructions, was What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013b.html#52 Article for the boss: COBOL will outlive us all
https://www.garlic.com/~lynn/2013b.html#20 New HD
https://www.garlic.com/~lynn/2013.html#72 IBM documentation - anybody know the current tool? (from Mislocated Doc thread)
https://www.garlic.com/~lynn/2012l.html#24 "execs" or "scripts"
https://www.garlic.com/~lynn/2012l.html#23 PDP-10 system calls, was 1132 printer history
https://www.garlic.com/~lynn/2012j.html#82 printer history Languages influenced by PL/1
https://www.garlic.com/~lynn/2012e.html#59 Word Length
https://www.garlic.com/~lynn/2012.html#64 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011p.html#89 Is there an SPF setting to turn CAPS ON like keyboard key?
https://www.garlic.com/~lynn/2011g.html#38 IBM Assembler manuals
https://www.garlic.com/~lynn/2011e.html#86 The first personal computer (PC)
https://www.garlic.com/~lynn/2011e.html#54 Downloading PoOps?
https://www.garlic.com/~lynn/2010k.html#41 Unix systems and Serialization mechanism
https://www.garlic.com/~lynn/2010h.html#53 IBM 029 service manual
https://www.garlic.com/~lynn/2008d.html#67 Throwaway cores
https://www.garlic.com/~lynn/2007v.html#21 It keeps getting uglier
https://www.garlic.com/~lynn/2007u.html#30 folklore indeed
https://www.garlic.com/~lynn/2007r.html#56 CSA 'above the bar'
https://www.garlic.com/~lynn/2007f.html#7 IBM S/360 series operating systems history
https://www.garlic.com/~lynn/2006s.html#53 Is the teaching of non-reentrant HLASM coding practices ever defensible?
https://www.garlic.com/~lynn/2005p.html#45 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005n.html#48 Good System Architecture Sites?
https://www.garlic.com/~lynn/2005k.html#1 More on garbage
https://www.garlic.com/~lynn/2005j.html#43 A second look at memory access alignment
https://www.garlic.com/~lynn/2005j.html#39 A second look at memory access alignment
https://www.garlic.com/~lynn/2005i.html#40 Friday question: How far back is PLO instruction supported?
https://www.garlic.com/~lynn/2005b.html#25 360POO
https://www.garlic.com/~lynn/2004k.html#45 August 23, 1957
https://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
https://www.garlic.com/~lynn/2004b.html#57 PLO instruction
https://www.garlic.com/~lynn/2003f.html#52 ECPS:VM DISPx instructions

--
virtualization experience starting Jan1968, online at home since Mar1970

165/168/3033 & 370 virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 165/168/3033 & 370 virtual memory
Date: 28 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#71 165/168/3033 & 370 virtual memory

After taking a two semester hr intro to fortran/computers, I got a student programmer job to redo 1401 MPIO (unit record<->tape front end for the 709) in assembler for the 360/30. I was given a bunch of manuals; the univ. datacenter shut down on weekends and I had it all to myself for 48hrs straight (although it made monday morning classes hard). I got to design my own monitor, device drivers, error recovery, interrupt handlers, storage management, etc. Within a few weeks I had a box of (2000) cards. I then added an assembler option that generated either the stand-alone version (ipl'ed with the BPS loader) or the OS/360 version (with get/put and DCB macros). The stand-alone version assembled in 30mins, the OS/360 version assembled in 60mins ... each DCB macro taking 5-6mins to process. I think this was OS/360 Release 6. I was later told that in order to run in the smallest storage size, the person implementing the OS/360 assembler was told he only had a 256byte work area ... so it was extremely disk intensive. Later that was somewhat improved, and then Assembler H did a lot of improvement.

Within a year of taking the intro class, I was hired fulltime to be responsible for OS/360. The univ had been sold a 360/67 to run TSS/360, replacing the 709/1401 ... the 360/30 replaced the 1401 as part of the transition to the 360/67. However, TSS/360 never came to production fruition and so the 360/67 ran as a 360/65 with OS/360. Note student fortran jobs ran under a second on the 709 (tape->tape). Initially with 360/65 OS/360, they ran over a minute. I installed HASP and that cut the elapsed time for student jobs in half. Then, starting with OS/360 release 11 ... I started redoing SYSGEN to optimize placement of datasets and PDS members for arm seek and (PDS directory) multi-track search ... cutting elapsed time by another 2/3rds to 12.9secs. Student fortran jobs never ran faster than on the 709 until I installed the Univ of Waterloo WATFOR monitor. A lot of OS/360 was extremely disk intensive ... a simple file open/close SVC involved a whole boatload of 2kbyte modules from SVCLIB.

More than a decade later (at IBM) I got brought into the large datacenter of a large national grocer ... which had multiple loosely-coupled 168s for the different regions. They were having disastrous throughput problems and all the company experts had been brought through before they got around to me. Initially it was into a classroom with tables piled high with performance activity reports from the various systems. After about 30mins, I started noticing a pattern ... the aggregate I/O summed across all the systems for a specific disk was peaking at 7/sec during the periods they claimed were the worst performance. I asked what the disk was ... and they said it was the application library for all the store controllers, which turned out to be a single 3330 with a three cylinder PDS directory. Doing a little envelope calculation, each member load was doing a cyl&half multi-track search (19+9.5 tracks spinning at 60RPS, or two multi-track searches, first .317secs, 2nd .158sec, total .475secs) during which time the disk and controller were locked up. Add in the seek to the PDS directory and then the seek and read to load the module ... effectively all the stores in the country were limited to an aggregate throughput of loading two (or slightly fewer) store controller apps per second. So the store controller PDS dataset was split into multiple datasets on multiple drives ... and then that was replicated into a unique (non-shared) set for each system.
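a back-of-envelope check in C of the numbers above (assumes a 3330 spinning at 3600RPM = 60 revs/sec with 19 tracks/cylinder, and a full-cylinder search followed by a half-cylinder search, per the text; not from the actual reports):

#include <stdio.h>

int main(void)
{
    double revs_per_sec = 60.0;              /* 3330 at 3600 RPM */
    double full_cyl = 19.0 / revs_per_sec;   /* ~0.317 sec multi-track search */
    double half_cyl =  9.5 / revs_per_sec;   /* ~0.158 sec multi-track search */
    double per_load = full_cyl + half_cyl;   /* ~0.475 sec disk+controller busy */

    printf("directory search per member load: %.3f sec\n", per_load);
    printf("aggregate store-controller app loads/sec: %.2f\n", 1.0 / per_load);
    /* ~2.1/sec before adding the seeks and the member read itself */
    return 0;
}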

dasd, ckd, fba, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

--
virtualization experience starting Jan1968, online at home since Mar1970

MVT storage management issues

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: MVT storage management issues
Date: 28 Jan 2022
Blog: Facebook
The 370 (MVT) scenario was extreme fragmentation, as well as the need for contiguous space ... as well as high overhead. Saw CICS acquiring all the resources it possibly could at startup ... and then doing its own resource management ... attempting to minimize as much as possible its use of (os/360) MFT/MVT services (and the MVT storage management fragmentation problem increased the longer the application ran).

The contiguous-storage issue also showed up from the earliest MVT deployment. Boeing Huntsville had gotten a two processor 360/67 duplex for TSS/360 for long running 2250 CADCAM applications. They had to fall back to MVT release 13 with the 360/67 running as two 360/65 processors ... but ran into a brick wall with long running 2250 CADCAM and MVT storage management (fragmentation). Boeing Huntsville then modified a little bit of MVT (release 13) to run in virtual memory mode ... it didn't do any paging, the virtual memory size was the same as the real memory size ... but it allowed them to re-arrange (virtual) storage addresses to provide the required contiguous storage.

trivia: referenced post has some detailed comment about being hired by univ. to be responsible for os/360
https://www.garlic.com/~lynn/2022.html#72 165/168/3033 & 370 virtual memory

... however, before I graduate, I was hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services, consolidating all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities. I thought the Renton datacenter was possibly the largest, something like a couple hundred million (60s $$$) in IBM 360s ... some politics between the Renton datacenter director and the CFO over bringing Renton under CFO control (at the time the CFO had a small 360/30 for payroll up at Boeing field, although they enlarged the room and installed a 360/67 single processor for me to play with when I wasn't doing other things). The Huntsville 360/67 duplex was also transferred to Seattle.

Can't say what all the bad/poor MVT implementation issues were. At the science center there was detailed CP67 modeling and measurement of an enormous number of factors ... including a large number of storage management strategies ... I never saw anything similar for MVT.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

post referencing 370 virtual memory justified because of MVT storage management problems:
https://www.garlic.com/~lynn/2011d.html#73

some other recent posts mentioning 370 virtual memory justification
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#58 Computer Security
https://www.garlic.com/~lynn/2022.html#31 370/195
https://www.garlic.com/~lynn/2022.html#10 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2021k.html#106 IBM Future System
https://www.garlic.com/~lynn/2021k.html#1 PCP, MFT, MVT OS/360, VS1, & VS2
https://www.garlic.com/~lynn/2021j.html#82 IBM 370 and Future System
https://www.garlic.com/~lynn/2021j.html#77 IBM 370 and Future System
https://www.garlic.com/~lynn/2021j.html#66 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#23 fast sort/merge, OoO S/360 descendants
https://www.garlic.com/~lynn/2021h.html#70 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021h.html#48 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021g.html#70 the wonders of SABRE, was Magnetic Drum reservations 1952
https://www.garlic.com/~lynn/2021g.html#43 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2021g.html#39 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2021g.html#25 Execute and IBM history, not Sequencer vs microcode
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021e.html#32 Univac 90/30 DIAG instruction
https://www.garlic.com/~lynn/2021d.html#53 IMS Stories
https://www.garlic.com/~lynn/2021d.html#39 IBM 370/155
https://www.garlic.com/~lynn/2021c.html#38 Some CP67, Future System and other history
https://www.garlic.com/~lynn/2021c.html#2 Colours on screen (mainframe history question)
https://www.garlic.com/~lynn/2021b.html#63 Early Computer Use
https://www.garlic.com/~lynn/2021b.html#59 370 Virtual Memory
https://www.garlic.com/~lynn/2019e.html#121 Virtualization
https://www.garlic.com/~lynn/2019e.html#108 Dyanmic Adaptive Resource Manager
https://www.garlic.com/~lynn/2019d.html#120 IBM Acronyms
https://www.garlic.com/~lynn/2019d.html#63 IBM 3330 & 3380
https://www.garlic.com/~lynn/2019d.html#26 direct couple
https://www.garlic.com/~lynn/2019c.html#25 virtual memory
https://www.garlic.com/~lynn/2019b.html#92 MVS Boney Fingers
https://www.garlic.com/~lynn/2019b.html#53 S/360
https://www.garlic.com/~lynn/2019.html#78 370 virtual memory
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2019.html#26 Where's the fire? | Computerworld Shark Tank
https://www.garlic.com/~lynn/2019.html#18 IBM assembler

--
virtualization experience starting Jan1968, online at home since Mar1970

165/168/3033 & 370 virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 165/168/3033 & 370 virtual memory
Date: 29 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#71 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#72 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#73 MVT storage management issues

Amdahl lost a year with the advent of virtual memory, but got it back with the advent of the Future System project ... also mentioned in this description (referenced above) of the decision to make all 370s virtual memory
https://www.garlic.com/~lynn/2011d.html#73
and more detail
http://www.jfsowa.com/computer/memo125.htm
the above mentions Amdahl departing after ACS/360 was canceled (IBM execs were afraid that it would advance the state of the art too fast and IBM would lose control of the market) ... end of ACS:
https://people.cs.clemson.edu/~mark/acs_end.html

Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books, 1993
http://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
.... reference to the "Future System" project 1st half of the 70s:

and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive.

... snip ...

during FS, internal politics was killing off 370 projects and the lack of new 370 offerings is credited with giving clone system makers their market foothold.

note one of my hobbies after joining IBM (cambridge science center) was enhanced operating systems for internal datacenters (the IBM world-wide, online sales&marketing HONE systems were a long time customer) ... but I still got to attend SHARE and visit customers.

The director of one of the largest financial datacenters on the east coast liked me to stop in and talk technology. At one point the IBM branch manager horribly offended the customer, and in retaliation they ordered an Amdahl machine (up until then Amdahl had been selling into technical/scientific and univ. markets but had yet to break into the true-blue commercial market, and this would be the first). I was then asked to go spend a year onsite at the customer (to help obfuscate why an Amdahl machine was being ordered). I talked it over with the customer; they said they would like to have me onsite but it wouldn't make any difference about the order, and so I told IBM no. I was then told that the branch manager was a good sailing buddy of the IBM CEO, and if I refused, I could forget about having an IBM career, promotions, raises. Not long after, I transferred to IBM San Jose Research on the opposite coast (got to wander around most of silicon valley, ibm datacenters, customers, other computer makers).

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

165/168/3033 & 370 virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 165/168/3033 & 370 virtual memory
Date: 29 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#71 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#72 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#73 MVT storage management issues
https://www.garlic.com/~lynn/2022.html#74 165/168/3033 & 370 virtual memory

other trivia: many of my posts I've archived at garlic ... old mainframe Mar/Apr '05 mag. article (some info is a little garbled and the original has gone 404, but it lives on at the wayback machine)
https://web.archive.org/web/20200103152517/http://archive.ibmsystemsmag.com/mainframe/stoprun/stop-run/making-history/
and mainframe hall of fame (alphabetical, after IMS vern watts and before CICS bob yelavich)
https://www.enterprisesystemsmedia.com/mainframehalloffame
and knights of vm
http://mvmua.org/knights.html

--
virtualization experience starting Jan1968, online at home since Mar1970

165/168/3033 & 370 virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 165/168/3033 & 370 virtual memory
Date: 29 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#71 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#72 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#73 MVT storage management issues
https://www.garlic.com/~lynn/2022.html#74 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#75 165/168/3033 & 370 virtual memory

In 1980, STL was bursting at the seams and was moving 300 people from the IMS group to an offsite bldg. They had tried "remote 3270" and found the human factors unacceptable. I get con'ed into doing channel extender support so they can place channel attached 3270 controllers at the offsite bldg ... with no perceptible human factors difference between offsite and in STL. The hardware vendor then tries to get IBM to release my support, but there is a group in POK playing with some serial stuff and they get it veto'ed (afraid that if it was in the market, it would make it harder to justify their stuff).

In 1988, LLNL (lawrence livermore national laboratory) is playing with some serial stuff and I'm asked to help them get it standardized, which quickly becomes the fibre-channel standard (including some stuff I had done in 1980). The POK people finally get their stuff released in 1990 with ES/9000 as ESCON, when it is already obsolete (i.e. 17mbytes/sec; FCS started at 1gbit/sec link full-duplex, 2gbit/sec aggregate, 200mbyte/sec). Then some POK people become involved in FCS and define a heavy weight protocol that drastically reduces the native throughput, which is eventually released as FICON.

The most recent published "peak I/O" benchmark I can find is for a max configured z196 getting 2M IOPS with 104 FICON (running over 104 FCS) ... using emulated CKD disks on industry standard fixed-block disks (no real CKD disks have been made for decades). About the same time there was an FCS announced for E5-2600 blades (standard in cloud megadatacenters at the time) claiming over a million IOPS (two such FCS having higher throughput than the 104 FICON running over 104 FCS) using industry standard disks.
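rough arithmetic (a sketch in C using just the numbers quoted above) showing why two such native FCS exceed the 104-FICON aggregate:

#include <stdio.h>

int main(void)
{
    double z196_iops = 2000000.0;   /* published z196 "peak I/O" */
    int    ficon     = 104;         /* FICON channels, each running over an FCS */
    double fcs_iops  = 1000000.0;   /* announced native FCS for E5-2600 blades */

    printf("per-FICON: ~%.0f IOPS\n", z196_iops / ficon);        /* ~19,230 */
    printf("two native FCS: %.0f IOPS vs %.0f for 104 FICON\n",
           2 * fcs_iops, z196_iops);
    return 0;
}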

STL had a T3 collins digital radio (microwave) link from a repeater on the hill in back of STL to the roof of bldg12 (on the main plant site) ... microwave was then installed between the roof of bldg12 and the new bldg ... and the channel-extender got a channel from STL to bldg12 to the roof of the offsite bldg.

There was an interesting side-effect of the channel-extender. The 168s in STL had the 3270 controllers distributed across all the I/O channels with the DASD. The 3270 controllers had slow electronics which resulted in significant channel busy (interfering with disk i/o). All those 3270 controllers were moved to the offsite bldg and replaced with a really fast channel-extender box ... drastically reducing channel busy, improving DASD throughput and giving a 10-15% improvement in overall system throughput ... while still handling the same amount of 3270 terminal traffic. There was then the suggestion that even all the in-house 168s have their 3270 controllers placed on channel-extenders.

Another installation for channel-extender (and offsite channel attached 3270 controllers) was for the IMS FE group in Boulder being moved to a different bldg across a major highway ... where (local?) code wouldn't allow microwave ... so an infrared modem was put on the roofs of the two bldgs for the channel-extender channel. Lots of people were predicting that it would have lots of problems because of the heavy rain and snow weather in the Boulder area. We had heavy instrumentation including a Fireberd bit error tester on a side-channel over the link. Turns out that when there was a white-out snow storm (when employees couldn't get into work), it started registering a few bit errors. There was a different problem: during hot sunny days, there started to be loss of signal between the two modems. Turns out the sun heating the taller bldg caused expansion of one side and a slight tilt of the bldg ... throwing off the infrared modem position. Some careful re-engineering had to be done to keep the modems in sync.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
DASD posts
https://www.garlic.com/~lynn/submain.html#dasd

--
virtualization experience starting Jan1968, online at home since Mar1970

165/168/3033 & 370 virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 165/168/3033 & 370 virtual memory
Date: 29 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#71 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#72 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#73 MVT storage management issues
https://www.garlic.com/~lynn/2022.html#74 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#75 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#76 165/168/3033 & 370 virtual memory

other trivia: I got talked into helping with a 16-way multiprocessor 370 and we con'ed the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168-3 logic to 20% faster chips). Everybody thought it was great until somebody tells the head of POK that it could be decades before the POK favorite son operating system (MVS) had effective 16-way support. Then some of us were invited to never visit POK again and the 3033 processor engineers were ordered to stop being distracted and keep their noses to the grindstone. Note POK doesn't ship a 16-way multiprocessor until 2000 with z900 ... almost 25yrs later.

In the late70s & early80s, I was blamed for online computer communication (precursor to modern social media) on the internal network (larger than the arpanet/internet until some time mid/late 80s); folklore is that when the corporate executive committee was told about it, 5of6 wanted to fire me. It really took off spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem; there were only about 300 that participated, but claims of up to 25,000 reading.

old email about 85/165/168/3033/trout (aka 3090)
https://www.garlic.com/~lynn/2019c.html#email810423

note: after FS implodes, there is a mad rush to get stuff back into the 370 product pipelines and the quick&dirty 303x&3081 are kicked off in parallel; a reference I cite periodically:
http://www.jfsowa.com/computer/memo125.htm
also, ACS/360 was killed in the 60s; lots here, including (towards the end of the article) features that show up nearly 25yrs later with ES/9000
https://people.cs.clemson.edu/~mark/acs_end.html

The 303x channel director is a 158-3 engine w/o the 370 microcode and with just the integrated channel microcode. A 3031 is two 158-3 engines, one with just the 370 microcode and the other with just the integrated channel microcode. A 3032 is a 168-3 with new covers, reworked to use the 303x channel director for external channels (and the 3033 starts out as 168-3 logic remapped to 20% faster chips). Once the 3033 is out the door the processor engineers start work on trout.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
multiprocessor, smp, and/or compare&swap posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

HSDT, EARN, BITNET, Internet

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: HSDT, EARN, BITNET, Internet
Date: 29 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#67 HSDT, EARN, BITNET, Internet
https://www.garlic.com/~lynn/2022.html#66 HSDT, EARN, BITNET, Internet

Edson was responsible for the internal network
https://en.wikipedia.org/wiki/Edson_Hendricks
SJMerc article about Edson (he recently passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm

Ed did a relatively clean, layered (non-SNA) design and implementation (for VM/RSCS/VNET) which was used for the internal network and later used for bitnet. A problem was the MVS JES2 NJE/NJI networking support, originally done for HASP at a univ. (source had "TUCC" in cols 68-71). It wasn't layered, but intermixed network fields and job control fields; it also used spare entries in the 255-entry HASP pseudo device table (typically 160-180 available entries). In order to include MVS/JES systems, there was an NJE driver done for RSCS/VNET. However, the internal network had quickly exceeded 255 nodes in the 70s (and was about to exceed 1000 at the time of the 1/1/83 internetworking cutover, when arpanet had approx 255 hosts and 100 IMP network nodes), and JES2 would trash any traffic where the origin &/or destination wasn't in its local table ... as a result, MVS/JES2 systems had to be restricted to boundary nodes ... to minimize the impact they had on traffic.

The other JES2 issue was that, because of the intermixing of networking and job control fields, there was a frequent problem of JES2s at different release levels causing either or both of the MVS systems to crash. As a result, there grew up significant enhancements to the internal RSCS/VNET NJE driver that would attempt to convert all JES2 traffic from its origin format to the format required by the directly connected JES2 machine. There is an infamous case of a newly installed MVS/JES2 system in San Jose crashing MVS systems in Hursley (England). The management in Hursley blamed it on the Hursley RSCS/VNET people, because they didn't know about the change in San Jose and the need to upgrade their RSCS/VNET NJE driver (to keep the Hursley MVS systems from crashing).

Eventually marketing tried to restrict all the BITNET RSCS/VNET systems to NJE drivers (inside IBM the RSCS/VNET systems kept the native drivers awhile longer, because they had higher throughput over the same speed links). In the later 80s, the communication group got involved, forcing the conversion of the internal network to SNA (making claims to the corporate executive committee that otherwise the internal network would stop working) ... this was about the time of the BITNET->BITNET2 conversion to TCP/IP.

The communication group had also fought hard to prevent the release of mainframe TCP/IP support. When that failed, they changed tactics and said that because the communication group had corporate strategic ownership of everything that crossed datacenter walls, it had to be shipped by them. What shipped got aggregate 44kbytes/sec throughput using nearly a whole 3090 processor. I did the support for RFC1044 and in some tuning tests at Cray Research got sustained channel speed throughput between a 4341 and a Cray, using only a modest amount of the 4341 processor (nearly 500 times improvement in bytes moved per instruction executed). Later the communication group hired a silicon valley contractor to implement TCP/IP support directly in mainframe VTAM. What he demo'ed had TCP running much faster than (SNA) LU6.2. He was then told that everybody *KNOWS* that LU6.2 is much faster than a *CORRECT* TCP/IP implementation and they would only be paying for a *CORRECT* TCP/IP implementation.

internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
rfc1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

--
virtualization experience starting Jan1968, online at home since Mar1970

165/168/3033 & 370 virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 165/168/3033 & 370 virtual memory
Date: 29 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#71 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#72 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#73 MVT storage management issues
https://www.garlic.com/~lynn/2022.html#74 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#75 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#76 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#77 165/168/3033 & 370 virtual memory

Jim Gray was one of the principals behind System/R (the original sql/relational dbms); we managed to do a technology transfer to Endicott ("under the radar" while the company was preoccupied with EAGLE, the next great dbms) for sql/ds ... later, when EAGLE implodes, the request is how fast can System/R be ported to MVS ... it later ships as DB2 ... originally for decision/support only. Fall of 1980, Jim departs for Tandem ... palming stuff off on me ... including DBMS consulting with the IMS group

system/r posts
https://www.garlic.com/~lynn/submain.html#systemr

--
virtualization experience starting Jan1968, online at home since Mar1970

165/168/3033 & 370 virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 165/168/3033 & 370 virtual memory
Date: 29 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#71 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#72 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#73 MVT storage management issues
https://www.garlic.com/~lynn/2022.html#74 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#75 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#76 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#77 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#78 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#79 165/168/3033 & 370 virtual memory

3081 trivia (not part of 85/165/168/3033/3090 lineage); some more discussion here:
http://www.jfsowa.com/computer/memo125.htm

note that the 3081 was never intended to have a single processor version ... they were all going to be multiprocessors ... however, ACP/TPF didn't have multiprocessor support and there was concern that the whole airline market would move to the latest Amdahl single processor machine (which had about the same processing as a two-processor 3081K). An intermediate effort was to modify VM370, when running on a multiprocessor, to improve ACP/TPF throughput in a virtual machine (trying to increase processing overlap, with the ACP/TPF virtual machine on one processor and VM370 kernel execution on the other). However, this degraded throughput for nearly every other VM370 customer running any multiprocessor (not just 3081). They then tried to mask some of the degradation by twiddling 3270 terminal interactive response time. The problem was that a certain 3-letter gov agency (a large, long time CP67 & VM370 customer) was running all high-speed ascii terminals on their multiprocessor machine (so the 3270 finagle to try and mask the degradation had no effect). I then get called in to see how much I can help the customer.

some old email refs:
https://www.garlic.com/~lynn/2007.html#email801006b
https://www.garlic.com/~lynn/2007.html#email801008b
https://www.garlic.com/~lynn/2001f.html#email830420
https://www.garlic.com/~lynn/2006y.html#email860121

one of the problems described in the 830420 email involved a bug introduced in the CP67->VM370 morph. A virtual machine is dropped from queue if it has no outstanding activity and/or if it has outstanding I/O for a "slow-speed" device (aka "long-wait" condition). In CP67, the decision was based on the real device. In the CP67->VM370 morph, that was changed to make the decision based on the virtual device type. It was all fine as long as the virtual device type and the real device type were the same. That broke when 3270 (real) terminals started being used (and the virtual device type was "3215").
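a minimal sketch in C (hypothetical names, not the actual CP67/VM370 code) of the difference in the queue-drop decision: the real device is a 3270 but the virtual device type is 3215, so basing the "long-wait" test on the virtual type gives a different answer than basing it on the real type:

#include <stdio.h>

enum devtype { DEV_3215, DEV_3270 };   /* 3215: slow console; 3270: fast local display */

struct vdev { enum devtype virtual_type, real_type; };

/* 3215-style terminals are "slow speed" -> long-wait -> drop from queue */
int is_long_wait(enum devtype t) { return t == DEV_3215; }

int drop_from_queue_cp67(struct vdev *d)  { return is_long_wait(d->real_type); }
int drop_from_queue_vm370(struct vdev *d) { return is_long_wait(d->virtual_type); } /* the bug */

int main(void)
{
    struct vdev term = { DEV_3215, DEV_3270 };  /* virtual 3215 backed by a real 3270 */
    printf("cp67 drops from queue:  %d\n", drop_from_queue_cp67(&term));   /* 0 */
    printf("vm370 drops from queue: %d\n", drop_from_queue_vm370(&term));  /* 1 */
    return 0;
}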

other trivia (refers to the global LRU page replacement algorithm that I originally did for CP67 as an undergraduate in the 60s, which then had to be periodically re-introduced after it got removed for one reason or another).

global LRU trivia: At SIGOPS (Asilomar, 14-16Dec81), Jim Gray asks me if I can help a Tandem co-worker get his Stanford PHD ... it involved global LRU page replacement. There were some academics that had published papers on "local LRU" in the 60s (same time I was doing global LRU) and were pressuring Stanford not to award a PHD on anything involving global LRU. Jim knew that I had studies comparing "local" and "global" for the same systems, workloads, and hardware, with "global" outperforming "local". When I went to supply the information, IBM executives said I wasn't allowed to send it. Eventually, after nearly a year's delay, I was allowed to send the information.
https://www.garlic.com/~lynn/2006w.html#email821019

I've periodically mentioned that I hoped that the IBM executives believed that they were punishing me (blamed for online computer conferencing on the internal network) ... as opposed to taking sides in an academic dispute.

Note IBM eventually came out with the 3083 (a 3081 with one processor removed) ... originally primarily for the ACP/TPF market
https://en.wikipedia.org/wiki/Transaction_Processing_Facility

smp, multiprocessing and/or compare&swap posts
https://www.garlic.com/~lynn/subtopic.html#smp
page replacement algorithm posts
https://www.garlic.com/~lynn/subtopic.html#wsclock
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

--
virtualization experience starting Jan1968, online at home since Mar1970

165/168/3033 & 370 virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 165/168/3033 & 370 virtual memory
Date: 29 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#71 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#72 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#73 MVT storage management issues
https://www.garlic.com/~lynn/2022.html#74 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#75 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#76 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#77 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#78 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#79 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#80 165/168/3033 & 370 virtual memory

370 two processor cache machines would slow the processor cycle down by 10% ... creating some window for the cross-cache communication (various cache interactions would slow things down even further) ... so two processor hardware started out at 1.8 times one processor. VS2 for a long time quoted throughput of two processors at 1.2-1.5 times a single processor ... because broad swaths of kernel code were blocked by a "spin-lock": only one processor at a time, the other processor would have to spin-wait.

When Charlie was working on CP67 fine-grain kernel multiprocessor locking (at the science center) ... drastically reducing the probability that two processors would need to be executing the same locked code at the same time (losing cycles to spin-locks) ... he invented the compare-and-swap instruction (name/mnemonic selected because CAS are Charlie's initials) ... which allows multiple processors to be executing the same code, with serialization controlled by the compare-and-swap instruction. We then tried to convince the POK 370 architecture owners to add compare-and-swap to the 370 architecture. They initially rebuffed it, saying that the POK favorite son operating system people (MVT OS360/65MP) claimed that the 360/65MP (spin-lock) test-and-set instruction was sufficient. The challenge was that in order to justify adding compare-and-swap to 370, additional uses had to be found (other than operating system multiprocessor serialization). Thus were born the examples that still appear in the principles of operation showing how multitasking applications (whether running on single or multiple processors) can use it for efficient serialization (like large DBMS and transaction processing systems).
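a minimal sketch using C11 atomics (standing in for test-and-set and compare-and-swap; not 370 assembler, and not the principles-of-operation examples themselves) of the two serialization styles: a spin-lock where only one task at a time proceeds, vs a compare-and-swap retry loop where multiple tasks update shared state without holding a lock:

#include <stdatomic.h>
#include <stdio.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;
static long counter_locked;
static atomic_long counter_cas;

/* 360/65MP style: test-and-set spin lock around the update */
void add_with_spinlock(long n)
{
    while (atomic_flag_test_and_set(&lock))
        ;                                   /* spin-wait, burning cycles */
    counter_locked += n;
    atomic_flag_clear(&lock);
}

/* compare-and-swap style: retry loop, no lock held */
void add_with_cas(long n)
{
    long old = atomic_load(&counter_cas);
    while (!atomic_compare_exchange_weak(&counter_cas, &old, old + n))
        ;                                   /* on failure, old is reloaded; retry */
}

int main(void)
{
    add_with_spinlock(1);
    add_with_cas(1);
    printf("%ld %ld\n", counter_locked, atomic_load(&counter_cas));
    return 0;
}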

In the morph of CP67->VM370 lots of stuff was simplified and/or dropped (including multiprocessor support). I added it back in, originally for the US consolidated sales&marketing HONE datacenter in Palo Alto. They had maxed out the number of 168s in a loosely-coupled, single-system-image configuration (sharing a large disk farm) and were still heavily processor limited (most of the applications were done in APL). Some sleight-of-hand coding tricks ... extremely short MP-related pathlengths ... along with some other cache affinity tricks ... would get an increased cache hit ratio that offset the MP effects (not only offsetting the MP specific pathlengths, but also offsetting the decreased cycle time) and could get two times the throughput of a single processor.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, multiprocessor, and/or compare&swap instruction
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

Virtual Machine SIE instruction

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Virtual Machine SIE instruction
Date: 30 Jan 2022
Blog: Facebook
old email discussing how SIE was going to be much better on trout/3090 and/or the reasons why it was really (really) bad on 3081
https://www.garlic.com/~lynn/2006j.html#email810630
https://www.garlic.com/~lynn/2003j.html#email831118

... ever heard of paging microcode? (of course SIE was never originally intended for production use, just mvs/xa development)

of course for 3081, VMTOOL&SIE were supposed to be only for internal MVS/XA development and never intended for release (originally POK had convinced corporate to kill vm370 and transfer all the people to POK to work/support MVS/XA; Endicott managed to save the vm370 mission, but had to reconstitute a development group from scratch). Amdahl had come out with a (microcode) hypervisor for their processors (an Amdahl single processor was about the same as a two processor 3081K and Amdahl's two processor was faster than IBM's four processor 3084). IBM then found that customers weren't moving to MVS/XA as expected ... and Amdahl had the microcode "hypervisor" (being able to run MVS & MVS/XA concurrently w/o VM; IBM's eventual response was PR/SM & LPAR, which didn't ship until many years later w/3090 in 1988). POK first responds with the hacked VMTOOL as VM/MA (migration aid) and VM/SF (system facility). Eventually POK proposes a couple hundred people to upgrade VMTOOL to the feature/function/performance of VM/370. Endicott has an alternative: a Rochester sysprog had put full XA support into VM370 ... guess who won the political battle?

pr/sm
https://en.wikipedia.org/wiki/PR/SM

360/370 microcode posts
https://www.garlic.com/~lynn/submain.html#mcode

a couple old SIE threads
https://www.garlic.com/~lynn/2011p.html#113 Start Interpretive Execution
https://www.garlic.com/~lynn/2011p.html#114 Start Interpretive Execution
https://www.garlic.com/~lynn/2011p.html#115 Start Interpretive Execution
https://www.garlic.com/~lynn/2011p.html#116 Start Interpretive Execution
https://www.garlic.com/~lynn/2011p.html#117 Start Interpretive Execution
https://www.garlic.com/~lynn/2011p.html#118 Start Interpretive Execution
https://www.garlic.com/~lynn/2012f.html#39 SIE - CompArch
https://www.garlic.com/~lynn/2012f.html#50 SIE - CompArch

some recent posts mentioning SIE, PR/SM, Hypervisor
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2022.html#33 138/148
https://www.garlic.com/~lynn/2022.html#20 Service Processor
https://www.garlic.com/~lynn/2021k.html#119 70s & 80s mainframes
https://www.garlic.com/~lynn/2021k.html#106 IBM Future System
https://www.garlic.com/~lynn/2021k.html#4 IBM 370 and Future System
https://www.garlic.com/~lynn/2021j.html#4 IBM Lost Opportunities
https://www.garlic.com/~lynn/2021i.html#31 What is the oldest computer that could be used today for real work?
https://www.garlic.com/~lynn/2021h.html#91 IBM XT/370
https://www.garlic.com/~lynn/2021e.html#67 Amdahl
https://www.garlic.com/~lynn/2021.html#52 Amdahl Computers
https://www.garlic.com/~lynn/2019c.html#33 IBM Future System
https://www.garlic.com/~lynn/2019b.html#78 IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud Hits Air Pocket
https://www.garlic.com/~lynn/2019b.html#77 IBM downturn
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018e.html#96 The (broken) economics of OSS
https://www.garlic.com/~lynn/2018e.html#30 These Are the Best Companies to Work For in the U.S
https://www.garlic.com/~lynn/2018.html#97 S/360 addressing, not Honeywell 200
https://www.garlic.com/~lynn/2018.html#46 VSE timeline [was: RE: VSAM usage for ancient disk models]
https://www.garlic.com/~lynn/2017k.html#65 Intrigued by IBM
https://www.garlic.com/~lynn/2017j.html#16 IBM open sources it's JVM and JIT code
https://www.garlic.com/~lynn/2017i.html#54 Here's a horrifying thought for all you management types
https://www.garlic.com/~lynn/2017i.html#43 learning Unix, was progress in e-mail, such as AOL
https://www.garlic.com/~lynn/2017e.html#48 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017c.html#88 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2017c.html#81 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2017c.html#80 Great mainframe history(?)
https://www.garlic.com/~lynn/2017c.html#30 The ICL 2900
https://www.garlic.com/~lynn/2017b.html#70 The ICL 2900
https://www.garlic.com/~lynn/2017b.html#37 IBM LinuxONE Rockhopper

--
virtualization experience starting Jan1968, online at home since Mar1970

165/168/3033 & 370 virtual memory

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 165/168/3033 & 370 virtual memory
Date: 30 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#71 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#72 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#73 MVT storage management issues
https://www.garlic.com/~lynn/2022.html#74 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#75 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#76 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#77 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#78 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#79 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#80 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#81 165/168/3033 & 370 virtual memory

... aka I had to redeploy global LRU over the decades; this case was after six (corporate award) OIAs had been given out for the removal of global LRU, and then it had to be put back in.
https://www.garlic.com/~lynn/2006y.html#email860119

note: doing page replacement algorithms as an undergraduate ... also got me into other related LRU algorithms for file caches, dbms caches, controller caches, disk caches.

in the late 70s at SJR, I implemented a super efficient record-level I/O trace ... which was used to feed a cache model that compared file i/o caching for disk level caches, controller level caches, channel level caches and system level caches. For a fixed amount of electronic store, a system level cache always beat dividing it up and spreading it around at the lower levels (which is effectively the same result I found in the 60s for global LRU beating "local LRU" ... i.e. partitioning the cache always required increasing the total amount of electronic store).
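a minimal simulation sketch in C (synthetic, skewed reference trace; not the SJR record-level traces or the actual cache model) comparing one global LRU cache against the same number of slots partitioned per "device"; with skewed activity the partitioned arrangement wastes slots on the less active devices and the global cache gets the higher hit ratio:

#include <stdio.h>
#include <stdlib.h>

#define SLOTS 64
#define DEVICES 4

struct cache { long key[SLOTS]; long stamp[SLOTS]; int size; long clock; };

/* returns 1 on hit; on miss, replaces the least-recently-used slot */
int ref(struct cache *c, long key)
{
    int lru = 0;
    for (int i = 0; i < c->size; i++) {
        if (c->key[i] == key) { c->stamp[i] = ++c->clock; return 1; }
        if (c->stamp[i] < c->stamp[lru]) lru = i;
    }
    c->key[lru] = key; c->stamp[lru] = ++c->clock;
    return 0;
}

int main(void)
{
    struct cache global = { .size = SLOTS };
    struct cache part[DEVICES];
    for (int d = 0; d < DEVICES; d++) part[d] = (struct cache){ .size = SLOTS / DEVICES };
    for (int i = 0; i < SLOTS; i++) global.key[i] = -1;
    for (int d = 0; d < DEVICES; d++)
        for (int i = 0; i < SLOTS / DEVICES; i++) part[d].key[i] = -1;

    long ghits = 0, phits = 0, refs = 200000;
    srand(1);
    for (long i = 0; i < refs; i++) {
        /* 80% of references go to device 0, rest spread over the others */
        int dev = (rand() % 100 < 80) ? 0 : 1 + rand() % (DEVICES - 1);
        long rec = rand() % 40;             /* 40-record working set per device */
        long key = dev * 1000 + rec;
        ghits += ref(&global, key);
        phits += ref(&part[dev], key);
    }
    printf("global LRU hit ratio:      %.3f\n", (double)ghits / refs);
    printf("partitioned LRU hit ratio: %.3f\n", (double)phits / refs);
    return 0;
}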

a side effect of the record level trace analysis ... I realized that there were also file collections with "bursty" activity ... i.e. weekly/monthly reports, etc. ... many of the files weren't otherwise needed except for the collective periodic use (found useful later in things like ADSM).

however, one of the early results was getting into conflicts with Tucson over the 3880 controller caches, aka Ironwood & Sheriff for 3380 disks, i.e. 3880-11 8mbyte 4kbyte record cache and 3880-13 8mbyte full track cache.

from long ago and far away:

Date: 08/07/80 08:24:58
To: wheeler

Lynn:

I work in Aids Development in Poughkeepsie in VM modelling & measurement areas (VMPR, SMART, VMAP). Recently, we have been investigating cache dasd and heard about some mods you made (presumably to IOS) which collects and logs 'mbbcchhr' information. We have received a hit ratio analysis program from xxxxxx, who informed us of your work. The point is that we would like to make a package available to the field, prior to fcs, which would project the effect of adding a cache of a given size. Can you give me your opinion on the usability of such a package. I am presuming that most of the work involves updating and re-loading cp...I would like to take the code and try it myself...can it be run second level?? Appreciate your response...


... snip ... top of post, old email index

Date: 08/07/80 07:22:34
From: wheeler

re: collect mods;

CP mods. are a new module (DMKCOL), a new bit definition in the trace flags, a couple new diagnose codes, a new command, and a hit to DMKPAG (so code can distinguish between cp paging and other I/O) and a hook in dmkios. no problem running code 2nd level.

--

1) collected data is useful for general information about I/O characteristics but there are a lot of other data gatherers which provide almost as much info (seek information, but not down to the record level).

2) I guess I don't understand the relative costs for an installation to justify cache. I would think in most cases a ballpark estimate can be made from other data gatherers. It would seem that unless the cache is going to be relatively expensive this may be something of an overkill.

3) From the stand point of impressing a customer with IBM's technical prowess the hit-ratio curves is a fantastic 'gimmick' for the salesman. Part of my view point may be based on having made too many customer calls, I've seen very few decisions really made on technical merit.

4) Hit-ratio curves may be in several cases a little too concrete. An account team will need additional guidelines (fudge factors) to take into account changing load (/growth).

Will send code. The updates are currently against a sepp 6.8/csl19 system. dmkios update should go in w/o problems. updates for new diagnose and command you will have to adapt to your own system (command & diagnose tables have been greatly changed). Also current mods. have the userid of the data collecting virtual machine hardwired to 'MEASURE'.


... snip ... top of post, old email index

page replacement algorithms
https://www.garlic.com/~lynn/subtopic.html#clock
getting to play disk engineering posts:
https://www.garlic.com/~lynn/subtopic.html#disk

past posts mentioning DMKCOL
https://www.garlic.com/~lynn/2013d.html#11 relative mainframe speeds, was What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012c.html#47 nested LRU schemes
https://www.garlic.com/~lynn/2011.html#71 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#70 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2010i.html#18 How to analyze a volume's access by dataset
https://www.garlic.com/~lynn/2007.html#3 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006y.html#35 The Future of CPUs: What's After Multi-Core?

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Benchmark

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Benchmark
Date: 30 Jan 2022
Blog: Facebook
z196 seems to have been the last machine with real published benchmark numbers (industry standard benchmark, not an actual count of instructions but number of iterations compared to a 370/158 assumed to be one MIP) ... since then things got a lot more obfuscated (like just getting percents compared to previous machines). z196 documents have some statement that 1/3 to 1/2 of the z10->z196 per-processor performance improvement was the introduction of memory-latency compensating technology (that had been in other platforms for a long time): out-of-order execution, branch prediction, etc. In the z196 time-frame, a cloud megadatacenter standard E5-2600 blade clocked at 500BIPS (on the same benchmark, based on number of iterations compared to the 158; aka ten times a max configured z196).

z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017
z15, 190 processors, 190BIPS* (1000MIPS/proc), Sep2019

* pubs say z15 1.25 times z14 (1.25*150BIPS or 190BIPS).


The industry standard MIPS benchmark for decades has been the number of iterations compared to a 370/158 (assumed to be 1MIPS) ... not an actual count of instructions.
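
A minimal sketch of that convention (Python, with made-up iteration rates purely for illustration; nothing here was actually run on these machines):

# hypothetical iteration rates (iterations/second) -- illustrative numbers only
REF_158_RATE = 1000.0              # 370/158 reference rate, defined as 1 MIPS

def benchmark_mips(machine_rate, ref_rate=REF_158_RATE):
    # "MIPS" here = iteration rate relative to the 370/158 reference,
    # not an actual count of instructions executed
    return machine_rate / ref_rate

print(benchmark_mips(625 * 1000.0))    # a machine iterating 625x faster -> 625 "MIPS"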

... other trivia: the e5-2600 blade was two 8-core intel chips or 16 processors, 31BIPS/proc

... more trivia: the latest "peak I/O" benchmark I can find is for z196, 2M IOPS with 104 FICON (running over 104 FCS) ... to "CKD" disks (however, no real CKD disks have been made for decades, all simulated on industry standard fixed-block disks).

... note: In 1980, STL was bursting at the seams and was moving 300 people from the IMS group to an offsite bldg with dataprocessing back to the STL datacenter. I get con'ed into doing channel extender support so they could place channel-attached 3270 controllers at the offsite bldg ... with no perceptible human factors difference between offsite and in STL. The hardware vendor then tries to get IBM to release my support, but there is a group in POK playing with some serial stuff and they get it veto'ed (afraid that if it was in the market, it would make it harder to justify their stuff).

In 1988, LLNL (lawrence livermore national laboratory) is playing with some serial stuff and I'm asked to help them get it standardized, which quickly becomes the fibre-channel standard (including some stuff I had done in 1980). The POK people finally get their stuff released in 1990 with ES/9000 as ESCON when it is already obsolete (i.e. 17mbytes/sec; FCS started at 1gbit/sec links, full-duplex, 2gbit/sec aggregate, 200mbytes/sec). Then some POK people become involved in FCS and define a heavy-weight protocol that drastically reduces the native throughput, eventually released as FICON. There was an FCS announced for the E5-2600 blade claiming over a million IOPS ... two such FCS having higher throughput than the 104 FICON (in the z196 "peak i/o" benchmark).

channel extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

some recent e5-2600 posts
https://www.garlic.com/~lynn/2022.html#76 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#69 IBM Bus&Tag Channels
https://www.garlic.com/~lynn/2022.html#13 Mainframe I/O
https://www.garlic.com/~lynn/2021k.html#122 Mainframe "Peak I/O" benchmark
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021k.html#115 Peer-Coupled Shared Data Architecture
https://www.garlic.com/~lynn/2021k.html#109 Network Systems
https://www.garlic.com/~lynn/2021k.html#53 IBM Mainframe
https://www.garlic.com/~lynn/2021j.html#75 IBM 3278
https://www.garlic.com/~lynn/2021j.html#56 IBM and Cloud Computing
https://www.garlic.com/~lynn/2021j.html#3 IBM Lost Opportunities
https://www.garlic.com/~lynn/2021i.html#92 How IBM lost the cloud
https://www.garlic.com/~lynn/2021i.html#30 What is the oldest computer that could be used today for real work?
https://www.garlic.com/~lynn/2021i.html#16 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021i.html#2 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021h.html#44 OoO S/360 descendants
https://www.garlic.com/~lynn/2021f.html#41 IBM Mainframe
https://www.garlic.com/~lynn/2021f.html#23 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021f.html#18 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021e.html#68 Amdahl
https://www.garlic.com/~lynn/2021d.html#55 Cloud Computing
https://www.garlic.com/~lynn/2021c.html#71 What could cause a comeback for big-endianism very slowly?
https://www.garlic.com/~lynn/2021b.html#64 Early Computer Use
https://www.garlic.com/~lynn/2021b.html#4 Killer Micros
https://www.garlic.com/~lynn/2021b.html#0 Will The Cloud Take Down The Mainframe?
https://www.garlic.com/~lynn/2021.html#55 IBM Quota
https://www.garlic.com/~lynn/2021.html#4 3390 CKD Simulation
https://www.garlic.com/~lynn/2020.html#42 If Memory Had Been Cheaper

--
virtualization experience starting Jan1968, online at home since Mar1970

HSDT SFS (spool file rewrite)

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: HSDT SFS (spool file rewrite)
Date: 30 Jan 2022
Blog: Facebook
Starting in the early 80s, I (also) had HSDT with T1 and faster computer links, just the T1 links running nearly 30 times faster than the (internal) CJNNET backbone 56kbit links ... but I had to get (VM/370) VNET/RSCS (and vm370 tcp/ip) running much faster. VNET/RSCS used a synchronous interface to disk (spool) for staging traffic ... on an otherwise moderately loaded system, this could be an aggregate 5-8 4k records/sec (20-30kbytes/sec or maybe 200-300kbits/sec aggregate, enough for maybe 4-6 56kbit links) ... I would run multiple T1 full-duplex (1.5mbit/direction, 3mbit/full-duplex or 300kbyte/sec/link, 75 4krecords/sec/link). I needed a) an asynchronous interface (so VNET/RSCS could overlap execution while waiting for disk transfers), b) multiple-disk load balancing, and c) contiguous allocation for larger files with large multi-record transfers (including read-ahead & write-behind) for multi-mbyte/sec throughput. What I did was re-implement the vm370 spool functions in pascal running in a virtual address space ... with a whole lot of increased throughput and asynchronous features that VNET/RSCS could use.
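
To make the arithmetic in the above concrete, a small sketch (Python) of the spool-staging rate versus what a single full-duplex T1 needs, using the approximate figures from the post; the post's own per-link numbers are a bit more conservative, presumably allowing for framing/protocol overhead:

rec_bytes = 4096
# synchronous spool staging observed on a moderately loaded system
spool_recs_per_sec = (5, 8)
print([r * rec_bytes for r in spool_recs_per_sec])        # ~20k-33k bytes/sec aggregate
# one full-duplex T1: ~1.5 mbit/sec in each direction
t1_bytes_per_sec = 2 * 1_500_000 / 8                      # ~375k bytes/sec per link
print(t1_bytes_per_sec, t1_bytes_per_sec / rec_bytes)     # ~91 4k-records/sec per link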

In the early 70s, I had written a CMS page-mapped filesystem for CP67 ... with a whole lot of optimization features in the CP kernel ... and then migrated it to VM370/CMS. For the HSDT-SFS (vm spool file) implementation, I was able to take advantage of a lot of the throughput functions that I already had in the VM kernel.

... related: the communication group was furiously fighting off client/server and distributed computing and attempting to block release of mainframe TCP/IP support. When they lost the TCP/IP battle, they changed their tactic and, since they had corporate strategic "ownership" of everything that crossed the datacenter walls, TCP/IP had to be released through them. What shipped got 44kbytes/sec throughput using nearly a whole 3090 processor. I then did the enhancements for RFC1044 and in some tuning tests at Cray Research between a 4341 and a Cray, got sustained 4341 channel throughput using only a modest amount of the 4341 processor (something like a 500 times improvement in bytes transferred per instruction executed).

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
cms page-mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
RFC 1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

some old posts mentioning HSDT-SFS:
https://www.garlic.com/~lynn/2021b.html#61 HSDT SFS (spool file rewrite)
https://www.garlic.com/~lynn/2021b.html#58 HSDT SFS (spool file rewrite)
https://www.garlic.com/~lynn/2011e.html#29 Multiple Virtual Memory
https://www.garlic.com/~lynn/2007c.html#21 How many 36-bit Unix ports in the old days?

--
virtualization experience starting Jan1968, online at home since Mar1970

Virtual Machine SIE instruction

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Virtual Machine SIE instruction
Date: 31 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#82 Virtual Machine SIE instruction

... note both could be correct: endicott goes ahead with xa-mode support in vm370, support based on the rochester work ... and POK justifies a couple hundred people to add vm370 feature/function/performance to vmtool ... and sees the endicott effort as competitive and gets it shut down.

... other trivia ... there were three factions in kingston for the vm/xa scheduler and they were having constant, large meetings to resolve which faction would prevail. I have lots of email exchange with the faction that wanted to add my dynamic adaptive resource management & scheduling (originally done in the 60s as an undergraduate for cp67; then, after the IBM decision to start charging for kernel software, releasing it to customers was the initial guinea pig; aka after joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters. In the morph of cp67->vm370 they dropped and/or simplified a lot of stuff ... I then started in 1974 adding it all back in and by 1975 was shipping "csc/vm" for internal datacenters).

I made some derogatory references to hudson valley being politically oriented ... not fact oriented ... and that with a small fraction of the resources devoted to the meetings, they could have implemented all three proposals and done comparison benchmarks.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
dynamic adaptive resource manager and scheduling
https://www.garlic.com/~lynn/subtopic.html#fairshare

--
virtualization experience starting Jan1968, online at home since Mar1970

370/195

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 370/195
Date: 31 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#31 370/195

The industry standard MIPS benchmark for decades has been the number of iterations compared to a 370/158 (assumed to be 1MIPS) ... not an actual count of instructions.

I somewhat remember somebody claiming that mainframe benchmarks dried up because IBM started claiming copyright and NDA restrictions and threatening legal action; not just MIPS but also TPC benchmarks (at least for mainframes; IBM still does benchmarks for non-mainframes)
http://www.tpc.org/information/who/gray5.asp

there is a story that cache is the new main memory and main memory is the new disk ... if 60s disk latency is measured in number of 60s processor cycles ... it is similar to current memory latency measured in current processor cycles (giving rise to out-of-order execution, speculative execution, etc ... the current hardware equivalent to 60s multi-tasking)

note: low & mid-range 370s were somewhat similar to Hercules .... averaging ten native instructions ("vertical" microcode) for every 370 instruction ... which was behind the ECPS microcode assist originally for virgil/tully (138/148), i.e. kernel code redone in native code with a ten times speedup. May1975, Endicott cons me into doing the analysis ... old post with initial results:
https://www.garlic.com/~lynn/94.html#21

recent reference
https://www.garlic.com/~lynn/2022.html#33 138/148

high-end machines with horizontal microcode were different (because of the way things were implemented in hardware and could overlap operations) ... measured in avg. machine cycles per instruction: 165:2.1, 168:1.6, 3033:1; recent reference
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory

360/370 mcode posts
https://www.garlic.com/~lynn/submain.html#mcode

--
virtualization experience starting Jan1968, online at home since Mar1970

Virtual Machine SIE instruction

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Virtual Machine SIE instruction
Date: 31 Jan 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#82 Virtual Machine SIE instruction
https://www.garlic.com/~lynn/2022.html#86 Virtual Machine SIE instruction

I periodically repeat the story of a senior disk engineer getting a talk scheduled at the internal, world-wide, annual communication group conference in the late 80s, supposedly on 3174 performance, but he opens the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The issue was that the communication group was fiercely fighting off client/server and distributed computing, trying to preserve their dumb terminal paradigm. The disk division was seeing a drop in disk sales with customers moving to more distributed-computing friendly platforms. The disk division had come up with a number of solutions, but the communication group (with their corporate strategic responsibility for everything that crossed the datacenter walls) would veto them.

dumb terminal paradigm posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner

The GPD/Adstar software VP was trying to work around the corporate politics by investing in distributed computing startups that would use IBM disks ... he would periodically ask us to go by his investments and offer assistance.

A few short years later, IBM had gone into the red and was being reorganized into the 13 "baby blues" in preparation for breaking up the company .... the reference has gone behind a paywall but mostly lives free at the wayback machine
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM, but we get a call from the bowels of Armonk asking if we could help with the breakup of the company. Lots of business units were using supplier contracts in other units via MOUs. After the breakup, all of these contracts would be in different companies ... all of those MOUs would have to be cataloged and turned into their own contracts (however, before we get started, the board brings in a new CEO and reverses the breakup).

Also we were hearing from former co-workers that top IBM executives were spending all their time shifting expenses from the following year to the current year. We ask our contact from the bowels of Armonk what was going on. He said that the current year had gone into the red and the executives wouldn't get a bonus. However, if they can shift enough expenses from the following year to the current year, even putting following year just slightly into the black ... the way the executive bonus plan was written, they would get a bonus more than twice as large as any previous bonus (rewarded for taking the company into the red).

IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

165/168/3033 & 370 virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 165/168/3033 & 370 virtual memory
Date: 31 Jan 2022
Blog: Facebook
see for 370 virtual memory decision
https://www.garlic.com/~lynn/2011d.html#73

... ignored CP67 ... but looked at Simpson/Crabtree (HASP fame) virtual memory MFT results. Simpson then leaves IBM for Amdahl and does a clean room re-implementation (after Baybunch meetings Amdahl people would tell me all sorts of tales about what was going on).

... other VM history
http://www.leeandmelindavarian.com/Melinda#VMHist

Les sent me a paper copy of his 1982 SEAS CP/40 talk and I OCR'ed it
https://www.garlic.com/~lynn/cp40seas1982.txt

other background: Future System was spec'ing single-level store, somewhat from tss/360. I had done a page-mapped CMS filesystem for CP67 and deployed it internally (later moved to VM370 for internal use) ... and would pontificate that I learned what not to do from tss/360 (which was part of my ridiculing FS). With the implosion of FS ... any sort of page-mapped filesystem got a bad reputation. The Simpson/Crabtree virtual memory MFT included a page-mapped filesystem (somewhat analogous to what I had done for CMS). VS2 (SVS/MVS) added virtual memory, increasing usable memory ... but kept the OS/360 filesystem (requiring SVC0/EXCP to adopt the CP67 method of copying channel programs and substituting real addresses for virtual addresses). Note: Rochester got away with doing a simplified single-level store for S/38 at the low-end ... but it didn't scale up.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
cms paged mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap

posts in this thread:
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#71 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#72 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#73 MVT storage management issues
https://www.garlic.com/~lynn/2022.html#74 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#75 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#76 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#77 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#78 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#79 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#80 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#81 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#83 165/168/3033 & 370 virtual memory

--
virtualization experience starting Jan1968, online at home since Mar1970

Navy confirms video and photo of F-35 that crashed in South China Sea are real

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Navy confirms video and photo of F-35 that crashed in South China Sea are real
Date: 31 Jan 2022
Blog: Facebook
Navy confirms video and photo of F-35 that crashed in South China Sea are real
https://taskandpurpose.com/news/navy-confirms-video-photo-f35-crash-uss-carl-vinson-south-china-sea/

... note: one of the very early problems identified for the Navy F35 was that the distance from the wheels to the tail hook was too short; the wheels rolling over the arresting wire would depress the wire and the hook would pass over it before it had a chance to bounce back up.
https://en.wikipedia.org/wiki/Arresting_gear

According to the leaked report, the F-35C, the variant developed for the U.S. Navy (and chosen by the UK for its future aircraft carrier), is unable to get aboard a flattop because of its tailhook design issues.
https://theaviationist.com/2012/01/09/f-35c-hook-problems/

military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex

supposedly adversaries already have detailed specs of our major weapons systems, including F35 ... 1st decade of the century ... adversaries easily danced through the networks, harvesting just about everything with such ease that it was difficult to believe that our institutions could be so cyberdumb (speculating that they allowed all the information to be taken on purpose).

Let's Face It--It's the Cyber Era and We're Cyber Dumb
https://medium.com/war-is-boring/lets-face-it-its-the-cyber-era-and-were-cyber-dumb-30a00a8d29ad
'Hack the Air Force' bug hunting challenge uncovers 120 flaws in websites and services
https://www.zdnet.com/article/hack-the-air-force-bug-hunting-challenge-uncovers-120-flaws-in-websites-and-services/
A list of the U.S. weapons designs and technologies compromised by hackers
https://www.washingtonpost.com/world/national-security/a-list-of-the-us-weapons-designs-and-technologies-compromised-by-hackers/2013/05/27/a95b2b12-c483-11e2-9fe2-6ee52d0eb7c1_story.html
Chinese Hackers Stole Boeing, Lockheed Military Plane Secrets: Feds
http://www.nbcnews.com/news/investigations/chinese-hackers-stole-boeing-lockheed-military-plane-secrets-feds-n153951
Confidential report lists U.S. weapons system designs compromised by Chinese cyberspies
https://www.washingtonpost.com/world/national-security/confidential-report-lists-us-weapons-system-designs-compromised-by-chinese-cyberspies/2013/05/27/a42c3e1c-c2dd-11e2-8c3b-0b5e9247e8ca_story.html
NSA Details Chinese Cyber Theft of F-35, Military Secrets
http://freebeacon.com/national-security/nsa-details-chinese-cyber-theft-of-f-35-military-secrets/
REPORT: Chinese Hackers Stole Plans For Dozens Of Critical US Weapons Systems
http://www.businessinsider.com/china-hacked-us-military-weapons-systems-2013-5
Report: China gained U.S. weapons secrets using cyber espionage
http://www.cnn.com/2013/05/28/world/asia/china-cyberespionage/
FBI: Chinese hacker accessed gold mine of data on F-22, F-35 and 32 U.S. military projects
http://www.washingtontimes.com/news/2014/jul/16/fbi-chinese-hacker-accessed-gold-mine-data-f-22-f-/

some past cyber dumb posts
https://www.garlic.com/~lynn/2019d.html#42 Defense contractors aren't securing sensitive information, watchdog finds
https://www.garlic.com/~lynn/2019b.html#69 Contractors Are Giving Away America's Military Edge
https://www.garlic.com/~lynn/2019.html#27 The American Military Sucks at Cybersecurity; A new report from US military watchdogs outlines hundreds of cybersecurity vulnerabilities
https://www.garlic.com/~lynn/2019.html#22 The American Military Sucks at Cybersecurity; A new report from US military watchdogs outlines hundreds of cybersecurity vulnerabilities
https://www.garlic.com/~lynn/2018f.html#100 US Navy Contractors Hacked by China "More Than A Handful Of Times"
https://www.garlic.com/~lynn/2018d.html#52 Chinese Government Hackers Have Successfully Stolen Massive Amounts Of Highly Sensitive Data On U.S. Submarine Warfare
https://www.garlic.com/~lynn/2018c.html#60 11 crazy up-close photos of the F-22 Raptor stealth fighter jet soaring through the air
https://www.garlic.com/~lynn/2018c.html#26 DoD watchdog: Air Force failed to effectively manage F-22 modernization
https://www.garlic.com/~lynn/2018b.html#112 How China Pushes the Limits on Military Technology Transfer
https://www.garlic.com/~lynn/2018b.html#86 Lawmakers to Military: Don't Buy Another 'Money Pit' Like F-35
https://www.garlic.com/~lynn/2018.html#69 The Next New Military Specialty Should Be Software Developers
https://www.garlic.com/~lynn/2017j.html#44 Security Breach and Spilled Secrets Have Shaken the N.S.A. to Its Core
https://www.garlic.com/~lynn/2017i.html#56 China's mega fortress in Djibouti could be model for its bases in Pakistan
https://www.garlic.com/~lynn/2017i.html#51 Russian Hackers Stole NSA Data on U.S. Cyber Defense
https://www.garlic.com/~lynn/2017g.html#78 This Afghan War Plan By The Guy Who Founded Blackwater Should Scare The Hell Out Of You
https://www.garlic.com/~lynn/2017e.html#77 Time to sack the chief of computing in the NHS?
https://www.garlic.com/~lynn/2017e.html#73 More Cyberdumb
https://www.garlic.com/~lynn/2017e.html#50 A flaw in the design; The Internet's founders saw its promise but didn't foresee users attacking one another
https://www.garlic.com/~lynn/2017c.html#47 WikiLeaks CIA Dump: Washington's Data Security Is a Mess
https://www.garlic.com/~lynn/2017c.html#34 CBS News: WikiLeaks claims to release thousands of CIA documents of computer activity
https://www.garlic.com/~lynn/2017c.html#15 China's claim it has 'quantum' radar may leave $17 billion F-35 naked
https://www.garlic.com/~lynn/2016h.html#67 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016h.html#28 China's spies gain valuable US defense technology: report
https://www.garlic.com/~lynn/2016h.html#0 Snowden
https://www.garlic.com/~lynn/2016f.html#104 How to Win the Cyberwar Against Russia
https://www.garlic.com/~lynn/2016b.html#95 Computers anyone?
https://www.garlic.com/~lynn/2016b.html#91 Computers anyone?
https://www.garlic.com/~lynn/2016b.html#20 DEC and The Americans
https://www.garlic.com/~lynn/2016b.html#19 Does Cybercrime Really Cost $1 Trillion?
https://www.garlic.com/~lynn/2016b.html#8 Cyberdumb
https://www.garlic.com/~lynn/2016b.html#4 Cyberdumb

--
virtualization experience starting Jan1968, online at home since Mar1970

ECPS Microcode Assist

From: Lynn Wheeler <lynn@garlic.com>
Subject: ECPS Microcode Assist
Date: 31 Jan 2022
Blog: Facebook
circa 1980 there was a program to convert a large variety of internal microprocessors to 801/RISC, low/mid-range 370s, controllers, s38 follow-on, etc. For various reasons the efforts floundered and things returned to CISC.

801/risc, iliad, romp, rios, pc/rt, rs/6000 posts
https://www.garlic.com/~lynn/subtopic.html#801

I contributed to a white paper showing that instead of 801/risc for the 4361/4381 follow-on to the 4331/4341, VLSI technology had advanced to the point where nearly the whole 370 instruction set could be implemented directly in hardware (only a few exceptions).

Earlier, May1975, Endicott had con'ed me into helping with the ECPS microcode assist for 138/148. The 135/145 averaged 10 native/microcode instructions for every 370 instruction. They wanted the 6k bytes of most-executed operating system code identified for moving to microcode with a ten times speedup. Old post with the initial analysis showing 6kbytes of kernel 370 instructions accounted for 80% of kernel cpu time:
https://www.garlic.com/~lynn/94.html#21
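
A minimal sketch of that kind of analysis (Python, with hypothetical module names and sample values; not the actual 1975 data): sort kernel modules by measured CPU fraction and accumulate until the target fraction is covered, tracking how many bytes of code that takes:

# hypothetical (module, code bytes, percent of kernel CPU) samples -- illustrative only;
# the long tail of the kernel (many more modules, tiny percentages each) is omitted
profile = [("DMKDSP", 1200, 22), ("DMKPRV", 900, 18), ("DMKPTR", 1500, 15),
           ("DMKPAG", 800, 12), ("DMKFRE", 700, 8), ("DMKSCH", 600, 5),
           ("DMKVIO", 900, 4), ("DMKCCW", 1100, 3)]

def hottest_code(profile, target_pct=80):
    # accumulate the highest-CPU modules first until the target percent of
    # kernel CPU time is covered; return (bytes of code, percent covered)
    total_bytes, covered = 0, 0
    for name, nbytes, pct in sorted(profile, key=lambda e: -e[2]):
        if covered >= target_pct:
            break
        total_bytes += nbytes
        covered += pct
    return total_bytes, covered

print(hottest_code(profile))    # -> (5700, 80): roughly 6kbytes of code for ~80% of kernel CPU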

360/370 mcode posts
https://www.garlic.com/~lynn/submain.html#mcode

--
virtualization experience starting Jan1968, online at home since Mar1970

Processor, DASD, VTAM & TCP/IP performance

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Processor, DASD, VTAM & TCP/IP performance
Date: 31 Jan 2022
Blog: Facebook
In the early 80s, I wrote a memo that between the introduction of 360s and the 3081, the relative system throughput of disks had declined by a factor of ten (i.e. processors got 40-50 times faster, disks only got 3-5 times faster). The example used was cp67 on a 360/67 with 80 users versus vm370 on a 3081 typically with 300-400 users (i.e. if the user count had tracked the increase in MIP rate it would be more like 4000 users ... not 400 users).

A GPD/disk division executive took exception and assigned the division performance group to refute my claim. After a couple weeks, they came back and basically said I had slightly understated the "problem". That analysis was respun into a SHARE presentation on configuring disk for better system throughput (16Aug1984, SHARE 63, B874).
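
The back-of-envelope arithmetic behind that claim, as a small sketch (Python, using the ranges quoted in the memo above):

cpu_speedup  = (40, 50)     # 360 -> 3081 processor throughput increase
disk_speedup = (3, 5)       # disk throughput increase over the same period
print([d / c for d, c in zip(disk_speedup, cpu_speedup)])   # ~0.075-0.10: disks lost roughly 10x relative
# user-count sanity check: 80 cp67/360-67 users scaled by MIP rate would be ~80*50 = 4000,
# but 3081 vm370 systems typically ran 300-400 users -- consistent with disk-limited scaling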

dasd, fba, multi-track search, etc posts
https://www.garlic.com/~lynn/submain.html#dasd

In the middle 80s, the communication group was fighting hard to prevent the release of mainframe TCP/IP support. When that failed, they switched gears and said that since they had corporate strategic ownership of everything that crossed the datacenter walls, TCP/IP had to be released through them. What shipped got an aggregate of 44kbytes/sec using nearly a whole 3090 processor. I did the support for RFC1044 and in some tuning tests at Cray Research between a 4341 and a Cray, got sustained 4341 channel throughput using only a modest amount of the 4341 processor (something like a 500 times improvement in bytes moved per instruction executed)

rfc1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

In the late 80s there was a study comparing VTAM to unix tcp/ip .... unix tcp/ip had a 5000 instruction pathlength to do the equivalent of a VTAM LU6.2 operation ... which required 160,000 instructions.

In the early 90s, the communication group hired a silicon valley contractor to implement TCP/IP directly in VTAM. What was demo'ed had TCP running significantly faster than LU6.2. He was then told that *everybody* *KNOWS* that LU6.2 is much faster than a *PROPER* tcp/ip implementation and they would only be paying for a *PROPER* implementation.

internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

HSDT Pitches

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: HSDT Pitches
Date: 31 Jan 2022
Blog: Facebook
from long ago and far away:

Date: 02/04/87 09:08:06 PST
From: wheeler

I've been invited to Share VM workshop to give a talk (held at Asilomar the week of 2/23) ... I have my SEAS presentation, History of VM Performance.

I'm also scheduled to give a talk on the prototype HSDT spool file system at the VMITE (week of 3/3).

After the ECOT meeting, big pieces of HSDT are up in the air although it is being taken to Krowe tomorrow to see about direct corporate funding.

btw, re: satellites; I don't know if I mentioned but we double checked on some of the satellite uses ... Latenbach must not have talked directly &/or with correct people on whether or not BofA was going with satellites.

Did you read HSDT027 on 9370 networking, I've gotten several inquiries both from Endicott and Raleigh to discuss it.


... snip ... top of post, old email index, NSFNET email

Date: 02/04/87 12:32:44 PST
From: wheeler

looks like i may also be giving the hsdt-wan (hsdt023) talk also at share vm workshop. i've given talk before outside and inside ibm several times (hsdt-wan has been presented to baybunch, several universities, and head of nsf).


... snip ... top of post, old email index, NSFNET email

The History Presentation was made at SEAS 5-10Oct1986 (European SHARE, IBM mainframe user group); I gave it most recently at the WashDC Hillgang user group 16Mar2011
https://www.garlic.com/~lynn/hill0316g.pdf
recent history presentation posts
https://www.garlic.com/~lynn/2021g.html#46 6-10Oct1986 SEAS
https://www.garlic.com/~lynn/2021e.html#65 SHARE (& GUIDE)

recent HSDT SFS posts
https://www.garlic.com/~lynn/2022.html#85 HSDT SFS (spool file rewrite)
https://www.garlic.com/~lynn/2021j.html#26 Programming Languages in IBM
https://www.garlic.com/~lynn/2021g.html#37 IBM Programming Projects
https://www.garlic.com/~lynn/2021b.html#61 HSDT SFS (spool file rewrite)
https://www.garlic.com/~lynn/2021b.html#58 HSDT SFS (spool file rewrite)

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

--
virtualization experience starting Jan1968, online at home since Mar1970

VM/370 Interactive Response

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: VM/370 Interactive Response
Date: 01 Feb 2022
Blog: Facebook
... my scheduler would handle interactive response pretty much regardless of cpu utilization ... it was all I/O related.

dynamic adaptive resource manager and scheduler posts
https://www.garlic.com/~lynn/subtopic.html#fairshare

3033 was supposedly a 4.5MIP machine ... claims were that the (initial) 3081D was two 5MIP processors ... but lots of 3081D benchmarks (running on one processor) were slower than 3033. Fairly quickly they doubled the processor cache size and came out with the 3081K ... claiming each processor was 7MIPS ... but many single-processor benchmarks were about the same as 3033 (so 3081K was closer to 9-10MIPS, not 14MIPS)

... as in the original post ... the number of users going from 360/67 CP67/CMS to the 3081K was proportional to the disk throughput increase, not the processor throughput increase

Also about that time a co-worker left IBM SJR and was doing contracting work in silicon valley, lots of it for a major VLSI company. He did a lot of work on the AT&T C compiler (bug fixes and code optimization) getting it running on CMS ... and then ported a lot of the BSD chip tools to CMS. One day the IBM rep came through and asked him what he was doing ... he said ethernet support for using SGI workstations as graphical frontends. The IBM rep told him that instead he should be doing token-ring support or otherwise the company might not find its mainframe support as timely as it had been in the past. I then get an hour-long phone call listening to four letter words. The next morning the senior VP of engineering calls a press conference to say the company is completely moving off all IBM mainframes to SUN servers.

IBM then had a number of taskforces to investigate why silicon valley was moving off IBM mainframes, but they weren't allowed to consider the original motivation.

I had done a lot of additional performance work over and above what was shipped to customers in the release 3.4 Resource Manager & Scheduler addon ... mid-80s VM370 performance talk given at lots of IBM user group meetings
https://www.garlic.com/~lynn/hill0316g.pdf

and had internal 3081K systems with 300-400 users running 100% cpu utilization and .11sec trivial response, and other similar VM370 systems getting .25sec response. There were lots of studies showing human productivity increases with .25sec response ... and it was rare for even the best MVS/TSO to get 1sec response. However, the 3272/3277 had .086sec hardware response (so a human sees .11 system + .086 = .197sec response). The issue was that the latest 3270s were 3274/3278, where a lot of electronics had moved from the terminal back to the controller (reducing manufacturing costs), enormously increasing coax protocol chatter and latency, driving hardware response to .3-.5secs (proportional to the amount of data in screen operations).
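
A quick sketch (Python) of the perceived-response arithmetic in that paragraph; the figures are the ones quoted above, with user-perceived response taken as system response plus terminal/controller hardware response:

def perceived(system_resp, terminal_resp):
    # user-perceived trivial response = system response + terminal hardware response
    return system_resp + terminal_resp

print(perceived(0.11, 0.086))                        # 3272/3277: just under .2 sec, below the .25 sec threshold
print(perceived(0.11, 0.3), perceived(0.11, 0.5))    # 3274/3278: roughly .4-.6 sec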

original post and thread
https://www.garlic.com/~lynn/2022.html#89 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#83 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#81 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#80 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#79 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#77 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#76 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#75 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#74 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#72 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#71 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory

--
virtualization experience starting Jan1968, online at home since Mar1970

Latency and Throughput

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Latency and Throughput
Date: 01 Feb 2022
Blog: Facebook
more than you might ever want to know: I started the HSDT project in the early 80s, T1 and faster computer links ... including having a T1 satellite link from the Los Gatos lab to Clementi's E&S lab in Kingston,
https://en.wikipedia.org/wiki/Enrico_Clementi
which had a whole boatload of FPS (mini-)supercomputers with 40mbyte/sec disk arrays
https://en.wikipedia.org/wiki/Floating_Point_Systems

I was also working with the director of NSF and was supposed to get $20M to interconnect the NSF supercomputer centers ... then congress cuts the budget, some other things happen and eventually an RFP is released. Preliminary Announcement (28Mar1986)
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

... internal IBM politics prevent us from bidding on the RFP (in part based on what we already had running). The NSF director tries to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies, but that just makes the internal politics worse (as did claims that what we already had running was at least 5yrs ahead of the winning bid). The winning bid doesn't even install the T1 links called for ... they are 440kbit/sec links ... but apparently to make it look like it is meeting the requirements, they install telco multiplexors with T1 trunks (running multiple links/trunk). We periodically ridicule them, asking why they don't call it a T5 network (because some of those T1 trunks would in turn be multiplexed over T3 or even T5 trunks). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet
https://www.technologyreview.com/s/401444/grid-computing/

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

... while doing high-speed links ... I was also working on various processor clusters with national labs (had been doing it on&off dating back to getting con'ed into doing a CDC6600 benchmark on an engineering 4341 for a national lab that was looking at getting 70 for a compute farm).

The last product we did at IBM was HA/CMP ... it started out as HA/6000 for the NYTimes to move their newspaper system (ATEX) from VAXcluster to IBM rs/6000. I rename it HA/CMP (High Availability Cluster Multi-Processing) after starting to do technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors. This also gets us into ongoing conflicts with the Kingston supercomputer center, claiming I shouldn't be allowed to talk to national labs (although the most significant thing they seemed to be doing was helping finance Chen's supercomputer company). End of Oct1991, the senior VP that had been supporting the Kingston supercomputer center retires and the projects he had been financing are being audited. We then see an announcement for an internal supercomputing technology conference for Jan1992 (we presume it is trolling the company for supercomputing technology).

Old post referencing an early Jan1992 meeting in Ellison's (Oracle CEO) conference room on commercial cluster scale-up: 16-processor clusters mid1992, 128-processor clusters ye1992.
https://www.garlic.com/~lynn/95.html#13

In 1991 I was also participating in the NII meetings at LLNL ... including working with (NSC VP) Gary Christensen
https://en.wikipedia.org/wiki/National_Information_Infrastructure
I was also doing the HA/CMP product and working with LLNL and other national labs on technical/scientific cluster scale-up; the LLNL work also involved porting the LLNL filesystem to HA/CMP. Old email about not being able to make a LLNL NII meeting; Gray fills in for me and then comes by and updates me on what went on.
https://www.garlic.com/~lynn/2006x.html#email920129
within something like possibly hours of that email, cluster scale-up is transferred, announced as an IBM supercomputer, and we are told we can't work with anything having more than four processors (we leave IBM a few months later).

NSC
https://en.wikipedia.org/wiki/Network_Systems_Corporation
trivia: NSC was formed by Thornton, Gary and some other CDC people. NSC was later acquired by STK, which was acquired by SUN, which was acquired by ORACLE.

Computerworld news 17feb1992 (from wayback machine) ... IBM establishes laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7
17feb92 ibm supercomputer press ... for scientific/technical *ONLY*
https://www.garlic.com/~lynn/2001n.html#6000clusters1
11May92 press, cluster supercomputing caught IBM by *SURPRISE*
https://www.garlic.com/~lynn/2001n.html#6000clusters2
15Jun1992 press, cluster computers, mentions IBM plans to "demonstrate" mainframe 32-microprocessor later 1992, is that tightly-coupled or loosely-coupled?
https://www.garlic.com/~lynn/2001n.html#6000clusters3

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

trivia: late 90s (well after leaving IBM), Chen is CTO at Sequent and I do some consulting for him (before IBM buys Sequent and shuts it down) ...
https://en.wikipedia.org/wiki/Sequent_Computer_Systems
... earlier, while at IBM, I had (also) been involved in SCI (that Sequent used for NUMA-Q)
https://en.wikipedia.org/wiki/Scalable_Coherent_Interface

... other LLNL & latency drift. In 1980, STL was bursting at the seams and they were moving 300 people from the IMS group to an offsite bldg with service back to the STL datacenter. I get tasked to do channel extender support so they can place channel-attached 3270 controllers at the offsite bldg, with no perceived difference between in-house and offsite human factors/response. A side effect was that the (really slow, high channel busy) 3270 controllers had been spread around all the mainframe (disk) channels; moving the channel-attached 3270 controllers offsite and replacing them with a really high-speed box (for all the 3270 activity) drastically cut the 3270-related channel busy (for the same amount of traffic) ... allowing more disk throughput and improving overall system throughput by 10-15%.

The hardware vendor then wants IBM to release my support, but there is a group in POK playing with some serial stuff and they get it vetoed (afraid that if it was in the market, it would make getting their stuff released more difficult).

In 1988, I'm asked to help LLNL get some serial stuff they are playing with standardized ... which quickly becomes the fibre channel standard (including some stuff I had done in 1980). Then the POK people get their stuff released in 1990 with ES/9000 as ESCON (when it is already obsolete, 17mbyte/sec ... FCS initially 1gbit/sec, full-duplex, 2gbit/sec aggregate, 200mbytes/sec). Then some POK people become involved with FCS and define a heavy-weight protocol that drastically reduces the native throughput and is eventually released as FICON.

The most recent benchmark I've found is the max configured z196 "peak I/O" benchmark getting 2M IOPS using 104 FICON (running over 104 FCS). About the same time there was an FCS announced for the E5-2600 blade claiming over a million IOPS (two such FCS having higher throughput than the 104 FICON running over 104 FCS).
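
A small sketch (Python) of the per-link comparison implied there, using just the published peak-I/O figures quoted above:

z196_iops, ficon_links = 2_000_000, 104
e5_fcs_iops = 1_000_000                  # single FCS claimed for the E5-2600 blade
per_ficon = z196_iops / ficon_links
print(per_ficon)                         # ~19,230 IOPS per FICON (each running over an FCS)
print(e5_fcs_iops / per_ficon)           # one such FCS ~52x a FICON; two exceed all 104 combined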

channel extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
ficon posts
https://www.garlic.com/~lynn/submisc.html#ficon

recent related posts and threads
https://www.garlic.com/~lynn/2022.html#94 VM/370 Interactive Response
https://www.garlic.com/~lynn/2022.html#93 HSDT Pitches
https://www.garlic.com/~lynn/2022.html#92 Processor, DASD, VTAM & TCP/IP performance
https://www.garlic.com/~lynn/2022.html#89 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#88 Virtual Machine SIE instruction
https://www.garlic.com/~lynn/2022.html#86 Virtual Machine SIE instruction
https://www.garlic.com/~lynn/2022.html#85 HSDT SFS (spool file rewrite)
https://www.garlic.com/~lynn/2022.html#84 Mainframe Benchmark
https://www.garlic.com/~lynn/2022.html#83 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#82 Virtual Machine SIE instruction
https://www.garlic.com/~lynn/2022.html#81 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#80 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#79 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#78 HSDT, EARN, BITNET, Internet
https://www.garlic.com/~lynn/2022.html#77 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#76 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#75 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#74 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#73 MVT storage management issues
https://www.garlic.com/~lynn/2022.html#72 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#71 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#69 IBM Bus&Tag Channels
https://www.garlic.com/~lynn/2022.html#67 HSDT, EARN, BITNET, Internet
https://www.garlic.com/~lynn/2022.html#66 HSDT, EARN, BITNET, Internet
https://www.garlic.com/~lynn/2022.html#17 Mainframe I/O
https://www.garlic.com/~lynn/2022.html#15 Mainframe I/O
https://www.garlic.com/~lynn/2022.html#14 Mainframe I/O
https://www.garlic.com/~lynn/2022.html#13 Mainframe I/O

--
virtualization experience starting Jan1968, online at home since Mar1970

370/195

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 370/195
Date: 01 Feb 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#87 370/195
https://www.garlic.com/~lynn/2022.html#64 370/195
https://www.garlic.com/~lynn/2022.html#60 370/195
https://www.garlic.com/~lynn/2022.html#41 370/195
https://www.garlic.com/~lynn/2022.html#31 370/195

mainframe processors this century

z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017
z15, 190 processors, 190BIPS* (1000MIPS/proc), Sep2019

* pubs say z15 1.25 times z14 (1.25*150BIPS or 190BIPS)


Early in the century, "MIPS" came from running an industry standard benchmark program, i.e. the number of iterations compared to 370/158 iterations (assumed to be 1MIPS) ... not actual instruction count. Later in the century, I increasingly had to use throughput percent change from earlier models.

z196 was 80 processors, 50BIPS (a BIPS being a thousand MIPS) or 625MIPS/processor ... max configured $30M, or $600,000/BIPS

At the same time, the e5-2600 blade (two 8-core intel chips) was 500BIPS (ten times z196, same industry standard benchmark program, not actual instruction count); the IBM base list price was $1815, or $3.60/BIPS (this was before IBM sold off its server/blade business).

Note: cloud vendors have been claiming for quite a while that they've been assembling their own blades at 1/3rd the cost of major vendor blades (i.e. $1.20/BIPS).
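
The price/performance arithmetic as a sketch (Python, using only the figures quoted above):

z196_price, z196_bips = 30_000_000, 50      # max configured z196
e5_price, e5_bips = 1_815, 500              # IBM base list price, same benchmark
print(z196_price / z196_bips)               # $600,000/BIPS
print(e5_price / e5_bips)                   # ~$3.6/BIPS
print((e5_price / 3) / e5_bips)             # ~$1.2/BIPS for cloud self-assembled blades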

IBM sells off its server/blade business shortly after the industry press says that major server chip vendors were shipping at least half their product directly to cloud operators; cloud operators turning computer hardware into a commodity business ... they had so commoditized server costs that power & cooling was increasingly becoming the major cost for cloud megadatacenters (and they were increasingly applying pressure on chip vendors to improve chip power efficiency).

cloud megadatacenters
https://www.garlic.com/~lynn/submisc.html#megadatacenter

--
virtualization experience starting Jan1968, online at home since Mar1970

9/11 and the Road to War

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 9/11 and the Road to War
Date: 01 Feb 2022
Blog: Facebook
9/11 and the Road to War
https://www.newslooks.com/a-life-in-conflict-one-marines-journey-part-iv/

CIA Director Colby wouldn't approve the "Team B" analysis (exaggerated USSR military capability) and Rumsfeld got Colby replaced with Bush, who would approve the "Team B" analysis (justifying a huge DOD spending increase). After getting Colby replaced, Rumsfeld resigns as white house chief of staff to become SECDEF (and is replaced by his assistant Cheney)
https://en.wikipedia.org/wiki/Team_B
Then in the 80s, former CIA director H.W. is VP; he and Rumsfeld are involved in supporting Iraq in the Iran/Iraq war
http://en.wikipedia.org/wiki/Iran%E2%80%93Iraq_War
including WMDs (note picture of Rumsfeld with Saddam)
http://en.wikipedia.org/wiki/United_States_support_for_Iraq_during_the_Iran%E2%80%93Iraq_war

VP and former CIA director repeatedly claims no knowledge of
http://en.wikipedia.org/wiki/Iran%E2%80%93Contra_affair

because he was the fulltime administration point person deregulating the financial industry ... creating the S&L crisis
http://en.wikipedia.org/wiki/Savings_and_loan_crisis
along with other members of his family
http://en.wikipedia.org/wiki/Savings_and_loan_crisis#Silverado_Savings_and_Loan
and another
http://query.nytimes.com/gst/fullpage.html?res=9D0CE0D81E3BF937A25753C1A966958260

In the early 90s, H.W. is president and Cheney is SECDEF. A sat. photo recon analyst told the white house that Saddam was marshaling forces to invade Kuwait. The white house said that Saddam would do no such thing and proceeded to discredit the analyst. Later the analyst informed the white house that Saddam was marshaling forces to invade Saudi Arabia; now the white house has to choose between Saddam and the Saudis.
https://www.amazon.com/Long-Strange-Journey-Intelligence-ebook/dp/B004NNV5H2/

... roll forward ... Bush2 is president and presides over the huge cut in taxes, huge increase in spending, explosion in debt, the economic mess (70 times larger than his father's S&L crisis) and the forever wars; Cheney is VP, Rumsfeld is SECDEF, and one of the Team B members is deputy SECDEF (and a major architect of Iraq policy).
https://en.wikipedia.org/wiki/Paul_Wolfowitz

Before the Iraq invasion, the cousin of white house chief of staff Card was dealing with the Iraqis at the UN and was given evidence that the WMDs (tracing back to the US in the Iran/Iraq war) had been decommissioned. The cousin shared it with Card and others ... and then is locked up in a military hospital; the book was published in 2010 (4yrs before the decommissioned WMDs were declassified)
https://www.amazon.com/EXTREME-PREJUDICE-Terrifying-Story-Patriot-ebook/dp/B004HYHBK2/

NY Times series from 2014: the decommissioned WMDs (tracing back to the US from the Iran/Iraq war) had been found early in the invasion, but the information was classified for a decade
http://www.nytimes.com/interactive/2014/10/14/world/middleeast/us-casualties-of-iraq-chemical-weapons.html

note: the military-industrial complex had wanted a war so badly that corporate reps were telling former eastern bloc countries that if they voted for the IRAQ2 invasion in the UN, they would get membership in NATO and (directed appropriation) USAID (which can *ONLY* be used for the purchase of modern US arms, aka additional congressional gifts to the MIC not in the DOD budget). From the law of unintended consequences: the invaders were told to bypass ammo dumps while looking for WMDs; when they got around to going back, over a million metric tons had evaporated (showing up later in IEDs)
https://www.amazon.com/Prophets-War-Lockheed-Military-Industrial-ebook/dp/B0047T86BA/

... from the truth-is-stranger-than-fiction and unintended-consequences-that-come-back-to-bite-you department, much of radical Islam & ISIS can be considered our own fault; VP Bush in the 80s
https://www.amazon.com/Family-Secrets-Americas-Invisible-Government-ebook/dp/B003NSBMNA/
pg292/loc6057-59:

There was also a calculated decision to use the Saudis as surrogates in the cold war. The United States actually encouraged Saudi efforts to spread the extremist Wahhabi form of Islam as a way of stirring up large Muslim communities in Soviet-controlled countries. (It didn't hurt that Muslim Soviet Asia contained what were believed to be the world's largest undeveloped reserves of oil.)

... snip ...

Saudi radical extremist Islam/Wahhabism loosened on the world ... bin Laden & 15 of the 19 9/11 hijackers were Saudis (some claim that 95% of extreme Islamic world terrorism is Wahhabi related)
https://en.wikipedia.org/wiki/Wahhabism

Mattis, somewhat more PC (politically correct)
https://www.amazon.com/Call-Sign-Chaos-Learning-Lead-ebook/dp/B07SBRFVNH/
pg21/loc349-51:

Ayatollah Khomeini's revolutionary regime took hold in Iran by ousting the Shah and swearing hostility against the United States. That same year, the Soviet Union was pouring troops into Afghanistan to prop up a pro-Russian government that was opposed by Sunni Islamist fundamentalists and tribal factions. The United States was supporting Saudi Arabia's involvement in forming a counterweight to Soviet influence.

... snip ...

and internal CIA
https://www.amazon.com/Permanent-Record-Edward-Snowden-ebook/dp/B07STQPGH6/
pg133/loc1916-17:

But al-Qaeda did maintain unusually close ties with our allies the Saudis, a fact that the Bush White House worked suspiciously hard to suppress as we went to war with two other countries.

... snip ...

The Danger of Fibbing Our Way into War. Falsehoods and fat military budgets can make conflict more likely
https://www.pogo.org/analysis/2020/01/the-danger-of-fibbing-our-way-into-war/
The Day I Realized I Would Never Find Weapons of Mass Destruction in Iraq
https://www.nytimes.com/2020/01/29/magazine/iraq-weapons-mass-destruction.html

The Deep State (US administration behind formation of ISIS)
https://www.amazon.com/Deep-State-Constitution-Shadow-Government-ebook/dp/B00W2ZKIQM/
pg190/loc3054-55:

In early 2001, just before George W. Bush's inauguration, the Heritage Foundation produced a policy document designed to help the incoming administration choose personnel

pg191/loc3057-58:

In this document the authors stated the following: "The Office of Presidential Personnel (OPP) must make appointment decisions based on loyalty first and expertise second,

pg191/loc3060-62:

Americans have paid a high price for our Leninist personnel policies, and not only in domestic matters. In important national security concerns such as staffing the Coalition Provisional Authority, a sort of viceroyalty to administer Iraq until a real Iraqi government could be formed, the same guiding principle of loyalty before competence applied.

... snip ...

... including: kicking hundreds of thousands of former soldiers out on the streets created ISIS ... and bypassing the ammo dumps (looking for fictitious/fabricated WMDs) gave them over a million metric tons (for IEDs).

team b posts
https://www.garlic.com/~lynn/submisc.html#team.b
s&l crisis posts
https://www.garlic.com/~lynn/submisc.html#s&l.crisis
military-industrial(-congressional) complex
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
perpetual war posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war
wmd posts
https://www.garlic.com/~lynn/submisc.html#wmds

--
virtualization experience starting Jan1968, online at home since Mar1970

Virtual Machine SIE instruction

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Virtual Machine SIE instruction
Date: 02 Feb 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#82 Virtual Machine SIE instruction
https://www.garlic.com/~lynn/2022.html#86 Virtual Machine SIE instruction
https://www.garlic.com/~lynn/2022.html#88 Virtual Machine SIE instruction

in another ibm mainframe group, old email about why the SIE instruction for trout/3090 was going to be much better than on 3081 ... for 3081 it was when POK was still working on killing the vm370 product and the VMTOOL/SIE was purely for internal MVS/XA development ... and the 3081 had to "page" the microcode for SIE (on entry & exit). old email from long ago and far away:
https://www.garlic.com/~lynn/2006j.html#email810630
some other email about the trout "1.5" virtual memory "STO" (i.e. Segment Table Origin address)
https://www.garlic.com/~lynn/2003j.html#email831118

When FS imploded there was a mad rush to get stuff back into the 370 product pipeline, including kicking off the quick&dirty 3033 and 3081 in parallel. I had been dealing with the 3033 processor engineers ... and once the 3033 was out the door, they started on trout/3090

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

the discussion started out about the FBA DASD required for 3081 (3310) and 3090 (3370s) ... even when the operating system didn't have FBA support ... i.e. for the service processors ... which the FEs required once circuits (that would require fault diagnosis) were encapsulated in TCMs.

other recent posts discussing service processors
https://www.garlic.com/~lynn/2022.html#20 Service Processor
https://www.garlic.com/~lynn/2022.html#36 Error Handling

DASD, CKD, FBA, multi-track search, etc ... posts
https://www.garlic.com/~lynn/submain.html#dasd

--
virtualization experience starting Jan1968, online at home since Mar1970

Science Fiction is a Luddite Literature

From: Lynn Wheeler <lynn@garlic.com>
Subject: Science Fiction is a Luddite Literature
Date: 02 Feb 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#50 Science Fiction is a Luddite Literature

John Flannery, the man hired to fix General Electric, inherited a $31 billion ticking time bomb when he replaced longtime CEO Jeff Immelt
https://money.cnn.com/2018/01/18/investing/ge-pension-immelt-breakup/index.html?iid=EL

The Great Deformation: The Corruption of Capitalism in America
https://www.amazon.com/Great-Deformation-Corruption-Capitalism-America-ebook/dp/B00B3M3UK6/
pg39/loc1043-47:

CRONY CAPITALIST SLEAZE: HOW THE NONBANK FINANCE COMPANIES RAIDED THE TREASURY The final $600 billion segment of the commercial paper market provided funding to the so-called nonbank finance companies, and it is here that crony capitalism reached a zenith of corruption. During the bubble years, three big financially overweight delinquents played in this particular Wall Street sandbox: GE Capital, General Motors Acceptance Corporation (GMAC), and CIT. And all three booked massive accounting profits based on a faulty business model.

... snip ...

Age of Greed: The Triumph of Finance and the Decline of America, 1970 to the Present
https://www.amazon.com/Age-Greed-Triumph-Finance-Decline-ebook/dp/B004DEPF6I/
pg187/loc3667-70:

When Welch took over GE in 1980, it was the ninth most profitable company in the nation. Now it was first, second, or third. Shareholder value reached $500 billion, more than any other company in America. The stock price was Welch's personal measure of achievement, though he later denied it. The boom of the late 1990s on balance sent the wrong message to American managers: cut costs rather than innovate. Despite its appeal, In Search of Excellence had little true staying power.

pg191/loc3754-60:

In 1977, GE Capital, as it was later called, generated $67 million in revenue with only seven thousand employees, while appliances that year generated $100 million and required 47,000 workers. He hired better managers and supplied GE Credit with a lot of capital, and he had built-in scale--meaning large size--due to GE's assets size and triple-A credit rating. In time, GE Capital became a full-fledged bank, financing all kinds of commercial loans, issuing mortgages and other consumer loans, and becoming a leader in mortgage-backed securities. By the time Welch left in 2000, GE Capital's earnings had grown by some eighty times to well more than $5 billion, while the number of its employees did not even double. It provided half of GE's profits.

pg192/loc3777-79:

In a few brief sentences, Welch had defined a new age for big business. He introduced short-run profit management to GE, understanding that stock market investors trusted little so well as rising profits every calendar quarter. It became the best indication of a company's quality, making it stand out in good times and bad.

pg199/loc3909-13:

GE Capital also enabled GE to manage its quarterly earnings, engaging in the last couple of weeks of every calendar quarter in various trades that could push earnings up on the last day or two before the quarter's end. It was an open secret on Wall Street that this was how Welch consistently kept quarterly earnings rising for years at a time. "Though earnings management is a no-no among good governance types," wrote two CNNMoney financial editors, "the company has never denied doing it, and GE Capital is the perfect mechanism."

... snip ...

.... basically the new corporate mantra was financial engineering ... more financial engineering

pg199/loc3919-25:

Over his tenure, he cut back significantly on research and development--by some 20 percent in the 1990s. In 1993, he told BusinessWeek, "We feel that we can grow within a business, but we are not interested in incubating new businesses." GE Capital itself was built through countless acquisitions. As the CNNMoney writers put it, "Consider first what the company really is. Its strength and curse is that it looks a lot like the economy. Over the decades GE's well-known manufacturing businesses--jet engines, locomotives, appliances, light bulbs--have shrunk as a proportion of the total. Like America, GE has long been mainly in the business of services. The most important and profitable services it offers are financial."

pg200/pg3935-41:

He mostly stopped trying to create great new products, hence the reduction in R&D. He took the heart out of his businesses, he did not put it in, as he had always hoped to do. What made his strategy possible, and fully shaped it, was the rising stock market--and the new ideology that praised free markets even as they failed.

... snip ...

... GE Capital & its securitized mortgages took down the company; then came the failure to fund the workers' pension plan

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
pension posts
https://www.garlic.com/~lynn/submisc.html#pensions
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM PLI

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM PLI
Date: 03 Feb 2022
Blog: Facebook
Some of the MIT 7094/CTSS (following also includes history list of IBM mainframe systems)
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
people went to the 5th flr to do multics (operating system written in PLI)
https://en.wikipedia.org/wiki/Multics
and others went to the IBM science center on the 4th flr and did virtual machines, the internal network, lots of online and performance apps (including what evolves into capacity planning), inventing GML (precursor to SGML & HTML) in 1969, etc.
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center

posts about "Thirty Years Later: Lessons from the Multics Security Evaluation"
https://www.garlic.com/~lynn/2002l.html#42
https://www.garlic.com/~lynn/2002l.html#44
https://www.garlic.com/~lynn/2002l.html#45

IBM paper, moved here after going 404 at domino.watson.ibm.com
http://www.acsac.org/2002/papers/classic-multics.pdf
about USAF Multics report from 1974
http://csrc.nist.gov/publications/history/karg74.pdf

I had been pontificating that frequent bugs in C-language applications involved buffer overflows (common in "C", but difficult in PLI) ... many resulting in (internet) vulnerabilities & exploits (they didn't happen in MULTICS). I also compared it to the original mainframe TCP/IP that had been implemented in VS/Pascal ... with far fewer bugs.
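
To make the overflow point concrete, here is a minimal sketch in C (my own toy example, not taken from any of the exploit reports mentioned; PLI's declared string lengths make the equivalent mistake much harder to write):

/* illustrative only -- the classic unchecked-copy pattern vs a bounded copy */
#include <stdio.h>
#include <string.h>

static void unsafe_copy(const char *input)
{
    char buf[16];
    /* no length check: input longer than 15 chars overruns buf,
       clobbering adjacent stack memory (saved registers, return address) */
    strcpy(buf, input);
    printf("copied: %s\n", buf);
}

static void safer_copy(const char *input)
{
    char buf[16];
    /* bounded copy plus explicit termination -- the sort of length
       discipline a PLI declared-length string gives "for free" */
    strncpy(buf, input, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';
    printf("copied: %s\n", buf);
}

int main(void)
{
    const char *attack = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA";  /* 32 chars */
    safer_copy(attack);      /* truncates safely to 15 chars + NUL */
    /* unsafe_copy(attack);     uncomment to see the overflow -- on most
       modern systems the stack protector aborts the program */
    return 0;
}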

I had tried doing some semantic/word analysis of the gov. internet exploit/bug reports contracted to Mitre (1999, approx. 1/3rd buffer overflows, 1/3rd automatic scripting, 1/3rd social engineering) ... I had suggested to Mitre that they try and make the CVE reports a little more structured ... but their response (at the time) was that they were lucky to get any written description.
https://www.garlic.com/~lynn/2004e.html#43
then NIST does similar analysis that shows up in Linux magazine, feb2005
https://www.garlic.com/~lynn/2005b.html#20

... and in a previous life my wife wrote a test environment in pli running on ibm 370 (and found bugs in the IBM PLI compiler) for a pli-like language (8-phase) cross-compiler/assembler for programs that would run on a small military computer ... my wife was a gov. employee fresh out of univ. of Michigan engineering graduate school; within 2yrs she had quit the gov. and went to work for IBM, reporting to Les Comeau (who had transferred from Cambridge to Gburg and at the time owned one of the "Future System" sections) ... she loved all the "blue sky" technical discussions at FS meetings but wondered why there was little or nothing about actual implementation. FS ref:
http://www.jfsowa.com/computer/memo125.htm

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
buffer overflow posts
https://www.garlic.com/~lynn/subintegrity.html#overflow
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

Online Computer Conferencing

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Online Computer Conferencing
Date: 03 Feb 2022
Blog: Facebook
Jim Gray wrote "MIP Envy" as he was leaving IBM for Tandem. I had been blamed for online computer conferencing in the late 70s & early 80s on the internal network. It really took off spring of 1981, after I distributed a trip report from one of the visits to see Jim at Tandem; only approx. 300 participated but claims were that up to 25,000 were reading. From IBMJargon:

MIP envy n. The term, coined by Jim Gray in 1980, that began the Tandem Memos (q.v.). MIP envy is the coveting of other's facilities - not just the CPU power available to them, but also the languages, editors, debuggers, mail systems and networks. MIP envy is a term every programmer will understand, being another expression of the proverb The grass is always greener on the other side of the fence.

... snip ...

copy here
https://www.garlic.com/~lynn/2007d.html#email800920
slightly later version
http://jimgray.azurewebsites.net/papers/mipenvy.pdf

also IBMJargon

Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.

... snip ...

IBMJargon somewhat garbled the "MIP envy" origin, since the "Tandem Memos" didn't really take off until spring of 1981 when I distributed the trip report.

In any case, summer 1981, there were trips to other research institutions to see how IBM Research compared:

Bell Labs Holmdel, Murray Hill
https://www.garlic.com/~lynn/2006n.html#56
Xerox SDD
https://www.garlic.com/~lynn/2006t.html#37
other summary of the visits summer 1981
https://www.garlic.com/~lynn/2001l.html#61

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

other trivia: one of my hobbies after joining IBM was enhanced production operating systems for internal datacenters (including what became the world-wide, online sales&marketing support HONE systems). Then in the morph of cp67->vm370, lots of stuff was simplified and/or dropped (including a bunch of my stuff done at the univ. and after joining IBM, as well as CP67 multiprocessor support). In 1974, I started moving all that stuff to the VM370 base, initially having a CSC/VM for internal distribution on a VM370 Release 2 base in 1975 (doesn't include multiprocessor) ... some old email
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

Somehow, AT&T Longlines cut a deal with IBM and got a copy of that CSC/VM system (predating the SMP, tightly-coupled, multiprocessor support). Over the years they had added functions and migrated to the latest IBM processors. Come the early 80s and the 3081 ... which was never planned to have a single-processor version ... and IBM was afraid that the whole airline market was going to move to the latest Amdahl single-processor machine (ACP/TPF didn't have tightly-coupled support). The IBM national marketing manager for AT&T tracked me down because IBM was afraid something similar was going to happen at AT&T; apparently that longlines copy of the release2-based CSC/VM had propagated around the company ... and they wanted me to help them move to a system with tightly-coupled support.

cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm

Note that the US HONE datacenters had been consolidated in Palo Alto (trivia: when FACEBOOK 1st moved into silicon valley, it was into a new bldg built next door to the former US HONE datacenter) ... and VM370 had been enhanced with single-system-image, loosely-coupled support with load balancing and fall-over support. HONE maxed out the number of 168 systems that could share the same disks ... but they were still cpu-utilization bottlenecked (nearly all the applications were APL-based). I then added tightly-coupled multiprocessor support to a VM370 release3-based CSC/VM, initially for HONE ... so they could add a 2nd processor to each 168 system (but longlines never got a copy).

HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

Not long later, I transfer from CSC to SJR (just down the road from HONE). Old post
https://www.garlic.com/~lynn/2006u.html#26
with email about (now) SJR/VM
https://www.garlic.com/~lynn/2006u.html#email800429
https://www.garlic.com/~lynn/2006u.html#email800501

also contains old email about RED (full-screen) editor and XEDIT. I asked Endicott why they didn't ship (internal) RED rather than XEDIT, since it was more mature, more function, and faster. Endicott replies that it is the RED author's fault that RED is so much better than XEDIT, so it is his responsibility to make XEDIT as good as RED.
https://www.garlic.com/~lynn/2006u.html#email781103
https://www.garlic.com/~lynn/2006u.html#email790606
https://www.garlic.com/~lynn/2006u.html#email800311
https://www.garlic.com/~lynn/2006u.html#email800312

--
virtualization experience starting Jan1968, online at home since Mar1970

Online Computer Conferencing

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Online Computer Conferencing
Date: 03 Feb 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#101 Online Computer Conferencing

online computer conferencing trivia: six copies of approx 300 pages were printed, along with executive summary and summary of the summary and placed in Tandem 3-ring binders and sent to the corporate executive committee (folklore is 5of6 wanted to fire me):

• The perception of many technical people in IBM is that the company is rapidly heading for disaster. Furthermore, people fear that this movement will not be appreciated until it begins more directly to affect revenue, at which point recovery may be impossible.

• Many technical people are extremely frustrated with their management and with the way things are going in IBM. To an increasing extent, people are reacting to this by leaving IBM. Most of the contributors to the present discussion would prefer to stay with IBM and see the problems rectified. However, there is increasing skepticism that correction is possible or likely, given the apparent lack of commitment by management to take action.

• There is a widespread perception that IBM management has failed to understand how to manage technical people and high-technology development in an extremely competitive environment.


... snip ...

... took another decade (1981-1992) ... IBM had gone into the red and was being reorganized into the 13 "baby blues" in preparation for breaking up the company .... reference gone behind paywall but mostly lives free at wayback machine
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM, but we get a call from the bowels of Armonk asking if we could help with breakup of the company. Lots of business units were using supplier contracts in other units via MOUs. After the breakup, all of these contracts would be in different companies ... all of those MOUs would have to be cataloged and turned into their own contracts (however, before we get started, the board brings in a new CEO and reverses the breakup). Also we were hearing from former co-workers that top IBM executives were spending all their time shifting expenses from the following year to the current year. We ask our contact from the bowels of Armonk what was going on. He said that the current year had gone into the red and the executives wouldn't get a bonus. However, if they can shift enough expenses from the following year to the current year, even putting following year just slightly into the black ... the way the executive bonus plan was written, they would get a bonus more than twice as large as any previous bonus (rewarded for taking the company into the red).

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Online Computer Conferencing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Online Computer Conferencing
Date: 03 Feb 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#101 Online Computer Conferencing
https://www.garlic.com/~lynn/2022.html#102 Online Computer Conferencing

all the HONE configurators were APL-based ... plus "SEQUOIA", which was a really large APL application ... most userids were automatically dumped into APL SEQUOIA at logon ... imagine sort of a SUPER PROFS for the computer illiterate ... lots of full-screen menus ... select an application ... which returns to SEQUOIA when done.

Note when US HONE datacenters were consolidated in Palo Alto, it was across the back parking lot from the PA science center. Cambridge had done the port of APL\360 to CP67/CMS for CMS\APL with lots of changes. Then PASC did the port to VM370/CMS for APL\CMS ... and had also done the APL microcode assist for the 370/145 & 5100
https://en.wikipedia.org/wiki/IBM_5100

SEQUOIA was so big, PASC finagled the workspace code so SEQUOIA could be included as part of the shared-code APL application.

other trivia: IBM SE training used to include being part of a large group at the customer site. After the 23jun1969 unbundling announcement, where IBM started charging for (application) software, SE services, maint, etc ... they couldn't figure out how not to charge for trainee SEs onsite at the customer. Thus was born the original HONE ... branch office SEs online to CP67 datacenters running guest operating systems in virtual machines. CSC had also done the port of APL\360 to CMS ... and HONE then started deploying APL-based sales&marketing apps ... which came to dominate all activity (and the guest operating system use just withered away).

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE (&/or APL) posts
https://www.garlic.com/~lynn/subtopic.html#hone
23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Performance

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Performance
Date: 03 Feb 2022
Blog: Facebook
The IBM Cambridge Scientific Center had developed a lot of monitoring, analysis, tuning, and performance technologies during the 60s & 70s, including a precursor to capacity planning.

After the turn of the century (I had left IBM in the early 90s) I was brought into a large mainframe datacenter (40+ max-configured IBM mainframes @$30M, none older than 18 months, constant rolling upgrades). They were all running a 450K-statement cobol app doing real-time financial transactions ... but that number of processors was needed to finish batch settlement in the overnight window. They had a large performance group that had been managing performance for decades ... but had gotten somewhat myopically focused on specific approaches. I identified something that accounted for 14% of processing that could be saved (using some other performance analysis approaches from the IBM science center).

A consultant brought in at the same time used a different approach. An analytical APL-based system model had been developed at the cambridge science center and first made available on HONE as the performance predictor (in the early/mid 70s) ... SEs could enter customer configuration and workload profiles and ask "what-if" questions about specific changes to workload and/or configuration. The consultant had acquired a descendant of the performance predictor during the IBM troubles in the early 90s ... had run it through an APL->C language converter and was using it for a profitable mainframe (not just IBM) datacenter performance business ... and used it to find another 7% processing savings ... 21% aggregate (aka a saving of 8-10 max-configured IBM mainframes, something like $240M-$300M).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

posts mentioning performance predictor & 450k statement cobol app
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2019c.html#80 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2018d.html#2 Has Microsoft commuted suicide
https://www.garlic.com/~lynn/2017d.html#43 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2015h.html#112 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2015c.html#65 A New Performance Model ?
https://www.garlic.com/~lynn/2014b.html#83 CPU time
https://www.garlic.com/~lynn/2009d.html#5 Why do IBMers think disks are 'Direct Access'?
https://www.garlic.com/~lynn/2008l.html#81 Intel: an expensive many-core future is ahead of us
https://www.garlic.com/~lynn/2008c.html#24 Job ad for z/OS systems programmer trainee
https://www.garlic.com/~lynn/2007u.html#21 Distributed Computing

Note while CP67 & VM370 "captured" all CPU utilization ... there was a big problem with MVS and the "capture ratio" ... i.e. accounted-for CPU time compared to elapsed time minus wait state (the CPU actually used) ... which could be as low as 40% ... and seemed to be related to VTAM activity.
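
Roughly, the capture-ratio arithmetic looks like the following back-of-the-envelope sketch (the interval numbers are made up for illustration, not from any real SMF/RMF data):

/* toy illustration of the "capture ratio" arithmetic described above */
#include <stdio.h>

int main(void)
{
    double elapsed    = 3600.0;  /* measurement interval, seconds          */
    double wait_state = 1200.0;  /* seconds the CPU sat in wait state      */
    double accounted  =  960.0;  /* CPU seconds charged to address spaces  */

    double busy = elapsed - wait_state;       /* CPU actually consumed     */
    double capture_ratio = accounted / busy;  /* fraction accounted for    */

    printf("CPU busy:      %.0f sec\n", busy);
    printf("accounted:     %.0f sec\n", accounted);
    printf("capture ratio: %.0f%%\n", capture_ratio * 100.0);
    /* prints 40% -- i.e. 60% of real CPU use (much of it VTAM and other
       system overhead) never shows up against any accounted workload */
    return 0;
}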

past posts mentioning "capture ratio"
https://www.garlic.com/~lynn/2022.html#21 Departmental/distributed 4300s
https://www.garlic.com/~lynn/2021c.html#88 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2017i.html#73 When Working From Home Doesn't Work
https://www.garlic.com/~lynn/2017d.html#51 CPU Timerons/Seconds vs Wall-clock Time
https://www.garlic.com/~lynn/2015f.html#68 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2014b.html#85 CPU time
https://www.garlic.com/~lynn/2014b.html#82 CPU time
https://www.garlic.com/~lynn/2014b.html#80 CPU time
https://www.garlic.com/~lynn/2014b.html#78 CPU time
https://www.garlic.com/~lynn/2013d.html#14 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013d.html#8 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012j.html#71 Help with elementary CPU speed question
https://www.garlic.com/~lynn/2012h.html#70 How many cost a cpu second?
https://www.garlic.com/~lynn/2010m.html#39 CPU time variance
https://www.garlic.com/~lynn/2010e.html#76 LPARs: More or Less?
https://www.garlic.com/~lynn/2010e.html#33 SHAREWARE at Its Finest
https://www.garlic.com/~lynn/2010d.html#66 LPARs: More or Less?
https://www.garlic.com/~lynn/2008d.html#72 Price of CPU seconds
https://www.garlic.com/~lynn/2008.html#42 Inaccurate CPU% reported by RMF and TMON
https://www.garlic.com/~lynn/2007t.html#23 SMF Under VM
https://www.garlic.com/~lynn/2007g.html#82 IBM to the PCM market
https://www.garlic.com/~lynn/2006v.html#19 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2005m.html#16 CPU time and system load

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM PLI

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM PLI
Date: 04 Feb 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#100 IBM PLI

Note Les was at the science center, from Melinda's history
http://www.leeandmelindavarian.com/Melinda#VMHist

Les Comeau had been thinking about a design for an address translator that would give them the information they needed for the sort of research they were planning. He was intrigued by what he had read about the associative memories that had been built by Rex Seeber and Bruce Lindquist in Poughkeepsie, so he went to see Seeber with his design for the "Cambridge Address Translator" (the "CAT Box"), 29 which was based on the use of associative memory and had "lots of bits" for recording various states of the paging system.

Seeber liked the idea, so Rasmussen found the money to pay for the transistors and engineers and microcoders that were needed, and Seeber and Lindquist implemented Comeau's translator on a S/360 Model 40. 30 Comeau has written:

Virtual memory on the 360/40 was achieved by placing a 64-word associative array between the CPU address generation circuits and the memory addressing logic. The array was activated via mode-switch logic in the PSW and was turned off whenever a hardware interrupt occurred.

The 64 words were designed to give us a relocate mechanism for each 4K bytes of our 256K-byte memory. Relocation was achieved by loading a user number into the search argument register of the associative array, turning on relocate mode, and presenting a CPU address. The match with user number and address would result in a word selected in the associative array. The position of the word (0-63) would yield the high-order 6 bits of a memory address. Because of a rather loose cycle time, this was accomplished on the 360/40 with no degradation of the overall memory cycle. 31


... snip ...
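
A toy sketch in C of the lookup Comeau describes above; the struct/field names, the linear search standing in for the associative hardware, and the miss handling are my assumptions, not how the 360/40 microcode actually worked:

/* toy model of the CP-40 "CAT box": a 64-entry associative array mapping
   (user number, virtual 4K page) to a real 4K frame of the 256KB store */
#include <stdio.h>
#include <stdint.h>

#define ENTRIES 64          /* one word per 4K frame of 256KB memory */

struct cat_entry {
    int      valid;
    unsigned user;          /* user number loaded into search argument */
    uint32_t vpage;         /* virtual address bits above the 4K page  */
};

static struct cat_entry cat[ENTRIES];

/* returns the real address, or -1 to signal a "translation miss" */
static long translate(unsigned user, uint32_t vaddr)
{
    uint32_t vpage  = vaddr >> 12;        /* 4K page number */
    uint32_t offset = vaddr & 0xFFF;
    for (int slot = 0; slot < ENTRIES; slot++) {
        if (cat[slot].valid && cat[slot].user == user
                            && cat[slot].vpage == vpage)
            /* the slot number (0-63) supplies the high-order 6 bits */
            return ((long)slot << 12) | offset;
    }
    return -1;
}

int main(void)
{
    /* pretend user 5's virtual page 3 currently lives in frame 17 */
    cat[17] = (struct cat_entry){1, 5, 3};
    printf("real addr: %ld\n", translate(5, 0x3123));   /* 17*4096 + 0x123 */
    printf("miss:      %ld\n", translate(6, 0x3123));   /* different user  */
    return 0;
}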

and Les 82 CP40 talk at SEAS
https://www.garlic.com/~lynn/cp40seas1982.txt

... he had transferred from Cambridge to Gburg ... and "owns" one of the Future System sections when my (future) wife goes to work for him.

cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

The Cult of Trump is actually comprised of MANY other Christian cults

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: The Cult of Trump is actually comprised of MANY other Christian cults.
Date: 04 Feb 2022
Blog: Facebook
The Cult of Trump is actually comprised of MANY other Christian cults.
https://threadreaderapp.com/thread/1488552760828116994.html

Dominionism is the theocratic idea that Christians are called by God to exercise DOMINION over every aspect of society by taking control of political and cultural institutions. Believers see the world as a constant battle between God and Satan.

... snip ...

Dominion Theology
https://en.wikipedia.org/wiki/Dominion_theology

... Retiring GOP operative Mac Stipanovich says Trump 'sensed the rot' in Republican party and took control of it
https://www.orlandosentinel.com/politics/os-ne-mac-stipanovich-republican-20191224-tz7bjps56jazbcwb3ficlnacqa-story.html

As for the party, Trump hasn't transformed the party, in my judgment, as much as he has unmasked it. There was always a minority in the Republican party -- 25, 30 percent -- that, how shall we say this, that hailed extreme views, aberrant views. They've always been there, from the John Birchers in the '50s, who thought Dwight Eisenhower was a communist, to the Trump folks today who think John McCain's a traitor. They had different names -- the religious right, tea partiers -- but they've always been there. They were a fairly consistent, fairly manageable minority who we, the establishment, enabled and exploited.

... snip ...

Mac Stipanovich
https://en.wikipedia.org/wiki/Mac_Stipanovich

racism posts
https://www.garlic.com/~lynn/submisc.html#racism

In U.S., Far More Support Than Oppose Separation of Church and State. But there are pockets of support for increased church-state integration, more Christianity in public life
https://www.pewforum.org/2021/10/28/in-u-s-far-more-support-than-oppose-separation-of-church-and-state/

... "fake news" dates back to at least founding of the country, both Jefferson and Burr biographies, Hamilton and Federalists are portrayed as masters of "fake news". Also portrayed that Hamilton believed himself to be an honorable man, but also that in political and other conflicts, he apparently believed that the ends justified the means. Jefferson constantly battling for separation of church & state and individual freedom, Thomas Jefferson: The Art of Power,
https://www.amazon.com/Thomas-Jefferson-Power-Jon-Meacham-ebook/dp/B0089EHKE8/
loc6457-59:

For Federalists, Jefferson was a dangerous infidel. The Gazette of the United States told voters to choose GOD AND A RELIGIOUS PRESIDENT or impiously declare for "JEFFERSON-AND NO GOD."

.... Jefferson was targeted as the prime mover behind the separation of church and state. Also, Hamilton/Federalists wanted a supreme monarch (above the law), loc5584-88:

The battles seemed endless, victory elusive. James Monroe fed Jefferson's worries, saying he was concerned that America was being "torn to pieces as we are, by a malignant monarchy faction." 34 A rumor reached Jefferson that Alexander Hamilton and the Federalists Rufus King and William Smith "had secured an asylum to themselves in England" should the Jefferson faction prevail in the government.

... snip ...

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

The Cult of Trump is actually comprised of MANY other Christian cults

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: The Cult of Trump is actually comprised of MANY other Christian cults.
Date: 04 Feb 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#106 The Cult of Trump is actually comprised of MANY other Christian cults.

Onward, Christian fascists. Trump's legacy will be the empowerment of Christian totalitarians
https://www.salon.com/2020/01/03/onward-christian-fascists_partner/

Trump has filled his own ideological void with Christian fascism. He has elevated members of the Christian right to prominent positions, including Mike Pence to the vice presidency, Mike Pompeo to secretary of state, Betsy DeVos to secretary of education, Ben Carson to secretary of housing and urban development, William Barr to attorney general, Neil Gorsuch and Brett Kavanaugh to the Supreme Court and the televangelist Paula White to his Faith and Opportunities Initiative. More importantly, Trump has handed the Christian right veto and appointment power over key positions in government, especially in the federal courts. He has installed 133 district court judges out of 677 total, 50 appeals court judges out of 179 total, and two U.S. Supreme Court justices out of nine. Almost all of these judges were, in effect, selected by the Federalist Society and the Christian right.

... snip ...

William Barr Is Neck-Deep in Extremist Catholic Institutions. His troubles don't only involve his obeisance to Donald Trump. He's a paranoid right-wing Catholic ideologue who won't respect the separation of church and state.
https://www.thenation.com/article/william-barr-notre-dame-secularism/

In a histrionic speech at Notre Dame Law School on Friday, he blamed "secularists" and "so-called progressives" for destroying society and precipitating the crisis of family dissolution, crime, and drugs, while talking of a war between religious and nonreligious Americans. Scary shit.

... snip ...

William Barr is unfit to be attorney general
https://www.washingtonpost.com/opinions/eric-holder-william-barr-is-unfit-to-be-attorney-general/2019/12/11/99882092-1c55-11ea-87f7-f2e91143c60d_story.html

Last month, at a Federalist Society event, the attorney general delivered an ode to essentially unbridled executive power, dismissing the authority of the legislative and judicial branches -- and the checks and balances at the heart of America's constitutional order. As others have pointed out, Barr's argument rests on a flawed view of U.S. history. To me, his attempts to vilify the president's critics sounded more like the tactics of an unscrupulous criminal defense lawyer than a U.S. attorney general

... snip ...

William Barr's Wild Misreading of the First Amendment
https://www.newyorker.com/news/daily-comment/william-barrs-wild-misreading-of-the-first-amendment

William P. Barr just gave the worst speech by an Attorney General of the United States in modern history. Speaking at the University of Notre Dame last Friday, Barr took "religious liberty" as his subject, and he portrayed his fellow-believers as a beleaguered and oppressed minority. He was addressing, he said, "the force, fervor, and comprehensiveness of the assault on religion we are experiencing today. This is not decay; this is organized destruction."

... snip ...

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
racism posts
https://www.garlic.com/~lynn/submisc.html#racism

--
virtualization experience starting Jan1968, online at home since Mar1970

Not counting dividends IBM delivered an annualized yearly loss of 2.27%

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Not counting dividends IBM delivered an annualized yearly loss of 2.27%.
Date: 05 Feb 2022
Blog: Facebook
Not counting dividends IBM delivered an annualized yearly loss of 2.27%.

IBM: No Longer The Investing Juggernaut Of Old
https://seekingalpha.com/article/4479605-ibm-no-longer-investing-juggernaut-of-old

stock buybacks used to be illegal (because it was too easy for executives to manipulate the market ... aka banned in the wake of the '29crash)
https://corpgov.law.harvard.edu/2020/10/23/the-dangers-of-buybacks-mitigating-common-pitfalls/

Buybacks are a fairly new phenomenon and have been gaining in popularity relative to dividends recently. All but banned in the US during the 1930s, buybacks were seen as a form of market manipulation. Buybacks were largely illegal until 1982, when the SEC adopted Rule 10B-18 (the safe-harbor provision) under the Reagan administration to combat corporate raiders. This change reintroduced buybacks in the US, leading to wider adoption around the world over the next 20 years. Figure 1 (below) shows that the use of buybacks in non-US companies grew from 14 percent in 1999 to 43 percent in 2018.

... snip ...

Stockman and IBM financial engineering company:
https://www.amazon.com/Great-Deformation-Corruption-Capitalism-America-ebook/dp/B00B3M3UK6/
pg464/loc9995-10000:

IBM was not the born-again growth machine trumpeted by the mob of Wall Street momo traders. It was actually a stock buyback contraption on steroids. During the five years ending in fiscal 2011, the company spent a staggering $67 billion repurchasing its own shares, a figure that was equal to 100 percent of its net income.

pg465/loc10014-17:

Total shareholder distributions, including dividends, amounted to $82 billion, or 122 percent, of net income over this five-year period. Likewise, during the last five years IBM spent less on capital investment than its depreciation and amortization charges, and also shrank its constant dollar spending for research and development by nearly 2 percent annually.

... snip ...

(2013) New IBM Buyback Plan Is For Over 10 Percent Of Its Stock
http://247wallst.com/technology-3/2013/10/29/new-ibm-buyback-plan-is-for-over-10-percent-of-its-stock/

(2014) IBM Asian Revenues Crash, Adjusted Earnings Beat On Tax Rate Fudge; Debt Rises 20% To Fund Stock Buybacks
https://web.archive.org/web/20140623003038/http://www.zerohedge.com/news/2014-01-21/ibm-asian-revenues-crash-adjusted-earnings-beat-tax-rate-fudge-debt-rises-20-fund-st

The company has represented that its dividends and share repurchases have come to a total of over $159 billion since 2000.

(2016) After Forking Out $110 Billion on Stock Buybacks, IBM Shifts Its Spending Focus
https://www.fool.com/investing/general/2016/04/25/after-forking-out-110-billion-on-stock-buybacks-ib.aspx

(2018) ... still doing buybacks ... but will (now? finally? a little?) shift focus, needing the money for the redhat purchase.
https://www.bloomberg.com/news/articles/2018-10-30/ibm-to-buy-back-up-to-4-billion-of-its-own-shares

(2019) IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud Hits Air Pocket
https://web.archive.org/web/20190417002701/https://www.zerohedge.com/news/2019-04-16/ibm-tumbles-after-reporting-worst-revenue-17-years-cloud-hits-air-pocket

stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

Not counting dividends IBM delivered an annualized yearly loss of 2.27%

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Not counting dividends IBM delivered an annualized yearly loss of 2.27%.
Date: 05 Feb 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#108 Not counting dividends IBM delivered an annualized yearly loss of 2.27%.

... a little biased about Boeing. At univ, within a year of taking the intro to computers/fortran class, I was hired fulltime by the univ. to be responsible for os/360; then before I graduate I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit to better monetize the investment). I thought Renton was possibly the largest datacenter, a couple hundred million in IBM 360s, 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room. When I graduate, I join the IBM science center instead of staying at Boeing.

Boeing 100th anniv article "The Boeing Century"
https://issuu.com/pnwmarketplace/docs/i20160708144953115
included a long article, "Scrappy start forged a company built to last", with analysis of the Boeing merger with M/D ("A different Boeing") and the disastrous effects it had on the company ... even though many of those people are gone, it still leaves the future of the company in doubt. One effect was the M/D (military-industrial complex) culture of outsourcing to lots of entities in different jurisdictions as part of catering to political interests ... as opposed to focusing on producing quality products ... which shows up in the effects it had on the 787.

The Coming Boeing Bailout?
https://mattstoller.substack.com/p/the-coming-boeing-bailout

Unlike Boeing, McDonnell Douglas was run by financiers rather than engineers. And though Boeing was the buyer, McDonnell Douglas executives somehow took power in what analysts started calling a "reverse takeover." The joke in Seattle was, "McDonnell Douglas bought Boeing with Boeing's money."

... snip ...

Crash Course
https://newrepublic.com/article/154944/boeing-737-max-investigation-indonesia-lion-air-ethiopian-airlines-managerial-revolution

Sorscher had spent the early aughts campaigning to preserve the company's estimable engineering legacy. He had mountains of evidence to support his position, mostly acquired via Boeing's 1997 acquisition of McDonnell Douglas, a dysfunctional firm with a dilapidated aircraft plant in Long Beach and a CEO who liked to use what he called the "Hollywood model" for dealing with engineers: Hire them for a few months when project deadlines are nigh, fire them when you need to make numbers. In 2000, Boeing's engineers staged a 40-day strike over the McDonnell deal's fallout; while they won major material concessions from management, they lost the culture war. They also inherited a notoriously dysfunctional product line from the corner-cutting market gurus at McDonnell.

... snip ...

Boeing's travails show what's wrong with modern capitalism. Deregulation means a company once run by engineers is now in the thrall of financiers and its stock remains high even as its planes fall from the sky
https://www.theguardian.com/commentisfree/2019/sep/11/boeing-capitalism-deregulation

military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

Not counting dividends IBM delivered an annualized yearly loss of 2.27%

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Not counting dividends IBM delivered an annualized yearly loss of 2.27%.
Date: 05 Feb 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#108 Not counting dividends IBM delivered an annualized yearly loss of 2.27%.
https://www.garlic.com/~lynn/2022.html#109 Not counting dividends IBM delivered an annualized yearly loss of 2.27%.

AMEX was in competition with KKR for (private equity) LBO of RJR and KKR wins. KKR runs into trouble and hires away AMEX president to help with RJR.
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
Then the IBM Board hires away the AMEX ex-president as IBM CEO, who reverses the breakup and uses some of the PE techniques from RJR at IBM (gone 404 but lives on at the wayback machine)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

after the turn of the century, that IBM CEO departs to head up another large private-equity company ... buying up gov. contractors and beltway bandits (including the company that would later employ Snowden) and hiring prominent politicians to lobby congress to outsource lots of the gov. to their companies.
http://www.motherjones.com/politics/2007/10/barbarians-capitol-private-equity-public-enemy/

"Lou Gerstner, former ceo of ibm, now heads the Carlyle Group, a Washington-based global private equity firm whose 2006 revenues of $87 billion were just a few billion below ibm's. Carlyle has boasted George H.W. Bush, George W. Bush, and former Secretary of State James Baker III on its employee roster."

... snip ...

private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
posts referencing pensions
https://www.garlic.com/~lynn/submisc.html#pensions
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

recent posts mentioning ibm being reorged into "13 baby blues" in preparation for breaking up the company

IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

On the origin of the /text section/ for code

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: On the origin of the /text section/ for code
Newsgroups: alt.folklore.computers
Date: Sun, 06 Feb 2022 08:21:10 -1000
Dan Espen <dan1espen@gmail.com> writes:

Back in 1964, IBM had TXT records in it's object decks:

https://en.wikipedia.org/wiki/OS/360_Object_File_Format#Record_Types


2001 a.f.c. posts about 360 "TXT" decks and types of cards, ICS, TXT, REP, RLD, END, LDT ... from 60s CP67/CMS manuals
https://www.garlic.com/~lynn/2001.html#14

TXT card format specifics; started with a similar question (why called "TEXT")
https://www.garlic.com/~lynn/2001.html#60

the wikipedia entry additionally mentions SYM & ESD (but not ICS & LDT); "SYM" cards were for OS/360 "TESTRAN" ... a kind of symbolic debugger.
https://en.wikipedia.org/wiki/OS/360_Object_File_Format#Record_Types
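
For flavor, a small C sketch of picking the fields out of an 80-column TXT card image, using the layout described in that wikipedia entry (treat the exact column offsets and the sample data here as my reading/assumptions, not gospel):

#include <stdio.h>
#include <string.h>

/* "TXT" in EBCDIC -- object decks are EBCDIC card images */
static const unsigned char TXT_ID[3] = { 0xE3, 0xE7, 0xE3 };

struct txt_record {
    unsigned long address;    /* 24-bit relative load address (cols 6-8)  */
    unsigned      count;      /* number of text bytes, 1-56  (cols 11-12) */
    unsigned      esdid;      /* owning ESD entry id         (cols 15-16) */
    unsigned char data[56];   /* the "text" = object code    (cols 17-72) */
};

static int parse_txt(const unsigned char card[80], struct txt_record *out)
{
    if (card[0] != 0x02 || memcmp(&card[1], TXT_ID, 3) != 0)
        return -1;                                  /* not a TXT record */
    out->address = ((unsigned long)card[5] << 16) | (card[6] << 8) | card[7];
    out->count   = ((unsigned)card[10] << 8) | card[11];
    out->esdid   = ((unsigned)card[14] << 8) | card[15];
    if (out->count == 0 || out->count > 56)
        return -1;
    memcpy(out->data, &card[16], out->count);
    return 0;
}

int main(void)
{
    unsigned char card[80] = {0};            /* fabricated card image   */
    struct txt_record rec;
    card[0] = 0x02;
    memcpy(&card[1], TXT_ID, 3);
    card[7]  = 0x08;                         /* load address X'000008'  */
    card[11] = 4;                            /* four bytes of text      */
    card[15] = 1;                            /* ESDID 1                 */
    memcpy(&card[16], "\x47\xF0\xF0\x0C", 4);/* a 370 branch instruction */
    if (parse_txt(card, &rec) == 0)
        printf("addr=%06lX count=%u esdid=%u\n",
               rec.address, rec.count, rec.esdid);
    return 0;
}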

note some of the MIT CTSS/7094 people went to the 5th flr, Project MAC to do MULTICS, others went to the IBM Science Center on the 4th flr and did CP40/CMS (morphs into CP67/CMS when 360/67 becomes available, precursor to VM370), internal network, lots of performance tools (& precursor to capacity planning), invented GML in 1969 (precursor to SGML & HTML, CTSS RUNOFF had been redone for CMS as SCRIPT, and then GML processing was added to SCRIPT), etc.

posts mentioning gml/sgml/etc
https://www.garlic.com/~lynn/submain.html#sgml
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

--
virtualization experience starting Jan1968, online at home since Mar1970

GM C4 and IBM HA/CMP

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: GM C4 and IBM HA/CMP
Date: 06 Feb 2022
Blog: Facebook
1990: GM had the "C4" task force to look at completely remaking themselves, and because they were planning to heavily leverage IT, they asked for reps from the major IT companies ... I was the rep from IBM workstation.

Part of the C4 history details: in the 70s, congress imposed foreign import quotas to give enormous profits to the US makers, profits that were supposed to be used to completely remake themselves; however, they just pocketed the money and continued business as usual. In 1980 there was a call for a 100% unearned-profit tax on the US industry. Part of it was that US makers were taking 7-8yrs to come out with a new model (two efforts in parallel, offset a couple yrs so it looked like something more frequent, with cosmetic changes in between). Foreign makers had cut that in half in the 80s, and in 1990 were in the process of cutting it in half again (to 18-24months) ... being able to adapt faster to new technology &/or customer preferences (offline, I would chide the IBM mainframe rep about what they planned on contributing, since mainframes had many of the same problems). One example was the Corvette: it had tight design tolerances, and over a 7-8yr cycle original parts were no longer available, causing redesign delays (to make things fit) ... especially since the parts businesses had been spun off.

This is somewhat akin to the (quick&dirty) 3033 & 3081 being kicked off in parallel after Future System imploded and the mad rush to get stuff back into the 370 product pipeline (once the 3033 was out the door in 1977, the 3033 group started on the 3090, which shipped in 1985, aka 8yrs).

c4 taskforce posts
https://www.garlic.com/~lynn/submisc.html#auto.c4.taskforce
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

At the time I was doing HA/CMP (2yrs from start to first ship)
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

IBM PowerHA SystemMirror (formerly IBM PowerHA and HACMP) is IBM's solution for high-availability clusters on the AIX Unix and Linux for IBM System p platforms and stands for High Availability Cluster Multiprocessing. IBM's HACMP product was first shipped in 1991 and is now in its 20th release - PowerHA SystemMirror for AIX 7.1.

... snip ...

IBM Located in the scenic Hudson River Valley of New York State, IBM Poughkeepsie began in 1941 (2016 article)
https://www.linkedin.com/pulse/ibm-located-scenic-hudson-river-valley-new-york-state-elmontaser/

In February of 1993, IBM announced the scalable POWERparallel System. Products began shipping in the late summer with general availability of the "SP1" announced in September of 1993. In April of 1994, IBM announced the "SP2" based on POWER2 microprocessors. These machines have features that make them particularly well-suited for the commercial marketplace.

... snip ...

I had been involved with national lab cluster supercomputing off&on dating back to Jan1979, when I was asked to run a national lab (cdc6600) benchmark on an engineering 4341; they were looking at getting 70 for a compute farm.

In the late 80s, we got the HA/6000 product, originally for NYTimes to move their newspaper system (ATEX) off VAXcluster to RS/6000. I renamed it HA/CMP when I started doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors. Old post about the Jan1992 HA/CMP commercial cluster scale-up meeting in Ellison's (Oracle CEO) conference room; 16-way by mid92, 128-way by ye92
https://www.garlic.com/~lynn/95.html#13

Besides HA/CMP at national labs, I was also working on migrating their supercomputer filesystems to HA/CMP ... and attending NII meetings at LLNL.
https://en.wikipedia.org/wiki/National_Information_Infrastructure
email about having conflict with LLNL NII meeting, and one of the other vendors coming by to fill me in on what happened
https://www.garlic.com/~lynn/2006x.html#email920129

within possibly hrs, cluster scale-up is transferred, announced as an IBM supercomputer (for technical/scientific *ONLY*), and we are told we can't work on anything with more than four processors (we leave IBM a few months later). Possibly contributing was the mainframe DB2 group complaining that if we were allowed to go ahead, it would be at least 5yrs ahead of them. I had also been asked to contribute a section to the corporate continuous availability strategy document, but it got pulled when both Rochester (AS/400) and POK (mainframe) complained (that they couldn't meet the requirements).

Computerworld news 17feb1992 (from wayback machine) ... IBM establishes laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7
17Feb1992 press, announced for scientific and technical *ONLY*
https://www.garlic.com/~lynn/2001n.html#6000clusters1
11May1992 press, cluster supercomputing caught IBM by "surprise"
https://www.garlic.com/~lynn/2001n.html#6000clusters2
15Jun1992 press, cluster computers, mentions IBM plans to "demonstrate" a 32-microprocessor mainframe later in 1992; is that tightly-coupled or loosely-coupled?
https://www.garlic.com/~lynn/2001n.html#6000clusters3

z900 16-processors not until 2000; z990 32-processors 2003. I've periodically mentioned getting involved in a 16-way (tightly-coupled) 370 mainframe effort in the 70s, and we con'ed the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody told the head of POK that it could be decades before the POK favorite-son operating system (MVS) had (effective) 16-way support. Then some of us were invited to never visit POK again, and the 3033 processor engineers were directed to stop being distracted.

Note, at the end of OCT1991, the senior executive backing the Kingston supercomputer center (the major thing seems to have been providing funding for the Chen Supercomputing company) retires; then there are audits of his projects, and the Kingston effort is significantly changed, along with an internal conference announced for Jan1992 (trolling for supercomputer technology). Trivia: in the late 90s, I do some consulting for Chen, at the time CTO at Sequent (this was before IBM bought Sequent and shut them down).

ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
SMP, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

posts mentioning ha/cmp distributed lock manager (& VAXcluster API):
https://www.garlic.com/~lynn/2019e.html#27 PC Market
https://www.garlic.com/~lynn/2019e.html#11 To Lynn Wheeler, if still observing
https://www.garlic.com/~lynn/2018d.html#69 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2018c.html#33 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2017b.html#82 The ICL 2900
https://www.garlic.com/~lynn/2014k.html#40 How Larry Ellison Became The Fifth Richest Man In The World By Using IBM's Idea
https://www.garlic.com/~lynn/2014.html#73 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013o.html#44 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013n.html#19 z/OS is antique WAS: Aging Sysprogs = Aging Farmers
https://www.garlic.com/~lynn/2013m.html#87 'Free Unix!': The world-changing proclamation made 30 yearsagotoday
https://www.garlic.com/~lynn/2013m.html#86 'Free Unix!': The world-changing proclamation made 30 yearsagotoday
https://www.garlic.com/~lynn/2012d.html#28 NASA unplugs their last mainframe
https://www.garlic.com/~lynn/2011f.html#8 New job for mainframes: Cloud platform
https://www.garlic.com/~lynn/2011.html#23 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2010n.html#82 Hashing for DISTINCT or GROUP BY in SQL
https://www.garlic.com/~lynn/2010l.html#14 Age
https://www.garlic.com/~lynn/2010k.html#54 Unix systems and Serialization mechanism
https://www.garlic.com/~lynn/2010b.html#32 Happy DEC-10 Day
https://www.garlic.com/~lynn/2009o.html#57 U.S. begins inquiry of IBM in mainframe market
https://www.garlic.com/~lynn/2009m.html#84 A Faster Way to the Cloud
https://www.garlic.com/~lynn/2009m.html#39 ACP, One of the Oldest Open Source Apps
https://www.garlic.com/~lynn/2009k.html#67 Disksize history question
https://www.garlic.com/~lynn/2009k.html#36 Ingres claims massive database performance boost
https://www.garlic.com/~lynn/2009h.html#26 Natural keys vs Aritficial Keys
https://www.garlic.com/~lynn/2009b.html#40 "Larrabee" GPU design question
https://www.garlic.com/~lynn/2009.html#3 Is SUN going to become x86'ed ??
https://www.garlic.com/~lynn/2008r.html#71 Curiousity: largest parallel sysplex around?
https://www.garlic.com/~lynn/2008k.html#63 Intel: an expensive many-core future is ahead of us
https://www.garlic.com/~lynn/2008i.html#18 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008h.html#91 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008g.html#56 performance of hardware dynamic scheduling
https://www.garlic.com/~lynn/2008d.html#70 Time to rewrite DBMS, says Ingres founder
https://www.garlic.com/~lynn/2008b.html#69 How does ATTACH pass address of ECB to child?
https://www.garlic.com/~lynn/2007v.html#43 distributed lock manager
https://www.garlic.com/~lynn/2007v.html#42 Newbie question about db normalization theory: redundant keys OK?
https://www.garlic.com/~lynn/2007s.html#46 "Server" processors for numbercrunching?
https://www.garlic.com/~lynn/2007q.html#33 Google And IBM Take Aim At Shortage Of Distributed Computing Skills
https://www.garlic.com/~lynn/2007n.html#49 VLIW pre-history
https://www.garlic.com/~lynn/2007m.html#55 Capacity and Relational Database
https://www.garlic.com/~lynn/2007l.html#24 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007l.html#19 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007i.html#61 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007c.html#42 Keep VM 24X7 365 days
https://www.garlic.com/~lynn/2006x.html#3 Why so little parallelism?
https://www.garlic.com/~lynn/2006o.html#62 Greatest Software, System R
https://www.garlic.com/~lynn/2006o.html#32 When Does Folklore Begin???
https://www.garlic.com/~lynn/2006j.html#20 virtual memory
https://www.garlic.com/~lynn/2006c.html#41 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006c.html#8 IBM 610 workstation computer
https://www.garlic.com/~lynn/2005i.html#42 Development as Configuration
https://www.garlic.com/~lynn/2005h.html#26 Crash detection by OS
https://www.garlic.com/~lynn/2005.html#40 clusters vs shared-memory (was: Re: CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE))
https://www.garlic.com/~lynn/2004q.html#70 CAS and LL/SC
https://www.garlic.com/~lynn/2004q.html#10 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004m.html#5 Tera
https://www.garlic.com/~lynn/2004m.html#0 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004i.html#2 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2002k.html#8 Avoiding JCL Space Abends
https://www.garlic.com/~lynn/2002f.html#1 Blade architectures
https://www.garlic.com/~lynn/2002e.html#67 Blade architectures
https://www.garlic.com/~lynn/2001k.html#5 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#47 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001e.html#2 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001c.html#66 KI-10 vs. IBM at Rutgers
https://www.garlic.com/~lynn/aadsmore.htm#time Certifiedtime.com
https://www.garlic.com/~lynn/aadsm28.htm#35 H2.1 Protocols Divide Naturally Into Two Parts
https://www.garlic.com/~lynn/aadsm27.htm#54 Security can only be message-based?
https://www.garlic.com/~lynn/aadsm26.htm#17 Changing the Mantra -- RFC 4732 on rethinking DOS
https://www.garlic.com/~lynn/aadsm21.htm#29 X.509 / PKI, PGP, and IBE Secure Email Technologies
https://www.garlic.com/~lynn/aadsm16.htm#22 Ousourced Trust (was Re: Difference between TCPA-Hardware and a smart card and something else before

--
virtualization experience starting Jan1968, online at home since Mar1970

On the origin of the /text section/ for code

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: On the origin of the /text section/ for code
Newsgroups: alt.folklore.computers
Date: Sun, 06 Feb 2022 11:33:22 -1000
Johann 'Myrkraverk' Oskarsson <johann@myrkraverk.invalid> writes:

The Open Watcom project still has documentation typeset in Watcom, or Waterloo, GML format. I am unsure if that has any relationship with this 1969 GML.

The source code for the document processor has been lost, and is being recreated.


re:
https://www.garlic.com/~lynn/2022.html#111 On the origin of the /text section/ for code

CTSS runoff
https://en.wikipedia.org/wiki/TYPSET_and_RUNOFF
SCRIPT history
https://en.wikipedia.org/wiki/SCRIPT_(markup)
In 1968 "IBM contracted Stuart Madnick of MIT to write a simple document preparation ..."[10][1] to run on CP/67.[11] He modeled it on MIT's CTSS RUNOFF.[12][13] In 1974, William Dwyer at Yale University ported the CP-67 version of Script to the Time Sharing Option (TSO) of OS/360 under the name NSCRIPT.[14] The University of Waterloo rewrote and extended NSCRIPT as Waterloo SCRIPT,[15] also in 1974, making it available for free to CMS and TSO users for several releases before eventually charging for new releases.

GML&SGML history
https://en.wikipedia.org/wiki/IBM_Generalized_Markup_Language
https://en.wikipedia.org/wiki/Standard_Generalized_Markup_Language
SGML descended from IBM's Generalized Markup Language (GML), which Charles Goldfarb, Edward Mosher, and Raymond Lorie developed in the 1960s. Goldfarb, editor of the international standard, coined the "GML" term using their surname initials.[5] Goldfarb also wrote the definitive work on SGML syntax in "The SGML Handbook".[6] The syntax of SGML is closer to the COCOA format.[clarification needed] As a document markup language, SGML was originally designed to enable the sharing of machine-readable large-project documents in government, law, and industry. Many such documents must remain readable for several decades, a long time in the information technology field. SGML also was extensively applied by the military, and the aerospace, technical reference, and industrial publishing industries. The advent of the XML profile has made SGML suitable for widespread application for small-scale, general-purpose use.

Above doesn't mention that in the late 70s, an IBM SE (in LA) implemented SCRIPT ("newscript") for the TRS-80.

GML->HTML
http://infomesh.net/html/history/early/
references Waterloo SCRIPT GML User's Guide ... URL gone 404, but Waterloo script (6jun1990)
https://csg.uwaterloo.ca/sdtp/watscr.html
Waterloo SCRIPT is a rewritten and extended version of a processor called NSCRIPT that had been converted to OS and TSO from CP-67/CMS SCRIPT. The original NSCRIPT package is available from the SHARE Program Library. Waterloo obtained NSCRIPT in late 1974 as a viable alternative to extending ATS to meet local requirements. The local acceptance of Waterloo SCRIPT has continued to provide the motivation for additional on-going development.

posts mentioning gml/sgml/etc
https://www.garlic.com/~lynn/submain.html#sgml
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

--
virtualization experience starting Jan1968, online at home since Mar1970

On the origin of the /text section/ for code

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: On the origin of the /text section/ for code
Newsgroups: alt.folklore.computers
Date: Sun, 06 Feb 2022 12:41:29 -1000
Douglas Miller <durgadas311@gmail.com> writes:

A lot of things would have made more sense than TXT. OBJ, EXE, etc. I'm not seeing a reason that a group of IBM engineers would have chosen TXT without a compelling reason.

re:
https://www.garlic.com/~lynn/2022.html#111 On the origin of the /text section/ for code
https://www.garlic.com/~lynn/2022.html#113 On the origin of the /text section/ for code

the os/360 ("02"/12-2-9) "TXT" cards were just data for the loader, most of the others were loader control. ESD cards were names, 1) entry, indicating start &/or other points into the program ... or 2) external, indicating symbolic location in some other program. I guess could have used 3letter "DAT" ... if text hadn't already been in use for loader "data".

There were OS/360 2-, 3-, and 7-card loaders that could handle a single TXT deck (ESD/END). Some assembler source programs had "PUNCH" statements at the front that would prefix the assembler output with a 2- or 3-card loader (making it a self-loading program).

The OS/360 BPS loader (maybe 60-70 cards, itself prefixed with a 3-card loader) supported multiple program loading. It had a table with 255 slots for ESD entry points. For an external ESD, it would search the loader table for an entry with the same name to get the associated address.

At one time, I had a card tray of assembler output and started to exceed the 255-entry limit of the BPS loader ... & had various hacks to keep adding stuff by overloading a single symbolic entry (to stay within the BPS 255-slot table).
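
A rough sketch of the mechanics being described (not the actual BPS loader code, which was 360 assembler): classify 80-byte object-deck card images by their record type and keep a bounded table of ESD entry points, which is where the 255-slot limit bites. The record-type check (X'02' in column 1, the card name in columns 2-4) matches the OS/360 object deck format; the table structure and function names are invented for illustration.

MAX_ESD_SLOTS = 255                     # fixed-size loader table, per the text

def record_type(card):
    """Return 'ESD', 'TXT', 'RLD', 'END', etc. for an 80-byte card image."""
    if len(card) != 80 or card[0] != 0x02:      # X'02' = the 12-2-9 punch
        return None                             # not an object-deck card
    return card[1:4].decode("cp037").strip()    # card name is EBCDIC text

class EsdTable:
    """Toy stand-in for the loader's 255-slot entry-point table."""
    def __init__(self):
        self.slots = {}                         # name -> assigned address
    def add_entry(self, name, address):
        if name in self.slots:
            return                              # already defined
        if len(self.slots) >= MAX_ESD_SLOTS:
            raise RuntimeError("ESD table full (255 entries)")
        self.slots[name] = address
    def resolve_external(self, name):
        # external reference: search the table for the entry with the same name
        return self.slots.get(name)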

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

--
virtualization experience starting Jan1968, online at home since Mar1970

Newt Gingrich started us on the road to ruin. Now, he's back to finish the job

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Newt Gingrich started us on the road to ruin. Now, he's back to finish the job
Date: 06 Feb 2022
Blog: Facebook
Newt Gingrich started us on the road to ruin. Now, he's back to finish the job.
https://www.washingtonpost.com/opinions/2022/02/04/newt-gingrich-started-us-road-ruin-now-hes-back-finish-job/

'Bipartisanship' Is Dead in Washington. That's Fine. Why the halo around an idea that barely worked when it existed?
https://www.politico.com/news/magazine/2021/05/28/bipartisan-congress-dead-washington-491372

On CNN, Fareed called out how political strife and conflict got much worse with Speaker Gingrich. In Jan1999, we were asked to help try and prevent the coming economic mess (we failed). One of the things we were told was that there had always been conflict between the two parties, but they could put their differences aside and come together to do things for the country. Gingrich weaponized the political process; everything came to be about party advantage (the other party had to lose even if it damaged the country), and the level of party conflict and strife got significantly worse.

The Man Who Broke Politics; Newt Gingrich turned partisan battles into bloodsport, wrecked Congress, and paved the way for Trump's rise. Now he's reveling in his achievements.
https://www.theatlantic.com/magazine/archive/2018/11/newt-gingrich-says-youre-welcome/570832/
How Newt Gingrich Crippled Congress. No single person bears more responsibility for how much Americans hate Congress than Newt Gingrich. Here's what he did to it.
https://www.thenation.com/article/how-newt-gingrich-crippled-congress/
'Combative, Tribal, Angry': Newt Gingrich Set The Stage For Trump, Journalist Says
https://www.npr.org/2018/11/01/662906525/combative-tribal-angry-newt-gingrich-set-the-stage-for-trump-journalist-says

Note: during Obama's first term, Republican members of Congress repeatedly claimed in public that their primary goal was to make sure Obama didn't have a 2nd term (... and did everything possible to obstruct all legislative efforts)

economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
fiscal responsibility act posts
https://www.garlic.com/~lynn/submisc.html#fiscal.responsibility.act

some past posts mentioning Gingrich:
https://www.garlic.com/~lynn/2021f.html#39 'Bipartisanship' Is Dead in Washington
https://www.garlic.com/~lynn/2021e.html#11 George W. Bush Can't Paint His Way Out of Hell
https://www.garlic.com/~lynn/2021d.html#8 A Discourse on Winning and Losing
https://www.garlic.com/~lynn/2021d.html#4 The GOP's Fake Controversy Over Colin Kahl Is Just the Beginning
https://www.garlic.com/~lynn/2021c.html#93 How 'Owning the Libs' Became the GOP's Core Belief
https://www.garlic.com/~lynn/2021c.html#51 In Biden's recovery plan, an overdue rebuke of trickle-down economics
https://www.garlic.com/~lynn/2021.html#29 How the Republican Party Went Feral. Democracy is now threatened by malevolent tribalism
https://www.garlic.com/~lynn/2019c.html#21 Mitch McConnell has done far more to destroy democratic norms than Donald Trump
https://www.garlic.com/~lynn/2019b.html#45 What is ALEC? 'The most effective organization' for conservatives, says Newt Gingrich
https://www.garlic.com/~lynn/2019.html#41 Family of Secrets
https://www.garlic.com/~lynn/2018f.html#40 America's electoral system gives the Republicans advantages over Democrats
https://www.garlic.com/~lynn/2018f.html#28 America's electoral system gives the Republicans advantages over Democrats

--
virtualization experience starting Jan1968, online at home since Mar1970

On the origin of the /text section/ for code

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: On the origin of the /text section/ for code
Newsgroups: alt.folklore.computers
Date: Mon, 07 Feb 2022 09:52:23 -1000
Charlie Gibbs <cgibbs@kltpzyxm.invalid> writes:

Still, you have to bootstrap the supervisor... :-)

re:
https://www.garlic.com/~lynn/2022.html#111 On the origin of the /text section/ for code
https://www.garlic.com/~lynn/2022.html#113 On the origin of the /text section/ for code
https://www.garlic.com/~lynn/2022.html#114 On the origin of the /text section/ for code

... as mentioned upthread, both the Multics group (5th flr) and the IBM science center (4th flr) had come over from the MIT CTSS/7094.

the initial CP67 I got at the univ. had all the assembler source on OS/360 ... assembler output txt decks were punched and arranged in a card tray prefixed by the BPS loader. The cards were IPL'ed; when everything was loaded into memory, the LDT card transferred execution to "CPINIT".

CPINIT would write initial IPL text to disk, and then a copy of memory to disk. IPL'ing the disk would bring in CPINIT, but at an entry point that reversed the write to a read.

360 IPL reads 24 bytes into location 0, assumed to be a PSW and two I/O CCWs ... and continues the I/O with a transfer to the first CCW. When the I/O finishes, it loads the "PSW".

Shortly later, the CP group moved all the source to CMS ... and assembler output was CMS txt files. A CMS exec could "punch" the files to a virtual punch that was transferred to a virtual reader (instead of the real punch); the virtual reader was then IPL'ed and the result written to disk (either a disk for a test system, or the production system disk, updating the production system for the next real IPL).

An 80x80 image of the card files (BPS loader followed by all the txt files) could also be written to tape. I got in the habit of keeping "production system" tapes where the first file could be IPL'ed (to restore that system) ... followed by all the CMS source and other files that went into making that specific production system.

I was able to discover that on transfer from the BPS loader, it passed the address of the ESD table and the number of entries in registers ... and did some fiddling that copied the table to the end of the CP67 kernel ... including it in the image written to disk.

I mentioned running into the BPS loader's 255 external ESD limit and all sorts of hacks to work around it. Later, at the science center, I found a dusty file cabinet that had the source for the BPS loader ... and was able to update the BPS loader to handle more ESD entries.

Morph from CP67->VM370 was similar ... except all module names had "DMK" prefix and CPINIT became DMKCPI.

Inside IBM, an IOS3270 version of the "green card" was done; I've done a quick&dirty conversion to HTML ... this is "fixed storage"
https://www.garlic.com/~lynn/gcard.html#4

0: 8 byte IPL PSW
8: 8 byte IPL CCW1
16: 8 byte IPL CCW2


card note: CCW1 would normally read an 80-byte image to a fixed address; that image could be instructions, more CCWs (an 80-byte image holds up to 10), or a combination of instructions and CCWs. If there were more CCWs, then CCW2 could be a "tic" command that continued the I/O channel program with the additional CCWs.
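
A small sketch of that 24-byte IPL record layout (PSW at 0, CCW1 at 8, CCW2 at 16), packing the standard S/360 CCW fields (command code, 3-byte data address, flags, unused byte, count). The command codes, addresses and counts below are placeholders for illustration, not a bootable image.

import struct

def ccw(command, data_address, flags, count):
    """Pack one 8-byte S/360 channel command word."""
    return struct.pack(">B3sBBH", command,
                       data_address.to_bytes(3, "big"), flags, 0, count)

CC_READ = 0x02          # basic read command code (placeholder)
CC_TIC  = 0x08          # transfer-in-channel ("tic")
CHAIN   = 0x40          # command-chaining flag

ipl_psw  = bytes(8)                               # placeholder PSW
ipl_ccw1 = ccw(CC_READ, 0x000400, CHAIN, 80)      # read an 80-byte image
ipl_ccw2 = ccw(CC_TIC,  0x000400, 0x00, 1)        # continue with CCWs just read

ipl_record = ipl_psw + ipl_ccw1 + ipl_ccw2        # what IPL puts at location 0
assert len(ipl_record) == 24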

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

--
virtualization experience starting Jan1968, online at home since Mar1970

GM C4 and IBM HA/CMP

From: Lynn Wheeler <lynn@garlic.com>
Subject: GM C4 and IBM HA/CMP
Date: 07 Feb 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#112 GM C4 and IBM HA/CMP

Corvette: not quality, but tight tolerances. They were talking about shock absorbers and other parts that used to be made by a GM company ... but the parts operations had been spun off, and between the original design and final delivery ... the shock absorbers (and other parts) had changed size&shape and no longer fit the original design. Different from quality control.

... reference I use in John Boyd meetings

How Toyota Turns Workers Into Problem Solvers
http://hbswk.hbs.edu/item/how-toyota-turns-workers-into-problem-solvers

To paraphrase one of our contacts, he said, "It's not that we don't want to tell you what TPS is, it's that we can't. We don't have adequate words for it. But, we can show you what TPS is."

We've observed that Toyota, its best suppliers, and other companies that have learned well from Toyota can confidently distribute a tremendous amount of responsibility to the people who actually do the work, from the most senior, experienced member of the organization to the most junior. This is accomplished because of the tremendous emphasis on teaching everyone how to be a skillful problem solver.


... snip ...

.... trivia: in the early/mid-80s, our HSDT project was having some custom equipment built to our spec by companies on the other side of the Pacific ... and we would periodically visit them to see how things were going. They liked to show off advanced technology projects with other (Japanese) companies ... including Toyota.

I periodically mention that the Friday before one such visit ... got email from Raleigh announcing a new "high-speed" discussion forum with the following definitions:

low-speed 9.6kbits/sec,
medium speed 19.2kbits/sec,
high-speed 56kbits/sec, and
very high-speed 1.5mbits/sec.


Monday morning, on the wall of a conference room on the other side of the Pacific, there were these definitions:

low-speed <20mbits/sec,
medium speed 100mbits/sec,
high-speed 200mbits-300mbits/sec,
very high-speed: >600mbits/sec


... snip ...

With import quotas, the importers realized that they could sell that many cars at the high end of the market ... rather than at the entry/low end (switching the kind of cars they sold also contributed to their cutting in half the elapsed time to come out with new models). No pressure at the low end of the market allowed US makers to nearly double prices over a couple of years. Customer earnings didn't increase, so loans had to stretch from 36 months to 60-72 months. Banks wouldn't do the longer loans w/o an increase in warranties. The poor US quality was killing the makers w/warranty costs ... forcing them into improving quality (not so much because the foreign competition had better quality).

c4 taskforce posts
https://www.garlic.com/~lynn/submisc.html#auto.c4.taskforce
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

--
virtualization experience starting Jan1968, online at home since Mar1970

GM C4 and IBM HA/CMP

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: GM C4 and IBM HA/CMP
Date: 07 Feb 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#112 GM C4 and IBM HA/CMP
https://www.garlic.com/~lynn/2022.html#117 GM C4 and IBM HA/CMP

Consulting with Chen was at Sequent; by the late 90s, Chen Supercomputer had been shut down and he was CTO at Sequent ... offhand I don't remember any other names at Sequent (this was before IBM bought Sequent and shut it down).
https://en.wikipedia.org/wiki/Sequent_Computer_Systems
... earlier, while at IBM, I had (also) been involved in SCI (which Sequent used for NUMA-Q)
https://en.wikipedia.org/wiki/Scalable_Coherent_Interface

I did have some dealings at Cray Supercomputers in the 80s, but not with Chen. The communication group was fiercely fighting off client/server and distributed computing and was trying to block the release of mainframe TCP/IP support. When they lost, they changed their tactic and said that since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got 44kbytes/sec aggregate using nearly a whole 3090 processor. I did the enhancements for RFC1044 and, in some tuning tests at Cray between a 4341 and a Cray, got sustained 4341 channel throughput using only a modest amount of the 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).

RFC 1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

--
virtualization experience starting Jan1968, online at home since Mar1970

Amazon Just Poured Fuel on the Web3 Fire

From: Lynn Wheeler <lynn@garlic.com>
Subject: Amazon Just Poured Fuel on the Web3 Fire
Date: 07 Feb 2022
Blog: Facebook
Amazon Just Poured Fuel on the Web3 Fire. Profit in Amazon's cloud-computing division clocked in at nearly $18 billion last quarter. Web3 proponents are doubling down to take a piece of the pie.
https://www.thestreet.com/technology/amazon-just-handed-web3-a-massive-win

... could use credit card to (automagically) spin up a supercomputer (42nd largest in the world) on demand ... long ago (from 2011)
http://news.cnet.com/8301-13846_3-57349321-62/amazon-takes-supercomputing-to-the-cloud

... current list .... AWS cloud Products
https://aws.amazon.com/products/?aws-products-all.sort-by=item.additionalFields.productNameLowercase&aws-products-all.sort-order=asc&awsf.re%3AInvent=*all&awsf.Free%20Tier=*all&awsf.tech-category=*all
global infrastructure
https://aws.amazon.com/about-aws/global-infrastructure/?hp=tile&tile=map
elastic compute scaling
https://docs.aws.amazon.com/ec2/index.html?nc2=h_ql_doc_ec2
Amazon Linux
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/amazon-linux-ami-basics.html

AWS launches new EC2 instance type for high performance computing tasks. The new Hpc6a instances are "purpose-built" to provide a cost-effective option for customers seeking cloud-based access to high-performance computing's demanding hardware requirements.
https://www.zdnet.com/article/aws-launches-new-ec2-instance-type-for-high-performance-computing-tasks/

misc ... a large cloud operation will have a dozen or more megadatacenters around the world ... each one with half a million or more blade servers having millions (or tens of millions) of cores, and staffed with 80-120 people (enormous automation).

cloud megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

--
virtualization experience starting Jan1968, online at home since Mar1970

Series/1 VTAM/NCP

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Series/1 VTAM/NCP
Date: 07 Feb 2022
Blog: Facebook
PNB trivia: Before I graduate, I'm hired into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit to better monetize the investment) ... the Renton datacenter had a couple hundred million in IBM 360s; 360/65s were arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room (747#3 was flying the skies of Seattle getting FAA flt certification). When I graduate, I join the IBM science center instead of staying at Boeing.

In the 80s, the 60s Boeing IBM marketing rep is a senior/consulting marketing rep dealing with PNB. I'm con'ed into trying to turn the PNB VTAM+NCP emulation, implemented on Series/1, into a type-1 IBM product. The IBM communication group was infamous for internal dirty tricks ... and several people tried to anticipate every one ... what the communication group then did can only be described as truth is stranger than fiction. In any case, post with part of the PNB presentation at the spring '86 COMMON user group meeting
https://www.garlic.com/~lynn/99.html#70

post with part of my presentation at the SNA Architecture Review Board meeting in Raleigh, Oct86 (taunting the tiger? the SNA ARB executive only wanted to know who authorized me to give the talk; many people in the meeting expressed the opinion that it was much better than what they were working on).
https://www.garlic.com/~lynn/99.html#67

some recent S/1 VTAM/NCP posts
https://www.garlic.com/~lynn/2021k.html#115 Peer-Coupled Shared Data Architecture
https://www.garlic.com/~lynn/2021k.html#87 IBM and Internet Old Farts
https://www.garlic.com/~lynn/2021j.html#14 IBM SNA ARB
https://www.garlic.com/~lynn/2021i.html#83 IBM Downturn
https://www.garlic.com/~lynn/2021f.html#2 IBM Series/1
https://www.garlic.com/~lynn/2021c.html#91 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2019d.html#114 IBM HONE
https://www.garlic.com/~lynn/2019d.html#109 IBM HONE
https://www.garlic.com/~lynn/2019d.html#106 IBM HONE
https://www.garlic.com/~lynn/2019.html#52 Series/1 NCP/VTAM
https://www.garlic.com/~lynn/2019.html#2 The rise and fall of IBM
https://www.garlic.com/~lynn/2018f.html#34 The rise and fall of IBM
https://www.garlic.com/~lynn/2018e.html#94 It's 1983: What computer would you buy?
https://www.garlic.com/~lynn/2018e.html#2 Frank Heart Dies at 89
https://www.garlic.com/~lynn/2017j.html#109 It's 1983: What computer would you buy?
https://www.garlic.com/~lynn/2017j.html#93 It's 1983: What computer would you buy?
https://www.garlic.com/~lynn/2017i.html#52 IBM Branch Offices: What They Were, How They Worked, 1920s-1980s
https://www.garlic.com/~lynn/2017h.html#99 Boca Series/1 & CPD
https://www.garlic.com/~lynn/2017f.html#90 pneumatic cash systems was Re: [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017c.html#59 The ICL 2900
https://www.garlic.com/~lynn/2017.html#98 360 & Series/1

--
virtualization experience starting Jan1968, online at home since Mar1970

HSDT & Clementi's Kinston E&S lab

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: HSDT & Clementi's Kinston E&S lab
Date: 08 Feb 2022
Blog: Facebook
early/mid-80s, HSDT had a T1 (1.5mbits/sec) satellite link between Los Gatos and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in Kingston (this was different from and not related to Kingston's "supercomputer" effort). His lab had a boatload of FPS boxes:
https://en.wikipedia.org/wiki/Floating_Point_Systems

Cornell University, led by physicist Kenneth G. Wilson, made a supercomputer proposal to NSF with IBM to produce a processor array of FPS boxes attached to an IBM mainframe with the name lCAP.
... snip ...

Our HSDT dealings with the NSF director to do the interconnect for the NSF supercomputer centers ... included meetings with Wilson (and other univ. and national labs). At the time, FPS boxes had 40mbyte/sec disk arrays (when IBM's fastest channel was 3mbytes/sec).

In 1988, I was asked to help LLNL (national lab) standardize some serial stuff they were playing with, which quickly becomes the fibre channel standard (including some stuff I had done in 1980; initially 1gbit/sec full duplex, 2gbit/sec aggregate, 200mbyte/sec). IBM eventually gets their serial stuff released in 1990 with ES/9000 as ESCON (when it is already obsolete, 17mbytes/sec).

hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
nsfnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
ibm's ficon (built on FCS) posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

SHARE LSRAD Report

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: SHARE LSRAD Report
Date: 08 Feb 2022
Blog: Facebook
In 2011, I scanned the Dec1979 SHARE "Towards More Usable Systems: The LSRAD Report, Large Systems Requirements for Application Development" for bitsavers ... a problem was that the copyright law change effective 1jan1978 increased protection to 70yrs (otherwise it would have been out of copyright) and I had a devil of a time finding somebody that would approve putting LSRAD up on bitsavers. Bitsavers share directory
http://www.bitsavers.org/pdf/ibm/share/
LSRAD Report
http://www.bitsavers.org/pdf/ibm/share/The_LSRAD_Report_Dec79.pdf

This is a report of the SHARE Large Systems Requirements for Application Development (LSRAD) task force. This report proposes an evolutionary plan for MVS and VM/370 that will lead to simpler, more efficient and more usable operating systems. The report is intended to address two audiences: the users of IBM's large operating systems and the developers of those systems.

.. snip ...

trivia: in 1974, CERN did a head-to-head comparison between MVS and VM/370 and published the report at SHARE. Inside IBM, the report was classified "IBM Confidential - Restricted" ... aka available on a need-to-know basis only.

also from that era, other SHARE trivia:
http://www.mxg.com/thebuttonman/boney.asp
from above:

Words to follow along with... (glossary at bottom)

If it IPL's then JES won't start,
And if it gets up then it falls apart,
MVS is breaking my heart,
Maybe things will get a little better in the morning,
Maybe things will get a little better.
The system is crashing, I'm having a fit,
and DSS doesn't help a bit,
the shovel came with the debugging kit,
Maybe things will get a little better in the morning,
Maybe things will get a little better.
Work Your Fingers to the Bone and what do you get?
Boney Fingers, Boney Fingers!


... from glossary

$4K - MVS was the first operating system for which the IBM Salesman got a $4000 bonus if he/she could convince their customer to install VS 2.2 circa 1975. IBM was really pissed off that this fact became known thru this

... snip ...

past posts mentioning LSRAD:
https://www.garlic.com/~lynn/2015f.html#82 Miniskirts and mainframes
https://www.garlic.com/~lynn/2014j.html#53 Amdahl UTS manual
https://www.garlic.com/~lynn/2013h.html#85 Before the PC: IBM invents virtualisation
https://www.garlic.com/~lynn/2013h.html#82 Vintage IBM Manuals
https://www.garlic.com/~lynn/2013e.html#52 32760?
https://www.garlic.com/~lynn/2012p.html#58 What is holding back cloud adoption?
https://www.garlic.com/~lynn/2012o.html#36 Regarding Time Sharing
https://www.garlic.com/~lynn/2012o.html#35 Regarding Time Sharing
https://www.garlic.com/~lynn/2012i.html#40 GNOSIS & KeyKOS
https://www.garlic.com/~lynn/2012i.html#39 Just a quick link to a video by the National Research Council of Canada made in 1971 on computer technology for filmmaking
https://www.garlic.com/~lynn/2012f.html#58 Making the Mainframe more Accessible - What is Your Vision?
https://www.garlic.com/~lynn/2011p.html#146 IBM Manuals
https://www.garlic.com/~lynn/2011p.html#22 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011p.html#15 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011p.html#14 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011p.html#11 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011p.html#10 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011n.html#70 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011n.html#62 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011.html#89 Make the mainframe work environment fun and intuitive
https://www.garlic.com/~lynn/2011.html#88 digitize old hardcopy manuals
https://www.garlic.com/~lynn/2011.html#85 Two terrific writers .. are going to write a book
https://www.garlic.com/~lynn/2010q.html#33 IBM S/360 Green Card high quality scan
https://www.garlic.com/~lynn/2010l.html#13 Old EMAIL Index
https://www.garlic.com/~lynn/2009n.html#0 Wanted: SHARE Volume I proceedings
https://www.garlic.com/~lynn/2009.html#70 A New Role for Old Geeks
https://www.garlic.com/~lynn/2009.html#47 repeat after me: RAID != backup
https://www.garlic.com/~lynn/2007d.html#40 old tapes
https://www.garlic.com/~lynn/2006d.html#38 Fw: Tax chooses dead language - Austalia
https://www.garlic.com/~lynn/2005e.html#1 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2001b.html#50 IBM 705 computer manual

--
virtualization experience starting Jan1968, online at home since Mar1970

SHARE LSRAD Report

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: SHARE LSRAD Report
Date: 08 Feb 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#122 SHARE LSRAD Report

long winded history: TYMSHARE started making their CMS-based online computer conferencing free to SHARE in Aug1976 as VMSHARE ... archives
http://vm.marist.edu/~vmshare
I cut a deal with TYMSHARE to get a monthly tape dump of all VMSHARE files for putting up on the internal network & internal systems (including the world-wide online sales&marketing support HONE systems). The hardest problem was the lawyers, who were afraid that internal IBMers would be contaminated by exposure to customer information (which might be different than what executives were feeding them).

trivia: I would frequently visit TYMSHARE; on one visit they demonstrated the ADVENTURE game ... which they had found on a Stanford PDP10, copied to their PDP10 and then ported to VM/CMS. The story was that when the CEO found out, he said that games weren't proper for an online business service and had to be removed ... he changed his mind when told that games had started accounting for 1/3rd of TYMSHARE revenue. I started making ADVENTURE available online inside IBM.

more trivia: then I was blamed for online computer conferencing on the internal network. It really took off in the spring of 1981, after I distributed a trip report about a visit to Jim Gray at Tandem (only maybe 300 participated, but claims were that upwards of 25,000 were reading). We printed six copies of some 300 pages, along with an executive summary and a summary of the summary, packaged in Tandem 3-ring binders, and sent them to the corporate executive committee (folklore is 5of6 wanted to fire me). One of the outcomes was officially sanctioned IBM software and moderated forums ... although there was a joke that periodically half of all postings were mine. Then there was the observation that my posting activity became significantly moderated in later years. There was also a researcher paid to sit in the back of my office for nine months taking notes on how I communicated (face-to-face, telephone), who got copies of all my incoming and outgoing email and logs of all my instant messages. The results were internal IBM reports, conference papers and talks, books, and a Stanford PhD (joint with language and computer AI) ... from IBM JARGON:
https://comlay.net/ibmjarg.pdf

Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.

... and from summary of summary:

• The perception of many technical people in IBM is that the company is rapidly heading for disaster. Furthermore, people fear that this movement will not be appreciated until it begins more directly to affect revenue, at which point recovery may be impossible

• Many technical people are extremely frustrated with their management and with the way things are going in IBM. To an increasing extent, people are reacting to this by leaving IBM. Most of the contributors to the present discussion would prefer to stay with IBM and see the problems rectified. However, there is increasing skepticism that correction is possible or likely, given the apparent lack of commitment by management to take action

• There is a widespread perception that IBM management has failed to understand how to manage technical people and high-technology development in an extremely competitive environment.


... took another decade (1981-1992) ... IBM had gone into the red and was being reorganized into the 13 "baby blues" in preparation for breaking up the company .... reference gone behind paywall but mostly lives free at wayback machine
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM, but we get a call from the bowels of Armonk asking if we could help with the breakup of the company. Lots of business units were using supplier contracts in other units via MOUs. After the breakup, all of these contracts would be in different companies ... all of those MOUs would have to be cataloged and turned into their own contracts (however, before we get started, the board brings in a new CEO and reverses the breakup).

Also, we were hearing from former co-workers that top IBM executives were spending all their time shifting expenses from the following year to the current year. We asked our contact from the bowels of Armonk what was going on. He said that the current year had gone into the red and the executives wouldn't get a bonus. However, if they could shift enough expenses from the following year to the current year, even putting the following year just slightly into the black ... the way the executive bonus plan was written, they would get a bonus more than twice as large as any previous bonus (rewarded for having taken the company into the red).

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
online, world-wide sales&marketing HONE systems
https://www.garlic.com/~lynn/subtopic.html#hone
IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

TCP/IP and Mid-range market

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: TCP/IP and Mid-range market
Date: 08 Feb 2022
Blog: Facebook
IBM sold 4300s into the same mid-range market as DEC VAX ... and in similar numbers for small unit-count orders ... the big difference was large corporations with orders of hundreds of vm/4341s for placing out in departmental areas (so prevalent inside IBM that departmental conference rooms started to become a scarce commodity) ... sort of the leading edge of the coming distributed computing tsunami. Old post with a decade of DEC VAX numbers, sliced & diced by model, year, US/non-US:
https://www.garlic.com/~lynn/2002f.html#0
By the mid-80s, workstations and PC servers were starting to take over the mid-range market.

other trivia: ... jan1979, I got asked to do a (cdc6600) benchmark on an engineering 4341 for a national lab that was looking at getting 70 for a compute farm ... sort of the leading edge of the coming cluster supercomputing tsunami.

Ed (responsible for the technology for the internal network, also used for the corporate sponsored bitnet)
https://en.wikipedia.org/wiki/Edson_Hendricks
and I transfer from the Cambridge science center to San Jose Research in 1977. Ed/Gillmor SJMN article gone behind paywall, but lives free at the wayback machine (Ed had transferred to San Diego by this time, and has since passed, Aug2020)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm

IN 1980, some engineers at International Business Machines Corp. tried to sell their bosses on a forward-looking project: connecting a large internal IBM computer network to what later became the Internet. The plan was shot down, and IBM's leaders missed an early chance to grasp the revolutionary significance of this emerging medium.

... snip ...

Also from wayback machine, some references off Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

early 80s, I got the HSDT project, T1 and faster computer links (both terrestrial and satellite), and was working with the NSF director; was supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen and finally an RFP is released (in part based on what we already had running). Preliminary Announcement (28Mar1986)
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... internal IBM politics prevent us from bidding on the RFP. The NSF director tries to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies, but that just makes the internal politics worse (as did claims that what we already had running was at least 5yrs ahead of the winning bid, RFP awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet
https://www.technologyreview.com/s/401444/grid-computing/

hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
nsfnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

other trivia: later in the 80s, the IBM communication group was fiercely fighting off client/server and distributed computing and trying to block the release of mainframe TCP/IP support. When they lost, they changed tactics and said that since they had IBM strategic ownership of everything that crossed datacenter walls, TCP/IP had to be released through them. What shipped got 44kbytes/sec aggregate using nearly a whole 3090 processor. I did the support for RFC1044 and, in some tuning tests at Cray Research between an IBM 4341 and a Cray, got sustained 4341 channel throughput using only a modest amount of 4341 processing time (something like a 500 times increase in bytes moved per instruction executed).

RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

--
virtualization experience starting Jan1968, online at home since Mar1970

TCP/IP and Mid-range market

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: TCP/IP and Mid-range market
Date: 08 Feb 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#123 TCP/IP and Mid-range market

At the 1jan1983 cut-over from IMPS/host to internetworking there were approx 100 IMPS and 255 hosts ... at the time the internal network was rapidly approaching 1000 hosts. Old post with corporate locations that added one or more links during 1983:
https://www.garlic.com/~lynn/2006k.html#8

One of the HSDT "problems" was a corporate requirement that all internal network links had to be encrypted ... standard corporate links were 56kbits (or slower ... because that was the fastest that the communication group products ran) and it wasn't too hard to find link encryptors for those (although when links crossed national boundaries there was all sorts of grief) ... but I hated what I had to pay for T1 link encryptors, and faster encryptors were almost impossible to find ... so I became involved in link encryptors that ran at least 3mbytes/sec (not mbits) and cost no more than $100 to build. The corporate encryption group then claimed it seriously weakened the DES crypto standard. It took me 3 months to figure out how to explain to them what was going on. It was a hollow victory: I was told that there was only one organization in the world that was allowed to use such crypto. I could make all I wanted, but they all had to be sent to an address on the east coast. It was when I realized there were 3 kinds of crypto: 1) the kind they don't care about, 2) the kind you can't do, 3) the kind you can only do for them.

internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

some recent posts mentioning 3kinds crypto
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2021e.html#58 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2021d.html#17 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#70 IBM/BMI/MIB
https://www.garlic.com/~lynn/2021b.html#57 In the 1970s, Email Was Special
https://www.garlic.com/~lynn/2021b.html#22 IBM Recruiting
https://www.garlic.com/~lynn/2021b.html#8 IBM Travel

--
virtualization experience starting Jan1968, online at home since Mar1970

On the origin of the /text section/ for code

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: On the origin of the /text section/ for code
Newsgroups: alt.folklore.computers
Date: Tue, 08 Feb 2022 11:56:19 -1000
Douglas Miller <durgadas311@gmail.com> writes:

There were a lot of different "binary" card formats used by various systems. Just today I was reading about one of the earlier IBM 700 systems, where it (if I understood it correctly) formatted some *rows* on the punch card as binary data, sort of like turning the data 90 degrees from "normal". This would make sense if you think of the card reader as starting with row 9 and reading all columns of the row in parallel. so, the format allowed for special punches in row 9, read first, that directed how those columns were interpreted on other rows (8, 7, ... 0, 11, 12).

re:
https://www.garlic.com/~lynn/2022.html#111 On the origin of the /text section/ for code
https://www.garlic.com/~lynn/2022.html#113 On the origin of the /text section/ for code
https://www.garlic.com/~lynn/2022.html#114 On the origin of the /text section/ for code
https://www.garlic.com/~lynn/2022.html#116 On the origin of the /text section/ for code

I mentioned upthread implementing 1401 MPIO on a 360/30 ... the univ had a 709/1401, with the 709 doing tape->tape and the 1401 doing unit record front end (tapes manually moved between the tape drives on the different systems). The univ had been sold a 360/67 (for tss/360) to replace the 709/1401, with a 360/30 as a temporary interim replacement for just the 1401 pending the 360/67 (tss/360 never came to production fruition, so it ran as a 360/65 with os/360).

Reading cards to tape, I had to recognize binary (two 6-bit bytes/column) and BCD (simple character encoding per column) ... the read defaulted to BCD and, on error, reread as binary (80 cols mapped to 160 bytes) ... both BCD & binary were written to 7-track tape (6 bits + parity).

similarly on output, had to recognize whether the images (read from tape) were to be punched BCD or binary.
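
A re-creation of the read logic just described, not the original MPIO code: default each card to a BCD read and, on a data check, reread the same card in column-binary mode (80 columns becoming 160 bytes). The read_card_bcd / read_card_binary / write_tape_record callables are hypothetical stand-ins for the real channel programs.

class DataCheck(Exception):
    """Raised when a BCD read hits a column that isn't valid BCD."""

def copy_cards_to_tape(read_card_bcd, read_card_binary, write_tape_record):
    while True:
        try:
            image = read_card_bcd()          # 80 bytes, one character per column
            mode = "BCD"
        except DataCheck:
            image = read_card_binary()       # 160 bytes, two 6-bit bytes per column
            mode = "binary"
        except EOFError:
            break                            # hopper empty
        # both kinds of record go to the same 7-track (6 bits + parity) tape;
        # the record length tells the punch side how to reproduce the card later
        write_tape_record(mode, image)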

Note: inside IBM, a 360 "green card" was implemented in CMS IOS3270 (full screen 3270 terminal). I've since done a quick and dirty conversion to HTML
https://www.garlic.com/~lynn/gcard.html
card/reader punch I/O for 3504/3505/3525 (same as 2540)
https://www.garlic.com/~lynn/gcard.html#23
2540
https://en.wikipedia.org/wiki/IBM_2540

other trivia: the Univ. had mark-sense cards for class registration .... a large number of tables around the perimeter of the gym for class signups ... the cards would fill maybe 30-40 trays total. They were then fed through mark-sense processing & printed/punched. I wrote part of the program for registration ... cards were fed into the 2540 card reader toward the middle stacker. These were all "manila" cards ... the punch side of the 2540 had colored-edge cards ... if there was something wrong with a registration card, a blank colored-edge card would be punched to the middle stacker (behind the card with the problem). It was then a matter of going through all the trays looking for the colored-edge cards marking problem registrations that needed fixing.

more trivia: account of how the 360 originally was supposed to be an ASCII machine ... but the ASCII unit record gear was late ... so it (temporarily) went out as EBCDIC (but never recovered) ... gone 404, but lives on at the wayback machine
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM

--
virtualization experience starting Jan1968, online at home since Mar1970

On why it's CR+LF and not LF+CR [ASR33]

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: On why it's CR+LF and not LF+CR [ASR33]
Newsgroups: alt.folklore.computers
Date: Tue, 08 Feb 2022 12:17:47 -1000
John Levine <johnl@taugh.com> writes:

They also had 33's and early Unix had tty modes that added the needed delays.

when cp67 was originally installed at the univ, it had automagic terminal recognition for 1050&2741 (using the terminal controller "SAD" CCW to switch the line port-scanner type). The univ. had some number of (ascii) 33&35 teletypes ... so I added ascii terminal support to CP67 ... and had to change the number of NULL delay characters because of the different line speed ... but I did extend the automagic terminal type recognition ... in theory any type of terminal could be connected to any port.
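
The recognition idea, sketched (the real CP67 code was channel programs against the terminal controller, not Python): for each candidate terminal type, switch the port's line scanner via the "SAD" operation, probe the line, and accept the first type whose response decodes cleanly. issue_sad and probe_terminal are hypothetical stand-ins for the real I/O.

CANDIDATE_TYPES = ["2741", "1052", "TTY"]    # order tried is illustrative

def recognize_terminal(port, issue_sad, probe_terminal):
    """Return the terminal type that answers sensibly on this port, or None."""
    for term_type in CANDIDATE_TYPES:
        issue_sad(port, term_type)           # switch the line scanner for this port
        reply = probe_terminal(port, term_type)
        if reply is not None:                # response decoded cleanly at this setting
            return term_type
    return None                              # unrecognized; leave the line disabled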

I then wanted to have a single dial-in number ... hunt group
https://en.wikipedia.org/wiki/Line_hunting

for all terminals. It didn't quite work: while I could switch the line scanner for each port (on the IBM telecommunication controller), IBM had taken a short cut and hard-wired the line speed for each port (TTY was a different line speed from 2741&1052). Thus was born a univ. project to do a clone controller: we built a mainframe channel interface board for an Interdata/3 programmed to emulate the mainframe telecommunication controller, with the addition that it could also do dynamic line speed determination. Later it was enhanced with an Interdata/4 for the channel interface and a cluster of Interdata/3s for the port interfaces. Interdata (and later Perkin/Elmer) sold it commercially as an IBM clone controller. Four of us at the univ. got written up as responsible for (some part of the) clone controller business.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer

clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

I fiddled some one-byte calculations in the ASCII support (assuming no ASCII line would be more than 255; the science center was picking up and shipping most of the code I was doing to customers). Van Vleck was supporting the MIT Urban Systems Lab CP67 (in tech sq, opposite the bldg that multics & the science center were in). He changed the maximum ASCII terminal line length to 1200 (for some device down at harvard) and CP67 crashed 27 times in one day (because of the one-byte fiddle, line-length calculations were invalid and were overrunning buffers).
https://www.multicians.org/thvv/360-67.html
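
A tiny illustration of why the one-byte fiddle broke: a length carried in a single byte wraps modulo 256, so a 1200-byte maximum looks like 176 and the length checks pass while the data overruns the buffer (the buffer size below is just for illustration).

def one_byte(value):
    return value & 0xFF          # what a one-byte field actually holds

assert one_byte(255) == 255      # still fine
assert one_byte(1200) == 176     # 1200 mod 256: the "limit" the code now sees

buffer_len = 300                 # suppose the real buffer is 300 bytes
claimed = one_byte(1200)         # the truncated maximum looks safely small
print(claimed <= buffer_len)     # True: the check passes, but the data doesn't fit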

--
virtualization experience starting Jan1968, online at home since Mar1970

SHARE LSRAD Report

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: SHARE LSRAD Report
Date: 08 Feb 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#122 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022.html#123 SHARE LSRAD Report

I was frequently offending all sorts of people ... and many wanted to fire me ... however, one of my hobbies after joining IBM was enhanced production operating systems for internal datacenters ... I could wander around (many times "under the radar") at all sorts of datacenters ... including the world-wide, online sales&marketing HONE systems. There had been numerous attempts to convert HONE from VM370 to MVS ... in the 80s, somebody decided that the conversions had all failed because HONE was running my systems. HONE was then told they had to convert to a standard vanilla product-supported VM370 system (what would HONE do if I was ever run over by a bus) ... the assumption being that once that conversion was done, it would be easier to convert HONE to MVS.

HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

some recent posts mentioning csc/vm &/or sjr/vm
https://www.garlic.com/~lynn/2022.html#101 Online Computer Conferencing
https://www.garlic.com/~lynn/2022.html#86 Virtual Machine SIE instruction
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2022.html#42 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#37 Error Handling
https://www.garlic.com/~lynn/2022.html#29 IBM HONE
https://www.garlic.com/~lynn/2022.html#26 Is this group only about older computers?
https://www.garlic.com/~lynn/2022.html#17 Mainframe I/O
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021k.html#121 Computer Performance
https://www.garlic.com/~lynn/2021k.html#115 Peer-Coupled Shared Data Architecture
https://www.garlic.com/~lynn/2021k.html#23 MS/DOS for IBM/PC
https://www.garlic.com/~lynn/2021j.html#59 Order of Knights VM
https://www.garlic.com/~lynn/2021j.html#25 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021j.html#11 System Availability
https://www.garlic.com/~lynn/2021i.html#61 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021h.html#79 IBM Internal network
https://www.garlic.com/~lynn/2021h.html#78 IBM Internal network
https://www.garlic.com/~lynn/2021h.html#67 CSC, Virtual Machines, Internet
https://www.garlic.com/~lynn/2021h.html#54 PROFS
https://www.garlic.com/~lynn/2021h.html#46 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021h.html#3 Cloud computing's destiny
https://www.garlic.com/~lynn/2021g.html#34 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021f.html#30 IBM HSDT & HA/CMP
https://www.garlic.com/~lynn/2021e.html#25 rather far from Univac 90/30 DIAG instruction
https://www.garlic.com/~lynn/2021d.html#39 IBM 370/155
https://www.garlic.com/~lynn/2021b.html#80 AT&T Long-lines
https://www.garlic.com/~lynn/2021b.html#55 In the 1970s, Email Was Special
https://www.garlic.com/~lynn/2021b.html#32 HONE story/history
https://www.garlic.com/~lynn/2021b.html#15 IBM Recruiting

--
virtualization experience starting Jan1968, online at home since Mar1970

Dataprocessing Career

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Dataprocessing Career
Date: 08 Feb 2022
Blog: Facebook
partial long-winded list .... As an undergraduate, redid OS/360 sysgens to order datasets and PDS members to optimize arm seek and multi-track searches, cutting 2/3rds off the elapsed time for student jobs. Rewrote loads of CP67 code, initially cutting CP67 CPU overhead for an OS/360 benchmark from 534 seconds to 113 seconds (a 421-second reduction). Did dynamic adaptive resource management/scheduling algorithms and page replacement algorithms, added ordered seek queuing and rotational optimization to the CP67 DASD code ... bunch of other stuff.
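
A hedged sketch of the "ordered seek queuing" idea named above (an elevator-style ordering of pending DASD requests by cylinder), not the actual CP67 DASD code; the request representation is invented for illustration.

def order_seeks(pending_cylinders, current_cyl, moving_up=True):
    """Return pending requests in elevator (ordered seek) service order."""
    ahead  = sorted(c for c in pending_cylinders if c >= current_cyl)
    behind = sorted((c for c in pending_cylinders if c < current_cyl),
                    reverse=True)
    # sweep in the current direction of arm travel, then sweep back
    return (ahead + behind) if moving_up else (behind + ahead)

# example: arm at cylinder 100, moving toward higher cylinders
print(order_seeks([30, 180, 95, 101, 150], 100))   # [101, 150, 180, 95, 30]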

After joining IBM, one of my hobbies was enhanced production operating systems for lots of internal datacenters (including the internal, world-wide, sales&marketing support HONE systems). In the morph from CP67->VM370, the development group dropped and/or significantly simplified a lot of stuff (including much of the work I had done as an undergraduate); SHARE resolutions to IBM kept requesting that I be allowed to put them back into VM370.

Wrote some of VS/Repack, originally an internal tool that tracked execution and storage use and would do semi-automatic program reorganization to optimize execution in a virtual memory environment; it was heavily used by many internal groups moving OS/360 software to VS1 & VS2 and finally released to customers in 1977, as was some of my stuff as the VM370 Resource Manager.

Wanted to demonstrate that REX (long before it was released as REXX) wasn't just another pretty scripting language ... I selected IPCS (a large "dump reader" assembler program); the objective was to re-implement it in REX with ten times the function and ten times the performance (sleight of hand to make interpreted REX run faster than assembler), working half time over 3 months. Finished early, so started a library of automated scripts that looked for common failure signatures. For some reason it was never released to customers, even though it was in use by nearly every internal datacenter and PSR.

Did scientific/technical and commercial scale-up for our last product at IBM, HA/CMP. Had been asked to write a section for IBM's corporate continuous availability strategy document, but it got pulled when both Rochester (AS/400) and POK (mainframe) complained (that they couldn't meet the requirements). Cluster scale-up was then transferred and announced as an IBM supercomputer, and we were told we couldn't work on anything with more than four processors. We leave IBM a few months later.

After leaving IBM, I was brought in as a consultant to a small client/server startup; two former Oracle people (that we had worked with on HA/CMP commercial cluster scale-up) were there, responsible for something called "commerce server", and wanted to do payment transactions on the server. The startup had also invented this technology they called "SSL" that they wanted to use; the result is now frequently called "electronic commerce". I had complete authority for everything between the servers and the financial payment networks (but could only make recommendations on the client/server side, some of which were almost immediately violated).

e-commerce gateway
https://www.garlic.com/~lynn/subnetwork.html#gateway

I did a talk, "Why The Internet Isn't Business Critical Dataprocessing", based on the compensating processes and code I had to do for "electronic commerce" ... that Postel (Internet Standards Editor)
https://en.wikipedia.org/wiki/Jon_Postel
would sponsor at ISI & USC graduate school.


science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
dynamic adaptive resource&scheduling posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
paging algorithms posts
https://www.garlic.com/~lynn/subtopic.html#clock
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
dumprx posts
https://www.garlic.com/~lynn/submain.html#dumprx
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
availability posts
https://www.garlic.com/~lynn/submain.html#available

--
virtualization experience starting Jan1968, online at home since Mar1970

--
previous, next, index - home