List of Archived Posts

2010 Newsgroup Postings (07/12 - 08/02)

Old EMAIL Index
Honoree pedigrees
TSS (Transaction Security System)
taking down the machine - z9 series
Did a mainframe glitch trigger DBS Bank outage?
Does an 'operator error' counts as a 'glitch?
Z/OS 31bit or 64bit
Idiotic programming style edicts
Idiotic programming style edicts
Age
Titles for the Class of 1978
Titles for the Class of 1978
Idiotic programming style edicts
Old EMAIL Index
Age
Age
History--automated payroll processing by other than a computer?
History--automated payroll processing by other than a computer?
Old EMAIL Index
Old EMAIL Index
Old EMAIL Index
Titles for the Class of 1978
Old EMAIL Index
OS idling
OS idling
Idiotic programming style edicts
Root Zone DNSSEC Deployment Technical Status Update
OS idling
Mainframe Hacking -- Fact or Fiction
zPDT paper
How much is the Data Center charging for each mainframe user?
Wax ON Wax OFF -- Tuning VSAM considerations
OS idling
History of Hard-coded Offsets
Age
TSSO - Hardcoded Offsets - Etc
Great things happened in 1973
Mainframe Hacking -- Fact or Fiction
Who is Really to Blame for the Financial Crisis?
Age
Who is Really to Blame for the Financial Crisis?
History--automated payroll processing by other than a computer?
IBM zEnterprise Announced
PROP instead of POPS, PoO, et al
PROP instead of POPS, PoO, et al
PROP instead of POPS, PoO, et al
Age
C-I-C-S vs KICKS
Who is Really to Blame for the Financial Crisis?
James Gosling
C-I-C-S vs KICKS
Mainframe Hacking -- Fact or Fiction
Age
Who is Really to Blame for the Financial Crisis?
C-I-C-S vs KICKS
Mainframe Hacking -- Fact or Fiction
Who is Really to Blame for the Financial Crisis?
A mighty fortress is our PKI
A mighty fortress is our PKI
A mighty fortress is our PKI
Who is Really to Blame for the Financial Crisis?
Mainframe Slang terms
A mighty fortress is our PKI
A mighty fortress is our PKI, Part II
A mighty fortress is our PKI, Part II
the Federal Reserve, was Re: Snow White and the Seven Dwarfs
the Federal Reserve, was Re: Snow White and the Seven Dwarfs
A mighty fortress is our PKI, Part II
Who is Really to Blame for the Financial Crisis?
Who is Really to Blame for the Financial Crisis?
A slight modification of my comments on PKI
A slight modification of my comments on PKI
A slight modification of my comments on PKI
A mighty fortress is our PKI, Part II
CSC History
Location of first programmable computer
History of Hard-coded Offsets
Five Theses on Security Protocols
Five Theses on Security Protocols
Five Theses on Security Protocols
Idiotic programming style edicts
A mighty fortress is our PKI
Five Theses on Security Protocols
Five Theses on Security Protocols
CSC History

Old EMAIL Index

From: lynn@garlic.com (Lynn Wheeler)
Date: 12 July, 2010
Subject: Old EMAIL Index
Blog: Order of Knights of VM
re:
https://www.garlic.com/~lynn/lhwemail.html

Bob Adair had created a flavor of tape archive at the science center in the 70s. I had taken a whole bunch of tapes with me when I transferred to the west coast in '77. I had gotten into regularly backing up stuff ... and it became more formal after a researcher was hired for 9 months to sit in the back of my office, go with me to meetings, and take notes on phone conversations and face-to-face communication. They also got copies of all my incoming and outgoing email as well as logs of all instant messaging ... which required putting together a little more structured archiving of email in the early 80s.

The researcher arrangement came about after I was blamed for computer conferencing on the internal network in the late 70s and early 80s (besides a corporate internal report, it also resulted in a stanford PHD thesis ... joint between language and computer AI ... as well as some number of papers and books on computer mediated conversation). some past posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

misc. past posts mentioning internal network (larger than arpanet/internet from just about the beginning until possibly late 85 or early 86)
https://www.garlic.com/~lynn/subnetwork.html#internalnet

Most of the stuff prior to '77 was lost when the Almaden research lab had a "glitch" in their tape library in the mid-80s (even though the stuff involved multiple replicated tapes) ... with lots of allocated tapes getting mounted as "scratch" and overwritten. I had been contacted by Melinda just prior to that about the original cp67/cms multiple-level source maint. procedures ... which I managed to pull off tape and send her just before the tapes were lost.

Somewhat as a follow-on to what Adair had done earlier at Cambridge ... I did the original CMSBACK ... which went thru a number of internal releases/distributions ... some past email refs:
https://www.garlic.com/~lynn/lhwemail.html#cmsback

before morphing into workstation datasave for customers (subsequently morphed into ADSM and currently called TSM). some past references to backup/archive
https://www.garlic.com/~lynn/submain.html#backup

--
virtualization experience starting Jan1968, online at home since Mar1970

Honoree pedigrees

From: lynn@garlic.com (Lynn Wheeler)
Date: 12 July, 2010
Subject: Honoree pedigrees
Blog: Order of Knights of VM
rexx was still in its relative (internal) infancy and I thought I could demonstrate the usefulness of rexx by recoding IPCS (written in assembler) in rexx ... with ten times the function and ten times the performance ... doing it working less than half time over 3 months. it was also the leading edge of the OCO-wars ... so part of the scenario was ... when it shipped (as replacement for the existing IPCS) ... it would also ship with full (rexx) source.

it eventually was being used by every PSR and nearly every internal datacenter ... but for some reason, I was not able to get it shipped as the replacement IPCS. I eventually managed to get approval to give user group presentations on how I did the implementation. Within a couple months of the first presentation ... other versions were starting to appear in the wild. misc. past posts mentioning the activity
https://www.garlic.com/~lynn/submain.html#dumprx

--
virtualization experience starting Jan1968, online at home since Mar1970

TSS (Transaction Security System)

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: TSS (Transaction Security System)
Newsgroups: bit.listserv.ibm-main
Date: 13 Jul 2010 08:12:18 -0700
joarmc@SWBELL.NET (John McKown) writes:
TSS could be Top Secret Security - a replacement for RACF from CA. It is still an active product. CA could likely tell you more.

Could you possibly mean TSS/360? It was an IBM operating system for the S/360. There was also a TSS/370. Time Sharing System is what it stood for. Some information here:
https://en.wikipedia.org/wiki/TSS/360


aka, the official product for the 360/67 virtual memory machine (i.e. pretty much a 360/65 with hardware address translation added).

the science center had been hoping to win project mac ... with a virtual memory system. project mac (something of a follow-on to ctss) went to GE & multics.

the science center decided to do a virtual memory system anyway ... they tried to get a 360/50 to modify with hardware address translation ... but all the spare 360/50s were going to the air traffic control system ... so they had to settle for a 360/40, which they modified ... and built the (virtual memory, virtual machine) cp/40. when a 360/67 machine was finally available, cp/40 morphed into cp/67. folklore is that at some point, there were 100 people working on tss/360 for every person working on cp67/cms (& something about large numbers of people contributing to extremely bloated software).

some amount of gory details are available in melinda's virtual machine history found here:
http://www.leeandmelindavarian.com/Melinda#VMHist

in various formats:
http://www.leeandmelindavarian.com/Melinda/25paper.listing
http://www.leeandmelindavarian.com/Melinda/25paper.ps
http://www.leeandmelindavarian.com/Melinda/25paper.pdf

bitsavers has some number of old tss/360 documents
http://www.bitsavers.org/pdf/ibm/360/tss/

as well as 360/67 functional characteristics
http://www.bitsavers.org/pdf/ibm/360/functional_characteristics/

a couple recent posts mentioning CTSS:
https://www.garlic.com/~lynn/2010k.html#48 GML
https://www.garlic.com/~lynn/2010k.html#55 GML
https://www.garlic.com/~lynn/2010k.html#61 GML
https://www.garlic.com/~lynn/2010k.html#69 GML

other past posts mentioning science center
https://www.garlic.com/~lynn/subtopic.html#545tech

the later tss/370 effort got a special deal with at&t to do a stripped down low-level tss/370 kernel (SSUP) that would have unix layered on top.

--
virtualization experience starting Jan1968, online at home since Mar1970

taking down the machine - z9 series

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: taking down the machine - z9 series
Newsgroups: alt.folklore.computers
Date: Tue, 13 Jul 2010 11:58:06 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
and demise of future system

re:
https://www.garlic.com/~lynn/2010k.html#57 taking down the machine - z9 series

somewhat related to comments about the aftermath of the future system failure ... some of the business units got "business executives" ... somewhat replacing people that had technical/engineering backgrounds.
https://www.garlic.com/~lynn/2001f.html#33

other posts mentioning future system
https://www.garlic.com/~lynn/submain.html#futuresys

one of the (business executive) characteristics seemed to be a much greater concern for image and career. the los gatos lab had been built in the 60s on a couple hundred acres some distance from the main plant site (at some point considered the most scenic lab in the company). a new leader of the disk division decided that they would rather have their offices in the los gatos lab ... rather than the main plant site ... and set about having a wing of the los gatos lab converted for their use.

about 20 admin & staff were going to be part of the move ... the designated wing was renovated to be more appropriate(?) for executive use (than for engineers), including having the tile floor (in the executive section) overlaid with carpeting (the different territories were then very clearly delineated by the type of floor covering). there was folklore that the walls and windows of the executive section were also retrofitted with various kinds of countermeasures to external snooping/eavesdropping.

the executive was replaced (with somebody who had a long engineering career, and who decided to stay at the main plant site) before the project was finalized. For instance, the planned executive private driveway, executive private parking lot, and executive private entrance were canceled (before work on them had started).

I was then providing some help/services to the los gatos lab ... and they provided me several offices in the now vacant "executive" wing (along with jokes apologizing for the carpet) ... as well as some amount of lab and other work area. This included room for the HSDT TDMA earth station ... with a 4.5m dish in the back parking lot. misc. past posts mentioning HSDT
https://www.garlic.com/~lynn/subnetwork.html#hsdt

in the aftermath of the troubles of the early 90s ... there was an attempt to sell off the los gatos lab ... but eventually it was bulldozed and the setting turned into a housing development.

misc. recent posts mentioning los gatos lab.
https://www.garlic.com/~lynn/2010c.html#29 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010c.html#58 watches
https://www.garlic.com/~lynn/2010d.html#7 "Unhackable" Infineon Chip Physically Cracked - PCWorld
https://www.garlic.com/~lynn/2010d.html#21 Credit card data security: Who's responsible?
https://www.garlic.com/~lynn/2010e.html#11 Crazed idea: SDSF for z/Linux
https://www.garlic.com/~lynn/2010e.html#33 SHAREWARE at Its Finest
https://www.garlic.com/~lynn/2010f.html#27 Should the USA Implement EMV?
https://www.garlic.com/~lynn/2010f.html#61 Handling multicore CPUs; what the competition is thinking
https://www.garlic.com/~lynn/2010f.html#83 Notes on two presentations by Gordon Bell ca. 1998
https://www.garlic.com/~lynn/2010h.html#76 Software that breaks computer hardware( was:IBM 029 service manual )
https://www.garlic.com/~lynn/2010k.html#12 taking down the machine - z9 series

--
virtualization experience starting Jan1968, online at home since Mar1970

Did a mainframe glitch trigger DBS Bank outage?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Did a mainframe glitch trigger DBS Bank outage?
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Tue, 13 Jul 2010 12:21:21 -0400
crashlurks@GMAIL.COM (Chris Craddock) writes:
"just when you think you've created a fool-proof system, the universe will deliver you a superior class of fool"

human error (both configuration goofs and operational errors) is THE overwhelming cause of system problems these days. Add in application bugs and you've pretty much covered the field. Even the squatty boxes rarely if ever fail these days. People on the other hand...


a few years ago ... we had some dealings with one of the large financial networks. they had attributed their 100% availability for an extended number of years to:

• IMS hot-standby (triple replicated at geographic distance)
• automated operator

I recently mentioned that my wife had been conned into going to POK to be in charge of loosely-coupled architecture and had done the Peer-Coupled Shared Data architecture ... other past posts
https://www.garlic.com/~lynn/submain.html#shareddata

but she didn't remain very long in the position because there was little uptake, except for IMS hot-standby (until sysplex).

with significant improvements in basic hardware ... environmental conditions (like natural disasters) and human mistakes were starting to dominate failure modes (& outages).

in the early 80s, Jim had done a study of system failure modes ... and outages from other than hardware failures were already starting to dominate the statistics. scan of the overview foils:
https://www.garlic.com/~lynn/grayft84.pdf

some recent posts mentioning above:
https://www.garlic.com/~lynn/2009.html#39 repeat after me: RAID != backup
https://www.garlic.com/~lynn/2009.html#47 repeat after me: RAID != backup
https://www.garlic.com/~lynn/2009.html#65 The 25 Most Dangerous Programming Errors
https://www.garlic.com/~lynn/2009p.html#0 big iron mainframe vs. x86 servers
https://www.garlic.com/~lynn/2009q.html#26 Check out Computer glitch to cause flight delays across U.S. - MarketWatch
https://www.garlic.com/~lynn/2009q.html#28 Check out Computer glitch to cause flight delays across U.S. - MarketWatch
https://www.garlic.com/~lynn/2010f.html#68 But... that's *impossible*

when we were out marketing our HA/CMP product
https://www.garlic.com/~lynn/subtopic.html#hacmp

... I had coined the terms geographic survivability and disaster survivability
https://www.garlic.com/~lynn/submain.html#available

and was (also) asked to write a section for the corporate continuous availability strategy document. However, the section got pulled because both Rochester & POK complained (at the time, they didn't have any geographic survivability strategy).

for other topic drift, reference to Jim and me being keynotes at the NASA dependable computing workshop:
http://www.hdcc.cs.cmu.edu/may01/index.html
https://web.archive.org/web/20011004023230/http://www.hdcc.cs.cmu.edu/may01/index.html

--
virtualization experience starting Jan1968, online at home since Mar1970

Does an 'operator error' counts as a 'glitch?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Does an 'operator error' counts as a 'glitch?
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Tue, 13 Jul 2010 15:30:54 -0400
zedgarhoover@GMAIL.COM (zMan) writes:
OK, this is topic drift, but: are you saying that having stringent password requirements is a failure? Because I sure think it is -- it just encourages folks to use patterns or otherwise weak passwords and/or to write them down anyway.

I use a site that requires 8-byte passwords, changed every n days, with no more than 3 characters from the previous password in a row and at least one digit, which can't be leading or trailing. Surprise, we use ABCnnDEF, where the nn is what changes. Fortunately this isn't an important site, so I'm not worried about someone getting at it, but it's an example where the stupid restrictions fail.


re:
https://www.garlic.com/~lynn/2010l.html#4 Did a mainframe glitch trigger DBS Bank outage?

recent mention of old password rules (had been sent to me by somebody in POK):
https://www.garlic.com/~lynn/2010k.html#49 GML

reproduced in these old posts:
https://www.garlic.com/~lynn/2001d.html#52 A beautiful morning in AFM
https://www.garlic.com/~lynn/2001d.html#53 April Fools Day

from 3-factor authentication paradigm ... lots of past posts:
https://www.garlic.com/~lynn/subintegrity.html#3factor

• something you have
• something you know
• something you are

40yrs ago ... with a few something-you-know shared-secrets ... things weren't too bad ... but roll forward forty years ... and the paradigm effectively collapses with each person potentially having to memorize hundreds of different (hard to memorize/guess) pins/passwords ... misc. past posts regarding something-you-know shared-secret authentication
https://www.garlic.com/~lynn/subintegrity.html#secrets

from kindergarten security 101, each unique security domain requires a unique (something-you-know) "shared secret" as a countermeasure to cross-domain attacks (aka a local garage ISP operation being able to attack a critical commercial business).

static PIN/passwords are also vulnerable to various kinds of eavesdropping ... and the harvested value can then be used in replay attacks ... contributing to the requirements for frequent changes. frequent changes are also a countermeasure to brute force & guessing attacks (especially against "weak" passwords).
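
a toy sketch (in C; the rules are taken from the quoted post, the site's real checks are unknown, and all the names here are invented) showing how the ABCnnDEF workaround satisfies every one of the stated restrictions:

    #include <stdio.h>
    #include <string.h>
    #include <ctype.h>

    /* toy model of the quoted composition rules: exactly 8 chars, at
       least one digit that is neither leading nor trailing, no more
       than 3 characters in a row matching the previous password
       (same-position matching is an assumption) */
    static int policy_ok(const char *pw, const char *prev)
    {
        size_t len = strlen(pw);
        int inner_digit = 0, run = 0, maxrun = 0;

        if (len != 8)
            return 0;
        for (size_t i = 1; i + 1 < len; i++)
            if (isdigit((unsigned char)pw[i]))
                inner_digit = 1;
        if (!inner_digit)
            return 0;
        for (size_t i = 0; i < len && prev[i]; i++) {
            run = (pw[i] == prev[i]) ? run + 1 : 0;
            if (run > maxrun)
                maxrun = run;
        }
        return maxrun <= 3;
    }

    int main(void)
    {
        /* only the two digits change from one period to the next */
        printf("%d\n", policy_ok("ABC12DEF", "ABC01DEF"));  /* 1 */
        return 0;
    }

i.e. the rules force a digit into the middle ... and human nature promptly makes the digits the only thing that ever changes.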

in any case, the proliferation of shared-secret (something you know) authentication overwhelms human capacity to deal with the ever increasing numbers (with each unique security domain still acting as if it is the only one in the whole world with a pin/password required to be memorized).

disclaimer ... bunch of patents in the area (assigned, and in which we have no rights/interests):
https://www.garlic.com/~lynn/aadssummary.htm

--
virtualization experience starting Jan1968, online at home since Mar1970

Z/OS 31bit or 64bit

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Z/OS 31bit or 64bit
Newsgroups: bit.listserv.ibm-main
Date: 13 Jul 2010 14:24:09 -0700
m42tom-ibmmain@YAHOO.COM (Tom Marchant) writes:
Bi-modal? You mean like MVS/XA? Or do you mean that you could run it in ESA mode? From the 1.6 announcement:

"z/OS V1.6 must execute in a z/Architecture (64-bit) mode."


above/below-the-line was originally introduced with >16mbyte real storage for the 3033 ... a 64mbyte option, or "26bit" real addressing.

all instruction addressing was still 24bit (16mbyte) ... but a hack to the pagetables allowed pages to be located in real storage above 16mbytes. 24bit virtual addresses were run thru address translation ... and the pagetable hack could result in a 14bit real page number.

the original 370 (4k page option) put a 12bit virtual page number into the pagetables and got a 12bit real page number out. a page table entry was 16bits; a 12bit real page number, 2 defined bits, and 2 undefined bits. the 3033 64mbyte hack ... retasked the undefined bits as a 2bit prefix to the standard 12bit real page number ... for a 14bit real (4kbyte) page number (up to 64mbytes real). 370 IDAL was also hacked to support a 26bit i/o address (enabling direct page in/out above the 16mbyte line).
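
a rough sketch of the arithmetic in C (the bit positions are chosen purely for illustration ... not the 3033's actual PTE layout): retasking the two undefined bits as a high-order prefix turns the 12bit real page number into a 14bit one, and 14 bits of 4kbyte pages is exactly 64mbytes:

    #include <stdio.h>
    #include <stdint.h>

    /* toy 16bit page table entry: 12bit real page number, 2 defined
       bits, and 2 formerly-undefined bits retasked as a high-order
       prefix (positions here are illustrative only) */
    #define PFN_MASK     0x0FFFu        /* 12bit real page number */
    #define PREFIX_MASK  0xC000u        /* 2 retasked bits        */
    #define PREFIX_SHIFT 14

    static uint32_t real_addr(uint16_t pte, uint32_t offset)
    {
        uint32_t pfn    = pte & PFN_MASK;
        uint32_t prefix = (pte & PREFIX_MASK) >> PREFIX_SHIFT;
        uint32_t frame  = (prefix << 12) | pfn;    /* 14bit frame number  */
        return (frame << 12) | (offset & 0xFFFu);  /* 26bit real address  */
    }

    int main(void)
    {
        /* prefix=3, pfn=0xFFF: the last byte of the last 4k page */
        printf("%08x\n", (unsigned)real_addr(0xCFFF, 0xFFF)); /* 03ffffff */
        return 0;
    }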

however ... all sorts of stuff still had to be in the first 16mbytes of real storage (introducing above/below the line issues).

misc. recent posts mentioning >16mbyte support originally for 3033:
https://www.garlic.com/~lynn/2010.html#84 locate mode, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010g.html#11 Mainframe Executive article on the death of tape
https://www.garlic.com/~lynn/2010g.html#36 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2010g.html#55 Mainframe Executive article on the death of tape

for moving stuff between above & below the 16mbyte line ... there was originally a plan to do a page out and then a page back in (running the page move thru the i/o system courtesy of IDAL). I provided a little example code where pagetable entries were fiddled for above & below the line addresses ... with a MVCL for the move across the line (a lot more efficient than doing it thru the page i/o system). some old email providing the MVCL hack (for crossing the 16mbyte line; it also mentions that the 3081 & XA schedule was slipping, so additional storage was extending life/revenue for the 3033):
https://www.garlic.com/~lynn/2006t.html#email800121
in this post
https://www.garlic.com/~lynn/2006t.html#15 more than 16mbyte support for 370
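
a conceptual sketch of the trick (plain C standing in for the real thing; the pagetable fiddling and the MVCL happened in the kernel, and everything named here is invented): give one virtual window a translation to a frame below the line and another window a translation to a frame above it, and a single storage-to-storage move then crosses the line without ever touching the page i/o path:

    #include <string.h>
    #include <stdint.h>

    #define PAGE 4096
    static uint8_t real_storage[64u << 20];  /* toy 64mbyte machine  */
    static uint32_t page_table[2];           /* two virtual windows  */

    /* "fiddle" the translations: point a window at any real frame */
    static void map(int window, uint32_t frame)
    {
        page_table[window] = frame;
    }

    /* one MVCL-style move across the line, instead of a page-out
       followed by a page-in thru the i/o system */
    static void move_across_line(void)
    {
        uint8_t *below = real_storage + page_table[0] * (uint32_t)PAGE;
        uint8_t *above = real_storage + page_table[1] * (uint32_t)PAGE;
        memcpy(above, below, PAGE);          /* stand-in for MVCL */
    }

    int main(void)
    {
        map(0, 0x00FF);  /* a frame below the 16mbyte line (< 0x1000) */
        map(1, 0x2FFF);  /* a frame above it */
        move_across_line();
        return 0;
    }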

--
virtualization experience starting Jan1968, online at home since Mar1970

Idiotic programming style edicts

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Idiotic programming style edicts
Newsgroups: alt.folklore.computers
Date: Tue, 13 Jul 2010 18:20:24 -0400
Eric Chomko <pne.chomko@comcast.net> writes:
I believe that Interdata and Varian, both minicomputer makers, used the term.

I have a vague memory of a cp67 meeting in silicon valley sometime in '69 ... I was still at the univ and was sent to the meeting ... and there were local people from Lockheed and I believe Varian. Later some of the Varian people showed up in high level positions at LSI Logic using vm370.

of course, I've frequently repeated the story about (initially) using an Interdata/3 (also in 1969) as the basis for building a mainframe (clone) controller (later a combination of an interdata/4 dedicated to the channel interface and interdata/3s for the line-scanner function)
https://www.garlic.com/~lynn/submain.html#360pcm

misc. interdata references
http://www.computermuseum.li/Testpage/Interdata-Model-3-1967.htm
http://archive.computerhistory.org/resources/text/Interdata/Interdata.4.1969.102646126.pdf
http://users.speakeasy.net/~johnsonds/id32difs.html
https://en.wikipedia.org/wiki/Interdata_7/32_and_8/32

--
virtualization experience starting Jan1968, online at home since Mar1970

Idiotic programming style edicts

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Idiotic programming style edicts
Newsgroups: alt.folklore.computers
Date: Tue, 13 Jul 2010 18:32:07 -0400
Gene Wirchenko <genew@ocis.net> writes:
So did CP/M.

re:
https://www.garlic.com/~lynn/2010l.html#7 Idiotic programming style edicts

old reference to even the cp/m name coming from cp/67 (the author having earlier worked on cp/67 at npg in the early 70s):
https://www.garlic.com/~lynn/2004e.html#38 [REALLY OT!] Overuse of symbolic constants
https://www.garlic.com/~lynn/2004h.html#40 Which Monitor Would You Pick??????
https://www.garlic.com/~lynn/2006.html#48 Early microcomputer (esp i8008) software
https://www.garlic.com/~lynn/2007d.html#41 Is computer history taugh now?
https://www.garlic.com/~lynn/2009e.html#32 Gone but not forgotten: 10 operating systems the world left behind
https://www.garlic.com/~lynn/2009j.html#78 Gone but not forgotten: 10 operating systems the world left behind

old kildall threads
https://www.garlic.com/~lynn/2001b.html#52 Kildall "flying" (was Re: First OS?)
https://www.garlic.com/~lynn/2001b.html#53 Kildall "flying" (was Re: First OS?)
https://www.garlic.com/~lynn/2001b.html#60 monterey's place in computing was: Kildall "flying" (was Re: First OS?)

this reference had gone 404 ... but still lives on at the wayback machine:
https://web.archive.org/web/20071011100440/http://www.khet.net/gmc/docs/museum/en_cpmName.html

--
virtualization experience starting Jan1968, online at home since Mar1970

Age

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Age
Newsgroups: alt.folklore.computers
Date: Wed, 14 Jul 2010 09:51:44 -0400
jmfbahciv <See.above@aol.com> writes:
That will never happen with apps because those programmers have no idea what lies underneath the compiler. It will be the monitor which manages scarce resource assignments; scarce resource will also include all CPUs inside plus comm and the video card.

re:
https://www.garlic.com/~lynn/2010k.html#68 Idiotic programming style edicts

and they tend to have a very serial thought & programming process.

the holy grail for a long time has been to come up with programming paradigms that allow for asynchronous/concurrent operation w/o requiring programmers to think in other than sequential/serialized terms.

Blade, Intel Push Parallel Programming, Software's Holy Grail
http://www.networkcomputing.com/data-center/blade-intel-push-parallel-programming-so/229503067
Parallel computing
https://en.wikipedia.org/wiki/Parallel_computing

SMP kernels typically allowed for concurrent operation of independent operations (say DBMS or timesharing).

large DBMS implementations had typically required supporting "multi-threaded" operation (i.e. application level asynchronous/concurrent operations). one of the big steps forward in doing this efficiently was charlie's invention of the compare&swap instruction. lots of past posts mentioning SMP and/or the compare&swap instruction
https://www.garlic.com/~lynn/subtopic.html#smp
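
for example (a minimal sketch using C11 atomics as a stand-in for the 370 CS instruction): the canonical compare&swap loop that lets multi-threaded code serialize an update without holding a lock ...

    #include <stdatomic.h>
    #include <stdio.h>

    static _Atomic long counter;

    /* classic CS loop: fetch the old value, compute the new one, and
       let compare-and-swap install it only if nobody got there first;
       on failure "expected" is refreshed and the update is retried */
    static void add(long n)
    {
        long expected = atomic_load(&counter);
        while (!atomic_compare_exchange_weak(&counter, &expected,
                                             expected + n))
            ;  /* expected now holds the current value ... retry */
    }

    int main(void)
    {
        add(1);
        add(41);
        printf("%ld\n", atomic_load(&counter));  /* 42 */
        return 0;
    }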

also misc. past posts about working on a 5-way smp implementation in the mid-70s (that got canceled w/o being announced).
https://www.garlic.com/~lynn/submain.html#bounce

which then had a follow-up effort for 16-way smp ... that was going great guns until somebody let drop to the head of mainframe/POK land ... that it might be decades before their favorite son operating system would have 16-way smp support.

the current desktop issue is the large monolithic applications which grew up in the era of a single main processor (that would get faster every year). that has changed ... and having a desktop with dozens of processors requires complete restructuring of application programming to achieve higher thruput (some flavor of the multi-threading done by large DBMS implementations).

misc. past posts/threads:
https://www.garlic.com/~lynn/2004c.html#20 Parallel programming again (Re: Intel announces "CT" aka
https://www.garlic.com/~lynn/2007l.html#19 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007l.html#24 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007l.html#26 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007l.html#34 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007l.html#38 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007l.html#42 My Dream PC -- Chip-Based
https://www.garlic.com/~lynn/2007l.html#60 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007l.html#63 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#5 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#13 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#14 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#19 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#22 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#26 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#29 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#37 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#39 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#49 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#51 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#52 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#53 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#54 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#58 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#59 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#61 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#70 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007n.html#1 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007n.html#3 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007n.html#6 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007n.html#25 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007n.html#28 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007n.html#38 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007n.html#39 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007p.html#55 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007q.html#21 Horrid thought about Politics, President Bush, and Democrats
https://www.garlic.com/~lynn/2007v.html#7 Faster Chips Are Leaving Programmers in Their Dust
https://www.garlic.com/~lynn/2008d.html#90 Berkeley researcher describes parallel path
https://www.garlic.com/~lynn/2008f.html#48 Wintel, Universities Team On Parallel Programming
https://www.garlic.com/~lynn/2008i.html#44 Are multicore processors driving application developers to explore multithreaded programming options?
https://www.garlic.com/~lynn/2008k.html#63 Intel: an expensive many-core future is ahead of us
https://www.garlic.com/~lynn/2008k.html#72 Transactional Memory

--
virtualization experience starting Jan1968, online at home since Mar1970

Titles for the Class of 1978

From: lynn@garlic.com (Lynn Wheeler)
Date: 14 July, 2010
Subject: Titles for the Class of 1978
Blog: Order of Knights of VM
well ... lots of development on cp67/cms ... in the initial morph from cp67 to vm370 ... lots of stuff was dropped and/or simplified which then had to be added back in.

these old email mentions some of the stuff:
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

part of the issue was that a lot of the company (including the vm370 development group) was distracted by the future system effort (which was going to completely replace 370 and be as different from 370 as 360 had been from the previous generation). I continued to do 360/370 stuff during the future system period (as well as making less than complimentary comments about how feasible the blue sky stuff was, which was not particularly career enhancing). When future system finally bit the dust ... there was a mad rush to get stuff back into the 370 product pipelines (both hardware & software) ... which contributed to overcoming NIH and releasing stuff I had been doing. misc. past posts mentioning future system
https://www.garlic.com/~lynn/submain.html#futuresys

Some of the stuff that was dropped ... and that I provided early on for vm370 ... was all the interrupt & dispatch fastpath stuff that I had earlier done as an undergraduate for cp67 (and that shipped in cp67) ... which went out in VM370 release 1 PLC9. Of course, that was somewhat superseded by the vm370 performance microcode assists for the 158 & 168.

A somewhat superset was then ECPS, done for virgil/tully (138/148) ... old reference about getting asked by Endicott to come up with ECPS
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist

Things got a little complicated for a couple qrtrs ... because at almost exactly the same time I had also been asked to come up with a 5-way SMP design/support. Both the 5-way SMP support and the ECPS support could also be considered part of the mad rush in the aftermath of the future system demise. some past references to the 5-way SMP effort
https://www.garlic.com/~lynn/submain.html#bounce

For other topic drift ... HSDT effort
https://www.garlic.com/~lynn/subnetwork.html#hsdt

and working with NSF on stuff leading up to the NSFNET backbone T1 RFP (i.e. the NSFNET backbone is considered the operational precursor to the modern internet) ... some old email
https://www.garlic.com/~lynn/lhwemail.html#nsfnet

but when the NSFNET T1 RFP was released, internal politics prevented being able to bid on the RFP. The head of NSF attempted to help the situation by writing a letter to the corporation (what we already had running was at least five years ahead of all bid submissions to build the *new* NSFNET T1 backbone), copying the CEO. However, that just aggravated the internal politics.

There was vm370 TCP/IP product support implemented in vs/pascal ... which had extremely poor execution characteristics ... in part because the standard hardware box was a bridge (rather than a router). I did the RFC1044 support changes that went out in the product ... and in some tuning tests at Cray Research between a Cray and a 4341 ... I got 4341 channel thruput (about a factor of 500 times improvement/reduction in the number of instructions executed per byte moved). misc. past posts mentioning rfc 1044 support:
https://www.garlic.com/~lynn/subnetwork.html#1044

about the same time ... I had some stuff in another vendor's booth at interop '88 ... some past posts mentioning interop 88
https://www.garlic.com/~lynn/subnetwork.html#interop88

--
virtualization experience starting Jan1968, online at home since Mar1970

Titles for the Class of 1978

From: lynn@garlic.com (Lynn Wheeler)
Date: 14 July, 2010
Subject: Titles for the Class of 1978
Blog: Order of Knights of VM
re:
https://www.garlic.com/~lynn/2010l.html#10 Titles for the Class of 1978

one of the things I had been doing at the science center was supporting a highly enhanced cp67 for internal distribution. that started to drop off as more & more of the internal datacenters that I supported upgraded to 370s and moved to vm370 ... like HONE ... (eventually) the world-wide sales & marketing support system ... some past posts
https://www.garlic.com/~lynn/subtopic.html#hone

after I had moved lots of the changes from cp67 to vm370 ... I then got a lot of internal datacenters running "csc/vm" ... mentioned in these old emails:
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

now, from Melinda's history ... there is a little of the sense that some of the people from CTSS went to multics on the 5th flr of 545 tech sq ... and others went to the science center on the 4th flr of 545 tech sq ... and did virtual machines, cp40, cp67, and some number of other things ... misc. past posts mentioning 545 tech sq
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm

so there was possibly a little rivalry between the efforts on the two flrs; a little of that shows up in some stories about MIT having both cp67 and multics running in the same datacenter
http://www.multicians.org/thvv/360-67.html

To place the two efforts on a somewhat level playing field during the mid & late 70s ... one of the comparisons was the total number of all installations that ever ran Multics ... some listed here
http://www.multicians.org/sites.html

was about the same as the peak number of internal installations running csc/vm

one thing that Multics was able to come back with was having been the first platform to ship an RDBMS product (1976).
http://www.mcjones.org/System_R/mrds.html

while the original relational/sql implementation was done on a 370/145 vm370 system in bldg. 28 ... some past posts mentioning work on system/r
https://www.garlic.com/~lynn/submain.html#systemr

... there wasn't a product shipped until after the system/r technology transfer to endicott for sql/ds in the 80s

--
virtualization experience starting Jan1968, online at home since Mar1970

Idiotic programming style edicts

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Idiotic programming style edicts
Newsgroups: alt.folklore.computers
Date: Wed, 14 Jul 2010 14:26:41 -0400
Eric Chomko <pne.chomko@comcast.net> writes:
The big problem with graphics back in the 60s-70s was the cost of the terminals. I am about to put a Tektronix 4105 less the KB on eBay. It cost a fortune back in the day (early 80s).

Heck even when Apple created hi-res graphics that anyone could afford, we didn't have a real GUI until several years later.


an "inexpensive" graphics terminal was the 3277GA ... tektronix head spliced into the side of 3277 terminal; aka "inexpensive" compared to 2250s (& later 3250s). somebody's archive of old a.f.c. from 2000
http://neil.franklin.ch/Usenet/alt.folklore.computers/20001003_Tektronics_Storage_Tube_Terminals

and a research card/modification for the 3277GA
http://priorartdatabase.com/IPCOM/000050942

from above:
The Research Graphics Interface (REGI) card permits the attachment of Tektronix 4015 and 4013 graphics terminals to the IBM 3277 Graphics Attachment RPQ (GA). With the card installed, the Tektronix terminals may be used as either a Tektronix 618/619 substitute or a fully functional 4015/4013 terminal.

... snip ...

a 2250m1 was a 360 "direct" channel attached box (lots of electronics for attaching to 360 channel). for about the same price, there was a 2250m4 ... which was a 2250 sold with a 1130 computer.

a "2250m4" (aka with 1130):
http://www.columbia.edu/cu/computinghistory/2250.html
http://www.ibm1130.net/functional/DisplayUnit.html

--
virtualization experience starting Jan1968, online at home since Mar1970

Old EMAIL Index

From: lynn@garlic.com (Lynn Wheeler)
Date: 14 July, 2010
Subject: Old EMAIL Index
Blog: Order of Knights of VM
re:
https://www.garlic.com/~lynn/2010l.html#0 Old EMAIL Index

from dec79 is the SHARE LSRAD report ... image of the cover here
https://www.garlic.com/~lynn/lsradcover.jpg

I've scanned the whole document and for the past couple yrs have been trying to get somebody at SHARE to authorize putting it up on bitsavers. bitsavers has lots of IBM pubs
http://www.bitsavers.org/pdf/ibm/

and even some number of SHARE documents
http://www.bitsavers.org/pdf/ibm/share/

part of the issue is that the copyright law was changed prior to LSRAD publication, extending the copyright period (otherwise the copyright on the LSRAD report would have expired and no longer be a problem).

some 30 yr old email reference:
https://www.garlic.com/~lynn/2010g.html#email800710
https://www.garlic.com/~lynn/2006v.html#email800717

the 30yr old 7/17/80 email is related to the explosion in 4341 vm370 systems
https://www.garlic.com/~lynn/lhwemail.html#4341

Part of trying to sell MVS on the 4341 possibly contributed to the introduction of the 3375 ... aka CKD simulation on top of the 3370 FBA device. The 3370/3375 were considered "mid-range" dasd ... complementing the 4341 midrange. Since MVS didn't have FBA support ... the alternative was stepping up to 3380s (or using older 3350s). recent thread on the subject in the ibm-main mailing list
https://www.garlic.com/~lynn/2010k.html#10 Documenting the underlying FBA design of 3375, 3380 and 3390?
https://www.garlic.com/~lynn/2010k.html#17 Documenting the underlying FBA design of 3375, 3380 and 3390?

--
virtualization experience starting Jan1968, online at home since Mar1970

Age

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Age
Newsgroups: alt.folklore.computers
Date: Thu, 15 Jul 2010 11:10:11 -0400
re:
https://www.garlic.com/~lynn/2010l.html#9 Age

besides shared memory, parallelization can be clustered ... using various kinds of message passing technologies. some amount of massive scale-up can even be combinations of shared memory boxes tied together in large clusters ... grid computing and the latest "cloud" computing are along those lines.

some old email regarding cluster scale-up from the early 90s
https://www.garlic.com/~lynn/lhwemail.html#medusa

recent post in mainframe thread
https://www.garlic.com/~lynn/2010l.html#4 Did a mainframe glitch trigger DBS Bank outage

which mentions jim's overview foils from the early 80s
https://www.garlic.com/~lynn/grayft84.pdf

at the tribute/celebration for jim ... there was reference to Jim's work on transaction formalization ... which contributed significantly to financial dataprocessing ... since transaction semantics increased the auditors' comfort regarding correctness of the information.
https://www.garlic.com/~lynn/2008i.html#32 A Tribute to Jim Gray: Sometimes Nice Guyes Do FInish First
https://www.garlic.com/~lynn/2008i.html#36 A Tribute to Jim Gray: Sometimes Nice Guyes Do FInish First
https://www.garlic.com/~lynn/2008p.html#27 Father Of Financial Dataprocessing

In the 90s there were billions of dollars spent re-engineering various high-end financial processing systems. These had grown up as (cobol, sequential) batch operations that started out in the 60s. In the 70s & 80s, some number of "real-time" frontends were added ... which would generate transactions (like at ATM cash machines) for final processing by the (cobol) batch in the overnight batch window.

In the 90s, with increasing workloads and globalization, lots of pressure was being placed on the overnight batch window (more work needed to be done and the window was being shortened). The re-engineering was to implement straight-through processing (transaction to final completion, eliminating the need to finish in an overnight batch operation) done in parallel with massive numbers of "killer micros". Part of the problem was that there was no attention to speeds&feeds ... and it was only when production deployments were in progress that they realized the technologies used introduced a factor of 100 times more overhead (compared to cobol batch) ... totally swamping any anticipated thruput benefits from the massive number of parallel "killer micros". This is (at least one of the) cluster parallel analogies to parallel SMP difficulties.

However, there has been lots of work put into DBMS technologies ... not only for high-thruput parallel SMP operation ... but also for high-thruput parallel cluster operation ... some referenced in this post about a jan92 meeting
https://www.garlic.com/~lynn/95.html#13

and more recent posts
https://www.garlic.com/~lynn/2009p.html#43 From The Annals of Release No Software Before Its Time
https://www.garlic.com/~lynn/2009p.html#46 From The Annals of Release No Software Before Its Time
https://www.garlic.com/~lynn/2010j.html#49 Article says mainframe most cost-efficient platform
https://www.garlic.com/~lynn/2010k.html#3 Assembler programs was Re: Delete all members of a PDS that is allocated
https://www.garlic.com/~lynn/2010k.html#54 Unix systems and Serialization mechanism

One of the scenarios is leveraging the "transaction" paradigm as a mechanism for breaking up operations into fine-grain work units that can be efficiently parallelized by DBMS systems. A few years ago, I was involved in a demonstration of some technology that took high level financial business specifications and translated them into fine-grain SQL transactions ... which then got massive thruput on clusters of SMP boxes (the type of work attempted in the 90s for straight-through processing) with only a nominal increase in overhead (compared to cobol batch).
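
a toy sketch of that decomposition (C with pthreads; the accounts, amounts, and partitioning rule are all invented for illustration): the sequential batch pass gets cut into per-account work units, and since transactions against different accounts never conflict, worker threads can apply them in parallel without coordination (compile with -pthread):

    #include <pthread.h>
    #include <stdio.h>

    #define ACCOUNTS 8
    #define WORKERS  4

    static long balance[ACCOUNTS];

    /* each worker owns a disjoint slice of the accounts, so its
       fine-grain "transactions" commute with every other worker's */
    static void *worker(void *arg)
    {
        long id = (long)arg;
        for (int a = (int)id; a < ACCOUNTS; a += WORKERS)
            balance[a] += 100;       /* one fine-grain transaction */
        return NULL;
    }

    int main(void)
    {
        pthread_t t[WORKERS];
        for (long i = 0; i < WORKERS; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (int i = 0; i < WORKERS; i++)
            pthread_join(t[i], NULL);
        printf("%ld\n", balance[0]); /* 100 */
        return 0;
    }

the moment transactions span accounts (or partitions), the coordination cost comes back ... which is where the distributed lock manager work below comes in.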

some number of past posts mentioning distributed lock manager (as part of dbms cluster scale-up thruput):
https://www.garlic.com/~lynn/2001c.html#66 KI-10 vs. IBM at Rutgers
https://www.garlic.com/~lynn/2001e.html#2 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001j.html#47 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#5 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2002e.html#67 Blade architectures
https://www.garlic.com/~lynn/2002f.html#1 Blade architectures
https://www.garlic.com/~lynn/2002k.html#8 Avoiding JCL Space Abends
https://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2004i.html#2 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004m.html#0 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004m.html#5 Tera
https://www.garlic.com/~lynn/2004q.html#10 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004q.html#70 CAS and LL/SC
https://www.garlic.com/~lynn/2005.html#40 clusters vs shared-memory (was: Re: CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE))
https://www.garlic.com/~lynn/2005h.html#26 Crash detection by OS
https://www.garlic.com/~lynn/2005i.html#42 Development as Configuration
https://www.garlic.com/~lynn/2006c.html#8 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006c.html#41 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006j.html#20 virtual memory
https://www.garlic.com/~lynn/2006o.html#32 When Does Folklore Begin???
https://www.garlic.com/~lynn/2006o.html#62 Greatest Software, System R
https://www.garlic.com/~lynn/2006x.html#3 Why so little parallelism?
https://www.garlic.com/~lynn/2007c.html#42 Keep VM 24X7 365 days
https://www.garlic.com/~lynn/2007i.html#61 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007l.html#19 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007l.html#24 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#55 Capacity and Relational Database
https://www.garlic.com/~lynn/2007n.html#49 VLIW pre-history
https://www.garlic.com/~lynn/2007q.html#33 Google And IBM Take Aim At Shortage Of Distributed Computing Skills
https://www.garlic.com/~lynn/2007s.html#46 "Server" processors for numbercrunching?
https://www.garlic.com/~lynn/2007v.html#42 Newbie question about db normalization theory: redundant keys OK?
https://www.garlic.com/~lynn/2007v.html#43 distributed lock manager
https://www.garlic.com/~lynn/2007v.html#47 MTS memories
https://www.garlic.com/~lynn/2008b.html#69 How does ATTACH pass address of ECB to child?
https://www.garlic.com/~lynn/2008d.html#25 Remembering The Search For Jim Gray, A Year Later
https://www.garlic.com/~lynn/2008d.html#70 Time to rewrite DBMS, says Ingres founder
https://www.garlic.com/~lynn/2008g.html#56 performance of hardware dynamic scheduling
https://www.garlic.com/~lynn/2008h.html#91 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008i.html#18 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008k.html#63 Intel: an expensive many-core future is ahead of us
https://www.garlic.com/~lynn/2008r.html#71 Curiousity: largest parallel sysplex around?
https://www.garlic.com/~lynn/2009.html#3 Is SUN going to become x86'ed ??
https://www.garlic.com/~lynn/2009b.html#40 "Larrabee" GPU design question
https://www.garlic.com/~lynn/2009h.html#26 Natural keys vs Aritficial Keys
https://www.garlic.com/~lynn/2009k.html#36 Ingres claims massive database performance boost
https://www.garlic.com/~lynn/2009k.html#67 Disksize history question
https://www.garlic.com/~lynn/2009m.html#39 ACP, One of the Oldest Open Source Apps
https://www.garlic.com/~lynn/2009m.html#84 A Faster Way to the Cloud
https://www.garlic.com/~lynn/2009o.html#57 U.S. begins inquiry of IBM in mainframe market
https://www.garlic.com/~lynn/2010b.html#32 Happy DEC-10 Day

some of past posts discussing the overnight batch window and straight-through processing failures in the 90s
https://www.garlic.com/~lynn/2006s.html#40 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2007e.html#31 Quote from comp.object
https://www.garlic.com/~lynn/2007l.html#15 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007m.html#36 Future of System/360 architecture?
https://www.garlic.com/~lynn/2007u.html#19 Distributed Computing
https://www.garlic.com/~lynn/2007u.html#21 Distributed Computing
https://www.garlic.com/~lynn/2007u.html#37 folklore indeed
https://www.garlic.com/~lynn/2007u.html#44 Distributed Computing
https://www.garlic.com/~lynn/2007u.html#61 folklore indeed
https://www.garlic.com/~lynn/2007v.html#19 Education ranking
https://www.garlic.com/~lynn/2007v.html#27 folklore indeed
https://www.garlic.com/~lynn/2007v.html#64 folklore indeed
https://www.garlic.com/~lynn/2007v.html#69 Controlling COBOL DDs named SYSOUT
https://www.garlic.com/~lynn/2007v.html#72 whats the world going to do when all the baby boomers retire
https://www.garlic.com/~lynn/2007v.html#81 Tap and faucet and spellcheckers
https://www.garlic.com/~lynn/2008b.html#74 Too much change opens up financial fault lines
https://www.garlic.com/~lynn/2008c.html#92 CPU time differences for the same job
https://www.garlic.com/~lynn/2008d.html#30 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008d.html#31 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008d.html#73 Price of CPU seconds
https://www.garlic.com/~lynn/2008d.html#87 Berkeley researcher describes parallel path
https://www.garlic.com/~lynn/2008d.html#89 Berkeley researcher describes parallel path
https://www.garlic.com/~lynn/2008g.html#55 performance of hardware dynamic scheduling
https://www.garlic.com/~lynn/2008h.html#50 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008h.html#56 Long running Batch programs keep IMS databases offline
https://www.garlic.com/~lynn/2008p.html#26 What is the biggest IT myth of all time?
https://www.garlic.com/~lynn/2008p.html#30 Automation is still not accepted to streamline the business processes... why organizations are not accepting newer technolgies?
https://www.garlic.com/~lynn/2008r.html#7 If you had a massively parallel computing architecture, what unsolved problem would you set out to solve?
https://www.garlic.com/~lynn/2009.html#87 Cleaning Up Spaghetti Code vs. Getting Rid of It
https://www.garlic.com/~lynn/2009c.html#43 Business process re-engineering
https://www.garlic.com/~lynn/2009d.html#14 Legacy clearing threat to OTC derivatives warns State Street
https://www.garlic.com/~lynn/2009f.html#55 Cobol hits 50 and keeps counting
https://www.garlic.com/~lynn/2009h.html#1 z/Journal Does it Again
https://www.garlic.com/~lynn/2009h.html#2 z/Journal Does it Again
https://www.garlic.com/~lynn/2009i.html#21 Why are z/OS people reluctant to use z/OS UNIX?
https://www.garlic.com/~lynn/2009i.html#23 Why are z/OS people reluctant to use z/OS UNIX? (Are settlements a good argument for overnight batch COBOL ?)
https://www.garlic.com/~lynn/2009i.html#26 Why are z/OS people reluctant to use z/OS UNIX?
https://www.garlic.com/~lynn/2009i.html#30 Why are z/OS people reluctant to use z/OS UNIX?
https://www.garlic.com/~lynn/2009i.html#38 Why are z/OS people reluctant to use z/OS UNIX?
https://www.garlic.com/~lynn/2009i.html#43 Why are z/OS people reluctant to use z/OS UNIX? (Are settlements a good argument for overnight batch COBOL ?)
https://www.garlic.com/~lynn/2009i.html#60 In the USA "financial regulator seeks power to curb excess speculation."
https://www.garlic.com/~lynn/2009l.html#57 IBM halves mainframe Linux engine prices
https://www.garlic.com/~lynn/2009m.html#81 A Faster Way to the Cloud
https://www.garlic.com/~lynn/2009n.html#13 UK issues Turning apology (and about time, too)
https://www.garlic.com/~lynn/2009o.html#81 big iron mainframe vs. x86 servers
https://www.garlic.com/~lynn/2009p.html#57 MasPar compiler and simulator
https://www.garlic.com/~lynn/2009q.html#67 Now is time for banks to replace core system according to Accenture
https://www.garlic.com/~lynn/2009r.html#35 70 Years of ATM Innovation
https://www.garlic.com/~lynn/2009r.html#47 70 Years of ATM Innovation
https://www.garlic.com/~lynn/2010.html#77 Korean bank Moves back to Mainframes (...no, not back)
https://www.garlic.com/~lynn/2010b.html#16 How long for IBM System/360 architecture and its descendants?
https://www.garlic.com/~lynn/2010b.html#19 STEM crisis
https://www.garlic.com/~lynn/2010e.html#77 Madoff Whistleblower Book
https://www.garlic.com/~lynn/2010f.html#56 Handling multicore CPUs; what the competition is thinking
https://www.garlic.com/~lynn/2010g.html#37 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2010h.html#47 COBOL - no longer being taught - is a problem
https://www.garlic.com/~lynn/2010h.html#78 Software that breaks computer hardware( was:IBM 029 service manual )
https://www.garlic.com/~lynn/2010i.html#41 Idiotic programming style edicts

--
virtualization experience starting Jan1968, online at home since Mar1970

Age

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Age
Newsgroups: alt.folklore.computers
Date: Thu, 15 Jul 2010 14:31:49 -0400
"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
This wasn't limited to IBM. Univac salesmen were at least as bad for lowballing requirements - and then we technical types had to try to make it work anyway.

re:
https://www.garlic.com/~lynn/2010l.html#9 Age
https://www.garlic.com/~lynn/2010l.html#14 Age

the science center did a lot of different kinds of work on thruput & performance ... including stuff that would evolve into things like capacity planning. some of it was algorithm work, other parts were lots & lots of statistics gathering, as well as various kinds of modeling work ... misc. past posts mentioning the science center:
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm

one of the areas was a system analytical model implemented in cms\apl ... which migrated to apl\cms and the various others as newer versions came out. a flavor of the implementation was packaged and made available on the world-wide sales&marketing support (online virtual machine based) HONE system ... some past posts
https://www.garlic.com/~lynn/subtopic.html#hone

as the performance predictor. sales/marketing could input information about customer workload & configuration and then ask "what-if" questions about what happens with changes to workload &/or configuration (the type of thing that is currently frequently done using some sort of spreadsheet based implementation ... in fact ... a lot of the stuff done back then in APL is currently done using spreadsheets).

in the late cp67 period and transition to vm370 at the science center ... and creation of csc/vm ... some old email refs:
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

I had done a lot of work on automated benchmarking and being able to specify workloads (using a synthetic workload generator, with profile information from a large number of real systems) and configuration parameters. some past posts
https://www.garlic.com/~lynn/submain.html#bench

in the base product morph from cp67 to vm370 ... a lot of stuff had been dropped and/or simplified. also in the "future system" period ... large parts of the corporation got caught up (including the vm370 product development group) ... and the 370 hardware & software product pipelines had been allowed to go dry (in part because the party line was that the radically different "future system" would replace/obsolete 370).
https://www.garlic.com/~lynn/submain.html#futuresys

I had a fairly jaundiced view of a lot of the "future system" as a lot of fluff with little or no substance behind it ... and continued to do 360/370 work during the "future system" period. Then with the demise of "future system" ... there was a mad rush to get stuff back into the 370 hardware/software product pipelines ... which contributed to overcoming NIH and a decision to pick up and ship some amount of the stuff that I had been doing. There were also various calls from the user groups, like SHARE ... to have vm370 "upgraded" with a bunch of stuff I had done and had shipped in cp67.

Part of that effort was to package up some of the stuff I had been doing and release it as a separately priced kernel component. In the wake of the 23jun69 unbundling announcement, the corporation had managed to make the case (with the gov.) that kernel software should still be free. However, part of the "future system" distraction contributed to allowing mainframe clone processors to get a market foothold. My kernel "resource manager" was then selected to be the initial guinea pig for separately priced kernel components (eventually leading to the whole kernel being priced ... followed by stopping shipment of source ... and the "OCO-wars"). misc. past posts mentioning unbundling
https://www.garlic.com/~lynn/submain.html#unbundle

In any case, on the final pass getting ready to ship my resource manager ... I did 2000 automated benchmarks that took three months elapsed time. The first 1000 or so had been manually selected, in large part based on large amounts of live system data from a large number of different systems; aka sort of do a graph with multiple axes for the various different resources and activities ... then do a scatter-plot of the data from the large number of different systems ... then select benchmarks along the extreme perimeter of observed values, numerous benchmarks outside all observed values ... and a large number of randomly selected benchmarks from within the observed operating envelope.

Along the way, a modified version of the performance predictor would predict how the system should perform before the benchmark started and then compare that with the actual benchmark results. The final 1000 benchmarks were generated by the modified performance predictor ... based on the benchmarks run to-date ... looking for things like possibly anomalous operating points.
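
a minimal sketch of that selection-plus-validation loop (python, purely illustrative ... the two axes, the stand-in model, and all numbers below are my assumptions, not anything from the actual APL implementation or live system data):

import random

random.seed(0)
# made-up observed operating points from "live systems": (cpu_util, paging_rate)
observed = [(random.uniform(0.2, 0.9), random.uniform(5.0, 120.0)) for _ in range(500)]

cpu_lo = min(p[0] for p in observed); cpu_hi = max(p[0] for p in observed)
pg_lo = min(p[1] for p in observed); pg_hi = max(p[1] for p in observed)

benchmarks = []
# benchmarks along the extreme perimeter of observed values
for i in range(11):
    cpu = cpu_lo + (cpu_hi - cpu_lo) * i / 10
    benchmarks += [(cpu, pg_lo), (cpu, pg_hi)]
# benchmarks outside all observed values
benchmarks += [(cpu_hi * 1.25, pg_hi * 1.5), (cpu_hi * 1.25, pg_lo * 0.5)]
# randomly selected benchmarks from within the observed operating envelope
benchmarks += [(random.uniform(cpu_lo, cpu_hi), random.uniform(pg_lo, pg_hi))
               for _ in range(25)]

def predict(cpu, paging):
    """stand-in for the analytical model: thruput degrades as paging climbs."""
    return 100.0 * cpu / (1.0 + paging / 50.0)

def run_benchmark(cpu, paging):
    """stand-in for actually driving the synthetic workload (noise added)."""
    return predict(cpu, paging) * random.uniform(0.90, 1.10)

# predict before each run, compare with actual, flag anomalous operating points
for cpu, paging in benchmarks:
    expected, actual = predict(cpu, paging), run_benchmark(cpu, paging)
    if abs(actual - expected) / expected > 0.08:
        print(f"anomalous? cpu={cpu:.2f} paging={paging:.1f} "
              f"predicted={expected:.1f} actual={actual:.1f}")

the real version worked over many more resource/activity axes ... but the shape of the loop ... predict, run, compare, flag ... is the same idea.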

misc. past posts mentioning performance predictor:
https://www.garlic.com/~lynn/2001i.html#46 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2002b.html#64 ... the need for a Museum of Computer Software
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002q.html#28 Origin of XAUTOLOG (x-post)
https://www.garlic.com/~lynn/2003g.html#15 Disk capacity and backup solutions
https://www.garlic.com/~lynn/2003p.html#29 Sun researchers: Computers do bad math ;)
https://www.garlic.com/~lynn/2004g.html#42 command line switches [Re: [REALLY OT!] Overuse of symbolic constants]
https://www.garlic.com/~lynn/2004k.html#31 capacity planning: art, science or magic?
https://www.garlic.com/~lynn/2004o.html#10 Multi-processor timing issue
https://www.garlic.com/~lynn/2005d.html#1 Self restarting property of RTOS-How it works?
https://www.garlic.com/~lynn/2005d.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#33 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#48 Secure design
https://www.garlic.com/~lynn/2005h.html#1 Single System Image questions
https://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005j.html#12 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005k.html#17 More on garbage collection
https://www.garlic.com/~lynn/2005o.html#30 auto reIPL
https://www.garlic.com/~lynn/2005o.html#34 Not enough parallelism in programming
https://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage
https://www.garlic.com/~lynn/2006b.html#17 {SPAM?} Re: Expanded Storage
https://www.garlic.com/~lynn/2006f.html#22 A very basic question
https://www.garlic.com/~lynn/2006f.html#30 A very basic question
https://www.garlic.com/~lynn/2006g.html#34 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#25 The Pankian Metaphor
https://www.garlic.com/~lynn/2006l.html#3 virtual memory
https://www.garlic.com/~lynn/2006o.html#23 Strobe equivalents
https://www.garlic.com/~lynn/2006o.html#25 CPU usage for paging
https://www.garlic.com/~lynn/2006s.html#24 Curiousity: CPU % for COBOL program
https://www.garlic.com/~lynn/2006t.html#28 Why these original FORTRAN quirks?
https://www.garlic.com/~lynn/2007k.html#65 Non-Standard Mainframe Language?
https://www.garlic.com/~lynn/2007r.html#68 High order bit in 31/24 bit address
https://www.garlic.com/~lynn/2007s.html#41 Age of IBM VM
https://www.garlic.com/~lynn/2007u.html#21 Distributed Computing
https://www.garlic.com/~lynn/2008c.html#24 Job ad for z/OS systems programmer trainee
https://www.garlic.com/~lynn/2008l.html#81 Intel: an expensive many-core future is ahead of us
https://www.garlic.com/~lynn/2008m.html#42 APL
https://www.garlic.com/~lynn/2008p.html#41 Automation is still not accepted to streamline the business processes... why organizations are not accepting newer technologies?
https://www.garlic.com/~lynn/2009d.html#5 Why do IBMers think disks are 'Direct Access'?
https://www.garlic.com/~lynn/2009h.html#76 A Math Geek's Plan to Save Wall Street's Soul
https://www.garlic.com/~lynn/2009l.html#43 SNA: conflicting opinions
https://www.garlic.com/~lynn/2009r.html#17 How to reduce the overall monthly cost on a System z environment?
https://www.garlic.com/~lynn/2010d.html#62 LPARs: More or Less?
https://www.garlic.com/~lynn/2010j.html#81 Percentage of code executed that is user written was Re: Delete all members of a PDS that is allocated
https://www.garlic.com/~lynn/2010k.html#8 Idiotic programming style edicts

--
virtualization experience starting Jan1968, online at home since Mar1970

History--automated payroll processing by other than a computer?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: History--automated payroll processing by other than a computer?
Newsgroups: alt.folklore.computers
Date: Thu, 15 Jul 2010 16:29:41 -0400
re:
https://www.garlic.com/~lynn/2010k.html#58 History--automated payroll processing by other than a computer?
https://www.garlic.com/~lynn/2010k.html#63 History--automated payroll processing by other than a computer?

running commentary on a tv business program just now made statements that the passage of the financial reform bill reaffirms congress's role as a corrupt institution ... given the enormous amounts of money that the financial sector poured into congress and that the reform bill didn't address any of the important issues that resulted in the recent financial disaster ... which pretty much assures that in a few years, the recent financial meltdown will repeat.

--
virtualization experience starting Jan1968, online at home since Mar1970

History--automated payroll processing by other than a computer?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: History--automated payroll processing by other than a computer?
Newsgroups: alt.folklore.computers
Date: Fri, 16 Jul 2010 09:06:00 -0400
jmfbahciv <See.above@aol.com> writes:
My question is how long will this cycle of corruption take before it collapses? First the 1000 people, which Lynn writes about, have to identify and then build their infrastructure around the loopholes. Then it takes a number of years of skimming the money pots before the rest of the traders notice it. Then they do their own skimming until the bubble bursts. Another question is are the cycle times becoming shorter. It takes about 5 years for the original skimmers to establish and collect.

re:
https://www.garlic.com/~lynn/2010k.html#58 History--automated payroll processing by other than a computer?
https://www.garlic.com/~lynn/2010k.html#63 History--automated payroll processing by other than a computer?
https://www.garlic.com/~lynn/2010l.html#16 History--automated payroll processing by other than a computer?

commentary was that the legislation did nothing to address personal responsibility of the people involved (being able to take extremely risky positions, pocket the profits, and have the institution/country/economy absorb the losses); nothing about packaging toxic CDOs and representing them as something else; nothing about being able to package toxic CDOs to fail, selling them as something else, and then taking bets that they would fail (the example used was hiring a company to fix a wiring fire hazard in your house and having them take out fire insurance on your house when they are done ... aka betting that it will burn down); nothing about breaking up the too-big-to-fail institutions ... especially going back to having the safety and security of regulated depository institutions separated from risky, unregulated investment banking operations.

here is recent item on the subject:
http://baselinescenario.com/2010/07/15/tim-geithner%E2%80%99s-ninth-political-life/

above mentions E. Warren might be a candidate for one of the regulator positions ... some past refs to E. Warren
https://www.garlic.com/~lynn/2010c.html#23 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010c.html#26 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010g.html#18 The 2010 Census
https://www.garlic.com/~lynn/2010g.html#62 The 2010 Census

... recent posts with a series of news article snips about the chairperson of the Commodity Futures Trading Commission being replaced with somebody more sympathetic.
https://www.garlic.com/~lynn/2010f.html#54 The 2010 Census
https://www.garlic.com/~lynn/2010h.html#28 Our Pecora Moment
https://www.garlic.com/~lynn/2010h.html#67 The Python and the Mongoose: it helps if you know the rules of engagement

a few recent references to Sarbanes-Oxley adding more authority to SEC to prevent future Enrons ... and GAO looking at some of it:
https://www.garlic.com/~lynn/2010.html#36 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010b.html#81 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010f.html#33 The 2010 Census
https://www.garlic.com/~lynn/2010h.html#15 The Revolving Door and S.E.C. Enforcement
https://www.garlic.com/~lynn/2010h.html#16 The Revolving Door and S.E.C. Enforcement
https://www.garlic.com/~lynn/2010h.html#67 The Python and the Mongoose: it helps if you know the rules of engagement
https://www.garlic.com/~lynn/2010i.html#84 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010k.html#46 Snow White and the Seven Dwarfs

--
virtualization experience starting Jan1968, online at home since Mar1970

Old EMAIL Index

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 16 July, 2010
Subject: Old EMAIL Index
Blog: Order of Knights of VM
re:
https://www.garlic.com/~lynn/2010l.html#0 Old EMAIL Index
https://www.garlic.com/~lynn/2010l.html#13 Old EMAIL Index

from Melinda's history (bottom pg. 65):
http://www.leeandmelindavarian.com/Melinda/25paper.listing
ADSM had its origins in about 1983 as a program called CMSBACK, written by two VM support people at IBM's Almaden Research, Rob Rees and Michael Penner. CMSBACK allowed CMS users to back up and restore their own files. It quite naturally became the basis a few years later for a program to allow workstation users to back up and restore their files, which was announced as the WDSF/VM product in 1990. By that time, VM Development in Endicott had gotten involved, doing much of the work required to bring the new product to market.

... snip ...

CMSBACK 1983 was something like version 3 or 4 ... of what was installed/distributed internally ... old email refs ... starting back in the late 70s:
https://www.garlic.com/~lynn/lhwemail.html#cmsback

version 1 ... started out installed at SJR and HONE in the late 70s ... i.e. I had a hobby of distributing and supporting highly enhanced systems internally. One of those was the world-wide sales & marketing support (virtual machine based) HONE system (dating back to when it first started on cp67) ... misc. past posts
https://www.garlic.com/~lynn/subtopic.html#hone

there was a joke in the period that I worked four-shift weeks ... 1st shift at SJR/bldg28, 2nd shift at disk engineering (bldgs 14&15), 3rd shift in santa teresa (bldg. 90), and 4th shift up the valley at HONE. HONE had consolidated its various US datacenters in silicon valley in the mid-70s (sometimes I would ride-share with somebody that worked at HONE but lived in almaden valley).

misc. past posts mentioning getting to play disk engineer in bldg. 14/15
https://www.garlic.com/~lynn/subtopic.html#disk

misc. past posts mentioning various things related to original relational/sql implementation
https://www.garlic.com/~lynn/submain.html#systemr

a couple old emails related to Jim palming off some number of things when he left for Tandem ... including consulting to the IMS group in STL
https://www.garlic.com/~lynn/2007.html#email801006
https://www.garlic.com/~lynn/2007.html#email801016

some old HONE related email
https://www.garlic.com/~lynn/lhwemail.html#hone

As an aside ... the person that was assigned to work with me on CMSBACK version 2 ... later left and did some similar products for other vendors. There was a period in the 80s when Endicott was looking at logo'ing a product (prior to WDSF) ... evaluating CMSBACK against one of the products this person had done for another vendor ... and they chose the outside vendor product.

--
virtualization experience starting Jan1968, online at home since Mar1970

Old EMAIL Index

Refed: **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 16 July, 2010
Subject: Old EMAIL Index
Blog: Order of Knights of VM
re:
https://www.garlic.com/~lynn/2010l.html#0 Old EMAIL Index
https://www.garlic.com/~lynn/2010l.html#13 Old EMAIL Index
https://www.garlic.com/~lynn/2010l.html#18 Old EMAIL Index

random trivia ... do a map search on the facebook bldg. address ... getting a satellite image ... the bldg. next door was where the HONE datacenter moved to in the mid-70s (that bldg. is ...)

additional (long-winded) CMSBACK trivia ...

release 1 CMSBACK used an extensively modified version of VMFPLC, both for performance and to minimize lost space (interrecord gaps) on tape. The tape lost space didn't start out being so important at SJR/bldg.28 ... but it was extremely significant at HONE ... which was starting to push 40,000 defined userids at the time (and a large disk farm that needed backing up). misc. past posts mentioning HONE
https://www.garlic.com/~lynn/subtopic.html#hone
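
a rough back-of-envelope on the interrecord gap point (the densities and gap sizes are nominal 9-track figures assumed for illustration ... not numbers from the post):

# fraction of tape actually holding data, for a given block size:
# each physical block is (block_bytes / bpi) inches of data followed
# by a fixed interrecord gap of erased tape
def tape_utilization(block_bytes, bpi, gap_inches):
    data_inches = block_bytes / bpi
    return data_inches / (data_inches + gap_inches)

# assumed nominal figures: 1600 bpi with 0.6" gaps, 6250 bpi with 0.3" gaps
for bpi, gap in [(1600, 0.6), (6250, 0.3)]:
    for block in [800, 4096, 65536]:
        print(f"{bpi:>4} bpi, {block:>5}-byte blocks: "
              f"{tape_utilization(block, bpi, gap):6.1%} data")

small blocks waste most of the tape on gaps ... which is why heavier blocking mattered so much when backing up a disk farm for 40,000 userids.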

now there is a little history with VMFPLC. In the aftermath of the demise of "future system" ... some past posts
https://www.garlic.com/~lynn/submain.html#futuresys

there was a mad rush to get stuff back into the 370 hardware&software product pipelines ... this contributed to overcoming NIH and including a small subset of the stuff I had been doing in vm370 release 3 (non-ipl way of doing shared segments, hacked lots of cms code so it could reside in r/o shared segment, etc). the other thing that happened was that the favorite son operating system in POK convinced the corporation that in order to make their XA schedule ... the vm370 product had to be killed, the vm370 development group shut down, and all the people moved to POK to support their XA development.

Some number of people didn't leave the area ... and there is a joke about the head of POK being a major contributor to (DEC) VMS (because of the number of people from the vm370 development group that went to work on VMS). Some number of things were lost in the shutdown ... one of the people that went to DEC had done a major enhancement to OS simulation ... which just vanished in the shutdown. Now Endicott managed to save the VM370 product mission ... in part because of the exploding number of VM370 systems in the midrange ... but they effectively had to reconstitute a development group from scratch. So another thing that was "lost" in the burlington mall site shutdown ... was the VMFPLC source.

The only apparent surviving VMFPLC source was a copy that I had gotten and made some small performance tweaks to ... for running with the CMS paged-mapped filesystem (something I had originally done on cp67 and ported to vm370).

misc. past posts mentioning cms paged-mapped filesystem
https://www.garlic.com/~lynn/submain.html#mmap

... other burlington mall trivia ...

the cp67 development group had split off from the science center ... on the 4th flr of 545 tech sq ... and moved to the 3rd floor, taking over/absorbing the Boston Programming Center (which had done stuff like CPS). They continued to grow and eventually moved out to Burlington Mall ... taking over the old vacant SBC (service bureau corporation) bldg. Along the way, they were eventually renamed the vm370 development group (once virtual memory for 370 was announced).

As part of killing off the vm370 product and moving all the people to POK (under the excuse that otherwise the POK favorite son operating system wouldn't be able to make its XA ship date) ... the plan was to delay the shutdown notice until just before people had to move (to minimize the number of people that might be able to find other employment and not move). Unfortunately, somebody leaked the news to Burlington several months early. The last couple months in Burlington were like a morgue ... in part because of the witch hunt trying to find out who leaked the news.

--
virtualization experience starting Jan1968, online at home since Mar1970

Old EMAIL Index

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 16 July, 2010
Subject: Old EMAIL Index
Blog: Order of Knights of VM
re:
https://www.garlic.com/~lynn/2010l.html#0 Old EMAIL Index
https://www.garlic.com/~lynn/2010l.html#13 Old EMAIL Index
https://www.garlic.com/~lynn/2010l.html#18 Old EMAIL Index
https://www.garlic.com/~lynn/2010l.html#19 Old EMAIL Index

HONE was one of the early csc/vm adopters ... after having moved off a csc cp67 system when vm370 on 370s became available (and then, when I moved to sjr ... moved to the "sjr/vm" system). old postings mentioning HONE
https://www.garlic.com/~lynn/subtopic.html#hone

some old email about moving stuff from cp67 to vm370 for csc/vm:
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

the consolidated HONE systems in silicon valley were also the largest single-system-image operation at the time ... some past posts reference
https://www.garlic.com/~lynn/2009p.html#43 From The Annals of Release No Software Before Its Time
https://www.garlic.com/~lynn/2009p.html#46 From The Annals of Release No Software Before Its Time

however, a major motivation for HONE to be an early csc/vm adopter (besides csc/vm just having lots of enhancements) was the ability to have shared segments w/o needing IPL. HONE had implemented a heavily customized sales/marketing support user interface in APL. The majority of HONE accounts were set up to automatically IPL (by name) a CMS variant that included the definition of the APL shared segments.

However, somebody had looked at a number of the extremely compute-intensive HONE AIDS and reprogrammed them in Fortran ... getting a significant CPU reduction. The problem was to get a sales/marketing person out of the (ipl-by-name) APLCMS image into a normal CMS image to run the Fortran application and then back into the APLCMS image ... w/o having to explain to the sales/marketing force what an "IPL" command was.

Part of the stuff converted from CP67 to VM370 was a bunch of changes to support the CMS paged-mapped filesystem, with the shared-segment gorp handled as part of normal CMS program loading. This allowed a normal, automatic CMS "ipl-by-name" ... with the APL shared-segment setup occurring as part of loading the CMS APL executable from a cms paged-mapped filesystem. It was then trivial to do an implementation that transparently dropped out of APL ... executed some Fortran ... and re-entered APL ... totally transparent to the sales/marketing user in the field.

As part of the demise of future system ... and the later mad rush to get stuff back into the 370 product hardware/software pipeline ... it overcame some amount of NIH and various bits and pieces of my stuff were picked for release. A very small subset of the paged-mapped filesystem stuff was selected for vm370 release 3 ... w/o the actual filesystem changes ... just the enhancements for CMS shared segments. Instead of using the paged-mapped filesys, the "namesys" stuff was used (aka the shared-segment definition and the image to be loaded had to be a namesys thing), a new interface to CP called DCSS was created (to access namesys stuff w/o the IPL command) ... and a lot of the CMS application code changes allowing stuff to execute in R/O shared segments were included.

misc. past posts mentioning CMS paged-mapped filesystem stuff
https://www.garlic.com/~lynn/submain.html#mmap

misc. other past posts mentioning some of the shared-segment issues
https://www.garlic.com/~lynn/submain.html#adcon

... specifically with respect to having location independent execution of read-only shared code.

--
virtualization experience starting Jan1968, online at home since Mar1970

Titles for the Class of 1978

From: lynn@garlic.com (Lynn Wheeler)
Date: 16 July, 2010
Subject: Titles for the Class of 1978
Blog: Order of Knights of VM
re:
https://www.garlic.com/~lynn/2010l.html#10 Titles for the Class of 1978
https://www.garlic.com/~lynn/2010l.html#11 Titles for the Class of 1978

1978 may have been for the resource manager ... a lot of which I had done as an undergraduate in the 60s on cp67 ... and which was dropped in the morph from cp67 to vm370. SHARE lobbied heavily to allow me to re-release the resource manager stuff ... which eventually happened; it went out as a separately priced kernel option ... the guinea pig for the changeover to start charging for kernel software.

There was a major problem come release 4 and shipping multiprocessor support. I had included a lot of stuff in the resource manager that was used to build multiprocessor support (but not the actual multiprocessor stuff itself). The initial policy for kernel software charging was that direct hardware support would still be free ... but other (new) stuff could be charged for. Making the charged-for resource manager a prereq for the free release 4 multiprocessor support ... would effectively be a violation of that policy. The eventual resolution was that something like 90% of the lines-of-code from the release 3 resource manager were moved into the free base release 4 kernel ... w/o changing the price charged for the (release 4) resource manager.

Old posts mentioning getting con'ed into doing 5-way multiprocessor support at about the same time I was doing other csc/vm stuff as well as ECPS (the 5-way was never announced since the hardware got canceled):
https://www.garlic.com/~lynn/submain.html#bounce

--
virtualization experience starting Jan1968, online at home since Mar1970

Old EMAIL Index

Refed: **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 16 July, 2010
Subject: Old EMAIL Index
Blog: Order of Knights of VM
re:
https://www.garlic.com/~lynn/2010l.html#0 Old EMAIL Index
https://www.garlic.com/~lynn/2010l.html#13 Old EMAIL Index
https://www.garlic.com/~lynn/2010l.html#18 Old EMAIL Index
https://www.garlic.com/~lynn/2010l.html#19 Old EMAIL Index
https://www.garlic.com/~lynn/2010l.html#20 Old EMAIL Index

for whatever reason ... I was still the responsible person getting these notifications as of the start of 1983.

Date: 01/12/83 15:20:18
To: wheeler@sjrlvm4
From: sysclock@sjrlvm4

SUBJECT: SYSPGMER Support Notification.
CMSBACK not logged on. AUTOLOG-ing now...


... snip ... top of post, old email index

other old CMSBACK email
https://www.garlic.com/~lynn/lhwemail.html#cmsback

and for other random email (I had been making it available online internally, including on the world-wide sales&marketing (virtual machine based) HONE system). misc. old email mentioning HONE
https://www.garlic.com/~lynn/lhwemail.html#hone

Date: 13 January 1983, 13:05:04 EST
From: xxxx in Endicott VM/370 Design
To: Lynn Wheeler

I'm told that you have access to the VMSHARE dialog, and have put it online at San Jose. I'm interested in doing the same thing here in Endicott. Could you please tell me what procedure you have to go through to get a tape from Tymshare, how much disk space needed, and if there are any IBM security measures that have to be taken?? (And anything else I need to know.) Your help is greatly appreciated.


... snip ... top of post, old email index

and my response to the above (by 1983, I was also making PCSHARE available internally). other old VMSHARE email
https://www.garlic.com/~lynn/lhwemail.html#vmshare

Date: 01/13/83 13:22:17
From: wheeler
To: xxxx in Endicott VM/370 Design

re: vmshare; I load both the VMSHARE & PCSHARE data here (both come on the same tape) and then distribute it around the corporation. VMSHARE is available at 5-8 other locations (ref: $VMSHAR QMARK). PCSHARE is available at 10-15 other locations (ref: $PCSHAR QMARK), including one location in Endicott. The current VMSHARE 291 disk is about 30 3350 cylinders. I can send you the complete data base now, and then put you on the monthly distribution list for the incremental changes.

Procedure for the monthly distribution is only new &/or changed files are shipped. Also a file that erases "old" files is shipped. All VMSHARE files are shipped class V (all pcshare files are shipped class P). The exec file is sent as a single, disk dumped file (VMSHERS EXEC). The VMSHARE 291 disk should be linked R/W and accessed as the B disk. The VMSHERS exec should be loaded onto an A disk and the following command executed: VMSHERS ERASE

After deleted files have been erased, then the VMSHARE 291 disk can be linked as the A-disk and the new/changed files loaded onto the VMSHARE 291 disk. The new/changed files are shipped with several per spool file (spool files are normally in the range of 3500-5000 records). For various purposes, I'm periodically requested to place files other than VMSHARE files on the VMSHARE 291 disk ... especially for SEs accessing the disk on the various HONE machines. Currently the only file on the VMSHARE 291 disk that didn't originate from the Tymshare tape is Melinda Varian's "What your Mother never told you about VM".

FYI: in the past I had sent this information to ZZZZZZ ... but he requested that he be removed from the distribution list because he didn't want it.


... snip ... top of post, old email index, HONE email

"ZZZZZZ" obfuscated to protect the guilty ... a prominent member of Endicott VM370 organization. And moving right along ...

Date: 19 January 1983, 08:06:53 EST
From: xxxx in Endicott VM/370 Design
To: Lynn Wheeler

Just wanted to acknowledge that I have the VMSHARE files in my reader... am waiting for system support to give me a disk to put them on. Thanks.


... snip ... top of post, old email index

--
virtualization experience starting Jan1968, online at home since Mar1970

OS idling

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS idling
Newsgroups: alt.os.development, alt.folklore.computers
Date: Sat, 17 Jul 2010 22:08:40 -0400
Louis Krupp <lkrupp_nospam@indra.com.invalid> writes:
The OS itself should be idling most of the time; applications are supposed to be using CPU cycles, not the OS. The CPU (if there is only one) itself may be idle, or not.

Once upon a time, when I worked with Burroughs Large Systems (B6700, etc), there were two kinds of idle states: true idle and false idle. True idle happened when there was really very little to do. The B6700 OS (the "Master Control Program") used the register displays to show when the CPU was idling. False idle (which displayed the same pattern) happened when the system was low on physical memory and there was lots of swapping and paging to and from disk and the system spent a significant amount of time waiting for disk I/O to complete. Programs that could have been doing something useful were blocked waiting for memory. This state of affairs was called "thrashing," and back when memory was expensive, it happened a lot. (I remember a Burroughs salesman saying, back in about 1977, when I suggested that a particular mainframe configuration might be a little short on memory for the projected workload, "It's got a million and a half bytes!")


recent discussion about how to have the system in wait when idle ... so that the system meter would stop.
https://www.garlic.com/~lynn/2010j.html#34 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010j.html#35 IBM Rational Developer for System z
https://www.garlic.com/~lynn/2010j.html#37 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010k.html#25 Was VM ever used as an exokernel?

this was back when most systems were leased and monthly charges were based on the hardware system meter. The hardware system meter ran whenever the cpu was executing and/or there was active i/o. the system meter would "coast" for 400ms after the end of all activity before it actually came to a stop (i.e. both cpu and i/o had to be quiet for 400ms before the system meter stopped).

as mentioned, the issue was important for online commercial time-sharing service bureaus that recovered their costs by use charges ... and were looking at having the system up 7x24. in the beginning, off-shift usage tended to be light ... i.e. the system would be idle most of the time. the problem was to develop a trick to have an active i/o waiting for incoming characters ... w/o having the system meter running. misc. past posts mentioning (virtual machine based) commercial time-sharing service bureaus
https://www.garlic.com/~lynn/submain.html#timeshare
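
a toy model of the meter rule as described (runs during any cpu or i/o activity, coasts 400ms past the last activity before stopping ... everything below is illustrative):

COAST_MS = 400   # meter keeps running this long after all activity ends

def metered_ms(events):
    """events: (start_ms, duration_ms) busy intervals (cpu or active i/o)."""
    total, covered_until = 0, -1
    for start, dur in sorted(events):
        run_until = start + dur + COAST_MS
        if start > covered_until:
            total += run_until - start            # meter restarts from stopped
        elif run_until > covered_until:
            total += run_until - covered_until    # meter never got to stop
        covered_until = max(covered_until, run_until)
    return total

# 10ms of work every 300ms: meter never stops (gaps shorter than the coast)
polling = [(t, 10) for t in range(0, 10_000, 300)]
# the same total work in two isolated bursts: meter runs ~1.1s of the 10s
quiet = [(0, 170), (5_000, 170)]
print(metered_ms(polling), metered_ms(quiet))

any recurring activity with a period shorter than the 400ms coast keeps the meter running continuously ... which is exactly why a channel program that could quiesce while still accepting incoming characters mattered.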

at dec81 acm sigops in asilomar ... jim asked me if i could help a friend of his that was doing a phd at stanford on page replacement and thrashing controls. there apparently was a lot of opposition because it was contrary to some academic work that had been done on the subject in the late 60s. jim was asking me ... because i had done similar work (to what was in the phd) as an undergraduate in the late 60s ... which was picked up and shipped in cp67.
https://www.garlic.com/~lynn/2006w.html#email821019

for whatever reason, local management blocked letting me send a reply until almost a year later. in the 70s, the grenoble scientific center published a paper in acm on changes to cp67 corresponding to the late-60s academic work. I had comparison numbers showing that cp67 at the cambridge scientific center, with my modifications, significantly outperformed the grenoble scientific center cp67 system. Furthermore, the grenoble scientific center cp67 ran on a 1mbyte 360/67 (154 4k pageable pages after fixed memory requirements) compared to the cambridge scientific center 360/67 with 768k of real storage (104 4k pageable pages after fixed memory requirements). The grenoble system supported 35 users compared to 75-80 users on the cambridge system (workload profiles for the two sets of users were nearly the same).
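
working those numbers through (taking the midpoint of the 75-80 users ... just a rough real-storage efficiency measure, not anything from the original papers):

# pageable 4k pages available per supported user
grenoble  = 154 / 35      # ~4.4 pages/user (1mbyte 360/67, 35 users)
cambridge = 104 / 77.5    # ~1.3 pages/user (768k 360/67, 75-80 users)
print(f"grenoble: {grenoble:.1f} pages/user, cambridge: {cambridge:.1f}, "
      f"ratio ~{grenoble / cambridge:.1f}x")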

misc. past post mentioning work on page replacement and thrashing controls
https://www.garlic.com/~lynn/subtopic.html#wsclock

misc. other past posts mentioning dynamic adaptive resource management work (as undergraduate in late 60s) ... sometimes referred to as "fair share scheduling" ... since default resource use policy was "fair share" (sometimes also called "wheeler" scheduler).
https://www.garlic.com/~lynn/subtopic.html#fairshare

--
virtualization experience starting Jan1968, online at home since Mar1970

OS idling

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS idling
Newsgroups: alt.os.development, alt.folklore.computers
Date: Sun, 18 Jul 2010 09:33:32 -0400
re:
https://www.garlic.com/~lynn/2010j.html#34 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010j.html#35 IBM Rational Developer for System z
https://www.garlic.com/~lynn/2010j.html#37 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010k.html#25 Was VM ever used as an exokernel?
https://www.garlic.com/~lynn/2010l.html#23 OS idling

one of the issues with having an independent hardware meter on which to base lease charges ... was that lots of customers could run their old operating systems and/or could heavily (source/binary) modify some standard operating system ... as such, the reliability of any (software) accounting information (for machine lease charges) might be in question.

there have been all sorts of stories about students fiddling software accounting at univ. in order to bypass some account-use billing limit ... imagine taking that to the next level, where an organization attempts to limit the real money that it has to pay to the vendor (for leased hardware).

there has been something analogous with current mainframe configurations. hardware tends to be shipped with all sorts of additional processors ... customers are allowed to somewhat dynamically enable & disable processors ... for which they are billed. A mainframe configuration can also have some specially designated processors that are supposedly enabled for only running special kinds of software ... where the specialty processors are billed at a lower rate than the standard processors (the specialty processors are normal processors that run a highly stripped down/efficient hidden operating system ... and the standard operating system has a custom interface for assigning execution on the specialty operating system ... or it can be a different kind of operating system, like linux or opensolaris).

In any case, there have been some recent threats of litigation involving a software vendor that is advertising a product that enables "non-approved" workloads on the specialty engines (engines for which the hardware vendor has lower use charges than the "standard" processors/engines; aka workloads that would normally be run on processors/engines with higher use charges). example

Lowering Mainframe TCO Through zIIP Specialty Engine Exploitation
http://www.ibmsystemsmag.com/mainframe/24806p1.aspx
Neon Software CEO rejects IBM warnings on mainframe licensing issues due to zPrime
http://searchdatacenter.techtarget.com/news/article/0,289142,sid80_gci1363645,00.html
IBM Takes Legal Aim at Open Software Project. It's About Time
http://industry.bnet.com/technology/10006802/ibm-takes-legal-aim-at-open-software-project-maybe-its-about-time/
Neon sues IBM over 'anticompetitive' mainframe tactics
http://www.theregister.co.uk/2009/12/15/neon_zprime_ibm_lawsuit/
IBM responds to Neon Software mainframe lawsuit
http://itknowledgeexchange.techtarget.com/mainframe-blog/ibm-responds-to-neon-software-mainframe-lawsuit/
Neon updates zPrime mainframe accelerator
http://www.channelregister.co.uk/2010/06/15/neon_zprime_update/
IBM Strikes Back at Neon
http://www.eweek.com/c/a/IT-Infrastructure/IBM-Strikes-Back-at-Neon-Systems-689643/

other references to speciality engines/processors

zIIP and Other Specialty Engines
http://www.ibmsystemsmag.com/mainframe/novemberdecember07/coverstory/18228p1.aspx
Features
http://www-03.ibm.com/systems/z/hardware/features/index.html
IBM System z Integrated Information Processor (zIIP)
http://www-03.ibm.com/systems/z/hardware/features/ziip/index.html
Specialty Engine Support
http://publib.boulder.ibm.com/infocenter/zvm/v5r4/topic/com.ibm.zvm.v54.hcpa7/hcse7b3012.htm
Mainframe specialty engines
http://it.toolbox.com/blogs/mainframe-world/mainframe-specialty-engines-32573
Getting Value from IBM's Specialty Engines
http://www.ca.com/files/newsletters/ca_mf_newsletter_200802_page6.htm
CA IDMS r17 Exploits zIIP Engine to Deliver Greater Capacity
http://www.ca.com/us/press/release.aspx?cid=190318
IBM authorizes OpenSolaris on mainframes
http://www.theregister.co.uk/2008/11/24/ibm_authorizes_mainframe_opensolaris/

an HP view mentioning specialty engines:

The Real Story about the IBM Mainframe Makeover
http://h20338.www2.hp.com/enterprise/us/en/messaging/realstory-ibm-mainframemakeover.html

for other drift, the above mentions superdome. in the 90s, HP brought in somebody from the austin RIOS group to run superdome (he had previously been in the Kingston supercomputer group and before that at Cray). When they were first starting up ... there was some discussion of running it as an independent business ... with participants getting equity ... and we were asked early on if we would be interested in participating (turns out that they never went thru with the independent business operation).

--
virtualization experience starting Jan1968, online at home since Mar1970

Idiotic programming style edicts

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Idiotic programming style edicts
Newsgroups: alt.folklore.computers
Date: Sun, 18 Jul 2010 09:53:03 -0400
Bill Pechter <pechter@tucker.pechter.dyndns.org> writes:
Hanscom was in Bedford Mass.... not too far from DEC Training. It was split between military and local civilian at some point.

A Field Service Engineer I know was flown from there to her home after spending some time at a hospital up near training.

Her boss, a pilot, flew up in his private plane to give her a quicker trip home. DEC was really like one big disfunction family (at least in Field Service).


re:
https://www.garlic.com/~lynn/2010j.html#21 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010j.html#25 Idiotic programming style edicts

Moore Army Air Field (located on Ft Devens):
https://en.wikipedia.org/wiki/Moore_Army_Air_Field

above includes mention of hosting c-130s:
https://en.wikipedia.org/wiki/C-130_Hercules

Hanscom was/is air force:
https://en.wikipedia.org/wiki/Hanscom_Air_Force_Base

not a lot of references to anything that would look like c-130.

further out western mass. is westover joint air reserve base
https://en.wikipedia.org/wiki/Westover_Joint_Air_Reserve_Base

--
virtualization experience starting Jan1968, online at home since Mar1970

Root Zone DNSSEC Deployment Technical Status Update

Refed: **, - **, - **, - **, - **, - **, - **
Date: Sun, 18 Jul 2010 10:35:18 -0400
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Root Zone DNSSEC Deployment Technical Status Update
Blog: cryptography
On 07/18/2010 07:19 AM, Steven Bellovin wrote:
DNSSEC signatures do not need to have a long lifetime; no one cares if, in 10 years, someone can find a preimage attack against today's signed zones. This is unlike many other uses of digital signatures, where you may have to present evidence in court about what some did or did not sign.

It's also unclear to me what the actual deployment is of stronger algorithms, or of code that will do the right thing if multiple signatures are present.


the PKI industry had gotten into something of a chicken&egg situation, trying to justify their high infrastructure costs by claiming various sorts of things ... that required even higher infrastructure costs.

we had been called in to help wordsmith the cal. state electronic signature legislation ... which was being heavily lobbied by PKI industry to mandate (PKI) digital signatures. some past posts on the subject
https://www.garlic.com/~lynn/subpubkey.html#signature

and the lawyers dispelled some myths about PKI digital signatures and their correspondence to human signatures and things like non-repudiation ... there were numerous references in the period that somehow PKI digital signatures might imply human signatures &/or non-repudiation. from the lawyers there was somewhat the sense of cognitive dissonance ... possibly because the terms "human signature" and "digital signature" both contain the word "signature".

the opposite approach (to claiming more & more attributes for PKI digital signatures at ever escalating infrastructure costs) ... is to just drop back to asymmetric (public key) cryptography "digital signatures" being used in place of shared-secrets as a countermeasure to various kinds of eavesdropping and replay attacks (aka replacing the requirement for a shared-secret as the authentication mechanism).

--
virtualization experience starting Jan1968, online at home since Mar1970

OS idling

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS idling
Newsgroups: alt.os.development, alt.folklore.computers
Date: Sun, 18 Jul 2010 13:49:35 -0400
scott@slp53.sl.home (Scott Lurndal) writes:

13.10  IDLE PROCESSOR (IDL)/OP=8B

+----+----+
| OP | AF |
+----+----+

OP = 8B
AF = Unused and reserved.

Function:

The Idle processor instruction causes the processor to change from
EXECUTING mode to IDLE mode.

A processor in IDLE mode is sensitive to ALL condition interrupts (see section 6).

Whenever an idle processor responds to an interrupt, it performs an
Interrupt procedure as defined in section 6.1.

This instruction may only be executed with Privileged Enable set or an
Invalid Instruction fault (IEX=02) is reported.



re:
https://www.garlic.com/~lynn/2010l.html#23 OS idling
https://www.garlic.com/~lynn/2010l.html#24 OS idling

360 had "load psw" instruction ... load program status word ... which had all sorts of flag bits specifying things like wait-state (aka "idle") ... "problem"/"supervisor" state (i.e. whether executing privileged/supervisor state instructions was allowed), instruction address, whether enabled for various i/o interrupts, timer interupts, etc ... conditions.

q&d conversion of green card ios3270 to html
https://www.garlic.com/~lynn/gcard.html

description of various program status word formats
https://www.garlic.com/~lynn/gcard.html#5

360/67 ... and later 370 address-translation (extended) mode, redefined some of the PSW bits ... to "summary" bits ... and moved the detailed enable/disable mask bits to "control registers" (like which specific i/o channels were enabled or disabled for interrupts).

besides loading a new PSW via the explicit LPSW instruction ... there were also specific "new" PSWs loaded when different kinds of interrupts occurred (i/o, supervisor call, timer, program check, machine check, etc ... along with corresponding places to save the current PSW contents when an interrupt occurred). machine fixed storage locations for the old & new interrupt PSWs ... as well as some other fixed storage locations:
https://www.garlic.com/~lynn/gcard.html#4

current load psw instruction definition (64bit, 16 byte PSW expanded from original 360 24bit, 8 byte PSW):
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/10.21?DT=20040504121320
(newer) load psw "extended"
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/10.22?DT=20040504121320

early cp67 and vm370 kernels would have large sections of code disabled for interrupts ... and then decide to switch from supervisor(kernel)/disabled mode to enabled mode running some application. There was frequently some relatively trivial work involved in the decision to run some application work ... which would be "lost" if there were pending/queued interrupts that immediately happened on the switch to application (enabled-for-interrupt) mode.

I added some games (shipped to customers as part of my vm370 "resource manager") to 1) do enable/disable for interrupts ... using the SSM instruction (which only changed the small subset of the PSW associated with interrupts) before going to the effort of switching to running an application (which would "drain" queued/pending interrupts) and 2) potentially switch to running applications disabled for some types of interrupts (when interrupts were happening at a high rate).

The 2nd case, involving asynchronous interrupts, can really blow cache hit rates and processor thruput (because of the switching back & forth between kernel interrupt handlers and application execution). It is actually possible to have both higher I/O thruput and higher application thruput ... by more judiciously controlling when interrupts can happen.
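
a toy model of the 2nd game, with assumed numbers ... counting kernel/application transitions when every interrupt is fielded immediately versus running disabled and draining pending interrupts in batches:

import random

random.seed(1)
arrivals = sorted(random.uniform(0, 1000) for _ in range(200))  # interrupt times, ms

# enabled while running: each interrupt costs a kernel entry plus a resume
immediate_switches = 2 * len(arrivals)

# run disabled in short slices; drain everything pending at each slice end
SLICE_MS, batched_switches, i = 25, 0, 0
for slice_end in range(SLICE_MS, 1001, SLICE_MS):
    pending = 0
    while i < len(arrivals) and arrivals[i] <= slice_end:
        pending, i = pending + 1, i + 1
    if pending:
        batched_switches += 2     # one entry/resume handles the whole batch

print(immediate_switches, batched_switches)   # e.g. 400 vs ~80

fewer transitions means the application's cache working set survives longer ... which is the thruput effect described above.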

current SSM instruction:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/10.53?DT=20040504121320

chapter discussing kinds & operation of interrupts:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/6.0?DT=20040504121320#HDR06AH1

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Hacking -- Fact or Fiction

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 19 July, 2010
Subject: Mainframe Hacking -- Fact or Fiction
Blog: MainframeZone
BITNET had the xmas exec
http://vm.marist.edu/~vmshare/browse.cgi?fn=CHRISTMA&ft=PROB
and
http://catless.ncl.ac.uk/Risks/5.81.html#subj1
other references in this archived post
https://www.garlic.com/~lynn/2004p.html#16 Mainframe Virus??

almost exactly a year before the morris worm on the internet
https://en.wikipedia.org/wiki/Morris_worm

bitnet (& earn in europe) was a corporate-sponsored network (of educational institutions) that used technology similar to the internal network (the internal network was larger than the arpanet/internet from just about the beginning until sometime late '85 or possibly early '86).

misc. past posts mentioning bitnet (& earn)
https://www.garlic.com/~lynn/subnetwork.html#bitnet
misc. past posts mentioning internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

We were tangentially involved in the cal. state data breach legislation (a decade ago). We had been brought in to help wordsmith the cal. state electronic signature legislation. some past posts
https://www.garlic.com/~lynn/subpubkey.html#signature

several of the participants were also heavily involved in privacy issues. they had done in-depth public surveys and the number one issue was "identity theft" ... and the major form (of identity theft) was account fraud (fraudulent financial transactions against existing accounts) as a result of various kinds of breaches.
https://www.garlic.com/~lynn/subintegrity.html#harvest

There seemed to be little or nothing being done about such breaches ... and it was apparently felt that the publicity (as the result of breach notification) would prompt corrective action and countermeasures. Since that time numerous other states have passed similar legislation. There also has been numerous federal data breach bills proposed ... generally falling into one of two categories ... 1) (federal preemption) eliminating most breach notification and 2) bills similar to existing state legislation requiring breach notification.

--
virtualization experience starting Jan1968, online at home since Mar1970

zPDT paper

From: lynn@garlic.com (Lynn Wheeler)
Date: 19 July, 2010
Subject: zPDT paper
Blog: MainframeZone
As an undergraduate in the '60s ... I had modified CMS to use a special CCW opcode (x"FF") for disk I/O ... that drastically reduced the CCW program translation overhead (in disk i/o) and performed the operation "synchronously" (aka the disk I/O SIO would complete with CC=1, CSW stored, indicating the operation was complete). This eliminated a lot of extraneous CMS overhead and CP67 simulation. CMS would decide when it initially came up whether it was running on bare hardware ... or under a cp67 virtual machine with enhanced facilities.

The people at the science center complained that I had violated the principles of operation with the CCW. In part because it significantly reduced the overhead of running a CMS virtual machine, they designed an alternative. The principles of operation define the "DIAGNOSE" instruction as having model-dependent implementation. They proposed a virtual machine "model" ... where the DIAGNOSE instruction had an implementation specific to running in a CP67 virtual machine. This is what later shipped in the CP67/CMS product (still with the check by CMS to see whether it was running on bare hardware ... and use standard disk i/o ... or running in a virtual machine and use DIAGNOSE i/o).

In the morph from CP67 CMS (cambridge monitor system) to VM370 CMS (renamed conversational monitor system), the ability to run on bare hardware was removed.
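
a structural sketch of that startup decision (python stand-ins for what was really 360 assembler ... the probe and the actual i/o details are not shown, and all names are hypothetical):

def diagnose_read(dev, block):
    # under cp67: the hypervisor does the whole I/O during the call itself --
    # no CCW translation to pay for, no interrupt to field afterwards
    return b"\x00" * 800   # stand-in record

def sio_read_and_wait(dev, block):
    # bare hardware: build a channel program, start the I/O, wait for the
    # I/O interrupt signalling completion
    return b"\x00" * 800   # stand-in record

RUNNING_IN_VM = True   # probed once when the system comes up (probe not shown)

def read_record(dev, block):
    if RUNNING_IN_VM:
        return diagnose_read(dev, block)
    return sio_read_and_wait(dev, block)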

--
virtualization experience starting Jan1968, online at home since Mar1970

How much is the Data Center charging for each mainframe user?

From: lynn@garlic.com (Lynn Wheeler)
Date: 19 July, 2010
Subject: How much is the Data Center charging for each mainframe user?
Blog: MainframeZone
long ago and far away ... mainframes were leased and had a system meter that was used as the basis for monthly charges. The system meter ran whenever the CPU was executing and/or whenever there was active I/O. Furthermore, the system meter continued to run/coast for 400ms after both CPU and I/O were completely idle.

There was a major problem with the expansion of online commercial timesharing service bureaus to 7x24 operation ... since billing charges were based on recovering total infrastructure costs ... with the system-meter monthly lease being a major component (in the days when software was still mostly free). The issue was that in the early days, leaving the system up 7x24 showed relatively light load offshift ... typically not enough to actually cover offshift lease charges (assuming the system meter ran constantly) w/o enormously inflating use charges (which would be a major inhibitor to actually having offshift use). The trick was to find some I/O sequence that would still allow accepting incoming characters and connections ... but would allow the channels to quiesce when there was nothing going on (allowing the system meter to stop).

It was a major step forward for 7x24 online commercial timesharing service bureaus when a channel program hack was deployed that allowed incoming characters to be accepted while letting the system meter stop when nothing was actually going on.

For whatever reason, the POK favorite son operating system had a major component with a fixed timer wakeup that occurred every 400ms whether there was anything going on or not ... aka (with the 400ms coast) the system meter would never stop.

--
virtualization experience starting Jan1968, online at home since Mar1970

Wax ON Wax OFF -- Tuning VSAM considerations

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 19 July, 2010
Subject: Wax ON Wax OFF -- Tuning VSAM considerations.
Blog: MainframeZone
in the late 70s I started to get into trouble with the disk division by making statements about how relative system disk thruput had significantly declined over a decade. This is a table from the early 80s (in an old archived post from the early 90s) ... showing relative system disk i/o thruput had declined by an order of magnitude over a period of 15 yrs.
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door

some disk division executive took offense and assigned their performance group to refute the statements. After a few weeks ... they basically came back and said that I had slightly understated the decline in relative system disk thruput over the period. They eventually redid the spin on the subject and turned it into a SHARE user group presentation on optimizing application thruput involving disk i/o ... B874 @ SHARE 63 ... small piece of B874 included in this archived post
https://www.garlic.com/~lynn/2006o.html#68 DASD Response Time (on antique 3390?)

One of the things from the late 70s and 80s ... was leveraging the significant increase in electronic storage for caching as a countermeasure to the significant decline in relative system disk i/o thruput. This shows up in some of the arguments between the IMS group and the System/R group (original relational/sql implementation) ... some past posts mentioning System/R
https://www.garlic.com/~lynn/submain.html#systemr

The IMS group pointed to the implicit relational indexes resulting in a doubling of disk storage requirements and possibly a 4-5 times increase in disk I/Os for processing the index. The System/R group would counter with the significant admin overhead required to manage the direct record pointers (anytime the IMS DBMS required a re-org). In the 80s, disk capacities significantly increased while disk $$/mbyte significantly decreased ... mitigating the doubling of physical disk space. Also, electronic memory sizes significantly increased ... allowing significant index caching ... eliminating some of the physical disk i/os involved in index processing.
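
the i/o arithmetic behind that argument, as a rough sketch (index depth and cache hit ratio are assumed for illustration):

# expected physical disk I/Os to fetch one record through an index,
# when index levels may be resident in an electronic-storage cache
def ios_per_fetch(index_levels, index_cache_hit_ratio):
    return index_levels * (1 - index_cache_hit_ratio) + 1  # +1 for the data record

print(ios_per_fetch(4, 0.00))   # 5.0 -- uncached: the "4-5 times" I/O complaint
print(ios_per_fetch(4, 0.95))   # 1.2 -- big memory caches most of the index
# an IMS-style direct record pointer is a single I/O, but at the price of
# admin overhead whenever records move (re-org invalidates the pointers)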

--
virtualization experience starting Jan1968, online at home since Mar1970

OS idling

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS idling
Newsgroups: alt.os.development, alt.folklore.computers
Date: Mon, 19 Jul 2010 14:25:47 -0400
Louis Krupp <lkrupp_nospam@indra.com.invalid> writes:
Once upon a time, when I worked with Burroughs Large Systems (B6700, etc), there were two kinds of idle states: true idle and false idle. True idle happened when there was really very little to do. The B6700 OS (the "Master Control Program") used the register displays to show when the CPU was idling. False idle (which displayed the same pattern) happened when the system was low on physical memory and there was lots of swapping and paging to and from disk and the system spent a significant amount of time waiting for disk I/O to complete. Programs that could have been doing something useful were blocked waiting for memory. This state of affairs was called "thrashing," and back when memory was expensive, it happened a lot. (I remember a Burroughs salesman saying, back in about 1977, when I suggested that a particular mainframe configuration might be a little short on memory for the projected workload, "It's got a million and a half bytes!")

re:
https://www.garlic.com/~lynn/2010l.html#23 OS idling
https://www.garlic.com/~lynn/2010l.html#24 OS idling
https://www.garlic.com/~lynn/2010l.html#27 OS idling

in the late 70s ... I was starting to make comments about the significant decline in relative system disk i/o thruput (over a period of a decade) ... and disk i/o becoming a major bottleneck (disk i/o thruput got a little bit faster ... but processor speed and amount of real storage significantly increased). in the early 80s, i was claiming that the relative system thruput decline was something like an order of magnitude over an approx. 15yr period. post from the early 90s with a table from the early 80s:
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door

some disk division executive took offense and assigned their performance group to refute the statements. After a few weeks ... they basically came back and said that I had slightly understated the decline in relative system disk thruput over the period. They eventually redid the spin on the subject and turned it into a SHARE user group presentation on optimizing application thruput involving disk i/o ... B874 @ SHARE 63 ... small piece of B874 included in this archived post
https://www.garlic.com/~lynn/2006o.html#68 DASD Response Time (on antique 3390?)

One of the things from the late 70s and 80s ... was leveraging the significant increase in electronic storage for caching to compensate for the significant decline in relative system disk i/o thruput ... aka the 60s direct use of electronic storage for application processor execution was starting to be dominated by using electronic storage more & more to avoid doing disk i/o.

This shows up in some of the arguments between the IMS DBMS group and the System/R group (original relational/sql implementation) ... some past posts mentioning System/R
https://www.garlic.com/~lynn/submain.html#systemr

The IMS group pointed to the implicit relational indexes resulting in a doubling of disk storage requirements and possibly a 4-5 times increase in disk I/Os for processing the index. The System/R group would counter with the significant admin overhead required to manage the direct record pointers (anytime the IMS DBMS required a re-org). In the 80s, disk capacities significantly increased while disk $$/mbyte significantly decreased ... mitigating the doubling of physical disk space. Also, electronic memory sizes significantly increased ... allowing significant index caching ... eliminating some of the physical disk i/os involved in index processing.

This showed up in the comparison of the high-end mainframe 3033 in the late 70s versus 4341s ... when the system architecture was limited to 16mbytes of real storage. For less than the cost of a 3033, it was possible to get a cluster of five-six 4341s ... which had a slightly higher aggregate MIP-rate, between 2-3 times the aggregate I/O capacity, and each 4341 could have its own 16mbytes of real storage (5-6 times the aggregate of the 3033).

Eventually there was a 32mbyte real storage hack for the 3033 ... that slightly compensated. Instructions couldn't address more than 16mbytes in either real or virtual mode. The 370 page-table-entry was a half-word, 16 bits ... with 12 bits for the 4k-byte real page number (16mbytes total), two defined bits, and two undefined bits. They took the two undefined bits and prefixed them to the real page number ... providing a 14-bit 4k-byte real page number (64mbytes total). Then there was a gimmick involved in getting virtual pages into the area above the 16mbyte real storage line.
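
the addressing arithmetic as a minimal sketch (the exact pte field positions are my assumption for illustration ... the post only specifies the 12 page-number bits plus the two formerly-undefined bits):

# the 3033 bit gimmick: 12-bit page number + 12-bit byte offset = 24 bits
# (16mbytes); prefixing the 2 spare bits gives 14 + 12 = 26 bits (64mbytes)
PAGE_SHIFT = 12                       # 4k pages -> 12-bit byte offset

def real_address(page_no_12, extra_2, offset):
    page_no_14 = (extra_2 << 12) | page_no_12   # prefix the two spare bits
    return (page_no_14 << PAGE_SHIFT) | offset

print(hex(real_address(0xFFF, 0b00, 0xFFF)))  # 0xffffff: old 16mbyte limit
print(hex(real_address(0xFFF, 0b11, 0xFFF)))  # 0x3ffffff: 64mbyte with the hack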

--
virtualization experience starting Jan1968, online at home since Mar1970

History of Hard-coded Offsets

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: History of Hard-coded Offsets
Newsgroups: bit.listserv.ibm-main
Date: 20 Jul 2010 12:21:04 -0700
wmhblair@COMCAST.NET (William H. Blair) writes:
Younger and newer programmers followed the habits of those who came before them. Many of those who first ventured into "OS" extensions and "neat, useful programs" did so on what today would be considered unusably slow computers (mostly due to I/O). In addition, output was to a line printer, and hundred-page Assembler program listings were to be avoided. Today we save such things in various places on disk drives, and only rarely actually kill trees. In the very early days "OS" macros were not easily or directly available, and most folks didn't bother to create their own versions of DSECTs for things that you could not get out of IBM in the first place. Fewer pages and fewer expanded -- not to mention, printed -- DSECT macros were virtues. All knew where the CVT pointer was; nobody bothered with the CVT DSECT macro, much less the PSA DSECT -- which did not then even exist.

the folklore is that the person doing opcode lookup in os/360 assembler (including F) was told they had 256 bytes to do the implementation ... as a result the table was kept on disk ... requiring rereading the table for each assembler statement.

i had a 2000 statement assembler program that took nearly 30 minutes to assemble under os/360 release 6 on a 64kbyte 360/30. It had conditional assembly for either "standalone" mode ... included its own interrupt handler, monitor, device drivers, error recovery, etc ... or "os/360" mode ... which used five DCB macros. If assembled for os/360 mode it took nearly an hour elapsed time ... with the assembler taking six minutes elapsed time per DCB macro.

In any case, a later performance boost for assembler F ... was to keep the opcode table in storage.
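
a minimal sketch of the fix ... a tiny invented opcode table searched in storage instead of rereading the table from disk for every statement; not the actual assembler F code (the mnemonics/opcodes shown are real 360 ones, the table is just a sample):

#include <string.h>
#include <stdio.h>

struct opcode { const char *mnemonic; unsigned char code; };

/* table kept sorted in storage, loaded once rather than per statement */
static const struct opcode optab[] = {
    { "AR", 0x1A }, { "BALR", 0x05 }, { "L", 0x58 }, { "LR", 0x18 },
    { "ST", 0x50 },
};

static int lookup(const char *mnem) {
    int lo = 0, hi = (int)(sizeof optab / sizeof optab[0]) - 1;
    while (lo <= hi) {                       /* binary search, no disk I/O */
        int mid = (lo + hi) / 2;
        int cmp = strcmp(mnem, optab[mid].mnemonic);
        if (cmp == 0) return optab[mid].code;
        if (cmp < 0) hi = mid - 1; else lo = mid + 1;
    }
    return -1;                               /* undefined mnemonic */
}

int main(void) {
    printf("LR -> 0x%02X\n", lookup("LR"));
    return 0;
}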

It wasn't so much that I/O was slow ... it was that an enormous amount of I/O was being done to compensate for lack of real storage.

Univ. had a 709 running tape-to-tape ibsys monitor (1401 doing UR<->tape front end) that took approx. a second per student job. Moved to a 768kbyte 360/65 (actually a 67 running in 65 mode) os/360 mft14 with HASP ... took over half a minute for the same student job ... using 3-step fortran G compile, link-edit, and go. Much of it was the job-scheduler for each step loading an enormous number of different PDS members from disk (each time).

With os/360 release 11, I had started doing production "job-stream" stage-2 sysgens ... where I carefully re-ordered the stage-2 sysgen statements to optimize placement of files & PDS members on disk ... getting approx. 300% elapsed time improvement for the student job workload (because everything was so disk intensive ... there was significant benefit in optimizing arm seek with careful location of files & members on disks).
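
a minimal sketch of why the re-ordering mattered ... made-up cylinder numbers, comparing total arm travel for the same reference pattern under scattered vs. co-located placement:

#include <stdlib.h>
#include <stdio.h>

/* sum the cylinder-to-cylinder arm movement for a reference string */
static long arm_travel(const int *cyl, int n) {
    long total = 0;
    for (int i = 1; i < n; i++)
        total += labs((long)cyl[i] - cyl[i - 1]);
    return total;
}

int main(void) {
    /* same six references (job scheduler, PDS members, work files ...) */
    int scattered[] = { 5, 180, 30, 250, 12, 200 };  /* default placement */
    int ordered[]   = { 5,   8, 10,  12, 14,  16 };  /* high-use data co-located */
    printf("scattered placement: %ld cylinders of arm travel\n",
           arm_travel(scattered, 6));
    printf("optimized placement: %ld cylinders of arm travel\n",
           arm_travel(ordered, 6));
    return 0;
}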

Almost all of the univ. workload was done with os/360 running on the bare hardware (360/67 as a 360/65 w/o virtual memory) ... although they let me play with the machine on the weekend as a 360/67 using (virtual machine) cp67. This old post has part of a presentation that I did at the fall68 share meeting in Atlantic City:
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

it discusses a bunch of cp67 code that I had rewritten to improve pathlengths and other things, mentions comparing MFT14 running under cp67 ... with & w/o the cp67 pathlength rewrite ... and also mentions that the workload under vanilla os/360 MFT14 on the bare machine ran in about the same elapsed time as the optimized MFT14 ran under unmodified cp67.

with the increase in real storage ... it was possible to reduce the intensity of disk activity (keeping more stuff in real storage). recent posts mentioning that by the early 80s ... the relative system disk I/O thruput had declined by (better than) ten times from the late 60s (i.e. the ratio of disk i/os per million instructions executed had to dramatically decrease)
https://www.garlic.com/~lynn/2010c.html#1 "The Naked Mainframe" (Forbes Security Article)
https://www.garlic.com/~lynn/2010h.html#70 25 reasons why hardware is still hot at IBM
https://www.garlic.com/~lynn/2010l.html#31 Wax ON Wax OFF -- Tuning VSAM considerations
https://www.garlic.com/~lynn/2010l.html#32 OS idling
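
as a hedged back-of-envelope illustration of that decline (round numbers assumed for illustration, not measured figures): if processor thruput grew something like 50 times over the period while disk arm thruput grew only 4-5 times, then the disk I/Os available per MIP fell by roughly 50/5 = 10 times ... i.e. a balanced-system workload had to do about one tenth the disk i/os per million instructions executed (substituting electronic storage/caching for disk i/o).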

old post with (late 60s/early 80s) comparison
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door

which eventually resulted in B874 @ SHARE 63 ... referenced here:
https://www.garlic.com/~lynn/2006o.html#68 DASD Response Time (on antique 3390?)

--
virtualization experience starting Jan1968, online at home since Mar1970

Age

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Age
Newsgroups: alt.folklore.computers
Date: Wed, 21 Jul 2010 09:23:47 -0400
Peter Flass <Peter_Flass@Yahoo.com> writes:
I wouldn't call that RISC. It was elegant, but not reduced.

i've periodically asserted that John went to RISC to go to the opposite extreme from the complexity of the (failed) future system effort ... misc. posts mentioning future system:
https://www.garlic.com/~lynn/submain.html#futuresys

misc. posts mentioning risc, 801, iliad, romp, rios, etc
https://www.garlic.com/~lynn/subtopic.html#801

the corporate tribute page has gone 404 ... but a few other pages:
http://www.thocp.net/biographies/cocke_john.htm
http://www.iment.com/maida/tv/computer/johncocke.htm

here is an interesting comment (which might be applied to the future system/risc comparison)

POWER to the people
http://www.ibm.com/developerworks/power/library/pa-powerppl/

from above:
IBM's John Cocke was no stranger to the battle against complexity. He had already worked on the IBM Stretch computer, a rival to the IBM 704 mainframe, and on Stretch successor ACS (Advanced Computing Systems), rival to the 704's successor, the S/360.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

TSSO - Hardcoded Offsets - Etc

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: TSSO - Hardcoded Offsets - Etc
Newsgroups: bit.listserv.ibm-main
Date: 21 Jul 2010 06:43:49 -0700
Jim.marshall@OPM.GOV (Jim Marshall) writes:
The comments are warranted although there is more to this than meets the eye. Since you cam blame me for unleashing TSSO on the SHAREWARE world you need to understand its origins. Back in the Air Force Data Services Center in the Pentagon in the 1970s as a Sgt we were running an IBM 360-75J and had just gotten in the first shipped IBM 30XX which was the IBM 3032 serial #6. So we had to convert from MVT to MVS and we contracted to PRC for assistance. In the door came Bill Godfrey, Carl Goswick, etc to assist us. Along with converting the MVT to a timesharing machine (putting up TSO & HASP 3.1 with PRC Mods), PRC helped us get MVS/JES2/TSO/etc into production to take over the MVT workload. Then the 360 would be available for dial-up Timesharing.

re:
https://www.garlic.com/~lynn/2010l.html#33 History of Hard-code Offsets

note that the 3032 was just a renamed 370/168-3 ... slight rework: instead of external 28*0 channels, it used the 303x channel director ... which was actually a repackaged 370/158 with just the integrated channel microcode (and the 370 microcode removed).

similarly the 3031 was a pair of 370/158 engines ... instead of a single engine (with both 370 and integrated channel microcode) ... there was the "3031" engine with just the 370 microcode and the channel director engine with just the integrated channel microcode.

the 3033 was a somewhat "new" box ... it started out as the 168 wiring diagram mapped to 20% faster chips. the chips also had something like 10 times the number of circuits ... which initially were to go unused. during the development cycle ... some rework was done to better take advantage of "on-chip" operations ... eventually resulting in the 3033 being approx. 50% faster than the 168-3 (instead of only 20% faster).

old email from late 70s mentioning AFDS
https://www.garlic.com/~lynn/2001m.html#email790404
https://www.garlic.com/~lynn/2001m.html#email790404b

started out looking at doing 20 4341s ... but growing to 210. other old email mentioning 43xx
https://www.garlic.com/~lynn/2001m.html#4341

--
virtualization experience starting Jan1968, online at home since Mar1970

Great things happened in 1973

Refed: **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 21 July, 2010
Subject: Great things happened in 1973
Blog: MainframeZone
old email item from 1973
https://www.garlic.com/~lynn/2006v.html#email731212

article done spring '05 ... they sent out photographer to the house to take pictures for the print version
https://web.archive.org/web/20190524015712/http://www.ibmsystemsmag.com/mainframe/stoprun/Stop-Run/Making-History/

... I had gotten blamed for online computer conferencing on the internal network in the late 70s and early 80s (the internal network was larger than the arpanet/internet from just about the beginning until sometime late '85 or early '86)
https://www.garlic.com/~lynn/subnetwork.html#internalnet

references to the original relational/sql implementation
https://www.garlic.com/~lynn/submain.html#systemr

old email referencing Jim palming off consulting to IMS group when he left for Tandem
https://www.garlic.com/~lynn/2007.html#email801006
https://www.garlic.com/~lynn/2007.html#email801016

Jim and I were keynotes at a dependable computer workshop
https://web.archive.org/web/20011004023230/http://www.hdcc.cs.cmu.edu/may01/index.html

old post referencing tribute for Jim at Berkeley
https://www.garlic.com/~lynn/2008p.html#27 Father Of Financial Dataprocessing

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Hacking -- Fact or Fiction

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 21 July, 2010
Subject: Mainframe Hacking -- Fact or Fiction
Blog: MainframeZone
re:
https://www.garlic.com/~lynn/2010l.html#28 Mainframe Hacking -- Fact or Fiction

from long ago and far away:
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

as an undergraduate ... I was doing a large number of kernel changes ... and would periodically get requests of various kinds from the vendor. While I didn't learn about these guys until much later ... in retrospect some of the vendor requests were of the kind that may have originated from them.

now some of the ctss people went to the science center on the 4th flr (where virtual machines, online computing, timesharing, GML, internal networks, etc ... was done), others went to multics on the 5th flr. as a result, there was some modicum of competition between the two flrs. old reference to their mainframe:
https://www.garlic.com/~lynn/2002l.html#42 Thirty Years Later: Lessons from the Multics Security Evaluation
https://www.garlic.com/~lynn/2002l.html#44 Thirty Years Later: Lessons from the Multics Security Evaluation
https://www.garlic.com/~lynn/2002l.html#45 Thirty Years Later: Lessons from the Multics Security Evaluation

not long after graduating and joining the vendor ... i got asked to run around with the new CSO the vendor had hired; he had come from long years of prestigious gov. service (involving physical security); I was supposed to provide some context regarding computer security

--
virtualization experience starting Jan1968, online at home since Mar1970

Who is Really to Blame for the Financial Crisis?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 21 July, 2010
Subject: Who is Really to Blame for the Financial Crisis?
Blog: IBM co/ex workers
25 People to Blame for the Financial Crisis; Phil Gramm
http://content.time.com/time/specials/packages/article/0,28804,1877351_1877350_1877330,00.html

from above:
He played a leading role in writing and pushing through Congress the 1999 repeal of the Depression-era Glass-Steagall Act, which separated commercial banks from Wall Street. He also inserted a key provision into the 2000 Commodity Futures Modernization Act that exempted over-the-counter derivatives like credit-default swaps from regulation by the Commodity Futures Trading Commission. Credit-default swaps took down AIG, which has cost the U.S. $150 billion thus far.

... snip ...

Gramm and the 'Enron Loophole'
http://www.nytimes.com/2008/11/17/business/17grammside.html

from above:
Enron was a major contributor to Mr. Gramm's political campaigns, and Mr. Gramm's wife, Wendy, served on the Enron board, which she joined after stepping down as chairwoman of the Commodity Futures Trading Commission.

... snip ...

Phil Gramm's Enron Favor
https://web.archive.org/web/20080711114839/http://www.villagevoice.com/2002-01-15/news/phil-gramm-s-enron-favor/

from above:
A few days after she got the ball rolling on the exemption, Wendy Gramm resigned from the commission. Enron soon appointed her to its board of directors, where she served on the audit committee, which oversees the inner financial workings of the corporation. For this, the company paid her between $915,000 and $1.85 million in stocks and dividends, as much as $50,000 in annual salary, and $176,000 in attendance fees ...

... snip ...

Greenspan Slept as Off-Books Debt Escaped Scrutiny
http://www.bloomberg.com/apps/news?pid=20601109&refer=home&sid=aYJZOB_gZi0I

from above:
That same year Greenspan, Treasury Secretary Robert Rubin and SEC Chairman Arthur Levitt opposed an attempt by Brooksley Born, head of the Commodity Futures Trading Commission, to study regulating over-the-counter derivatives. In 2000, Congress passed a law keeping them unregulated.

... snip ...

In the aftermath of Enron, Congress passed Sarbanes-Oxley ... in theory giving SEC powers to prevent anything like Enron from happening again.

Possibly because the GAO didn't believe that SEC was doing anything ... even after Sarbanes-Oxley and all the new audit procedures, GAO started publishing reports of public company financial reports (subject to SEC and Sarbanes-Oxley) that they considered fraudulent and/or in error.
https://www.gao.gov/products/gao-06-1079sp
http://www.gao.gov/new.items/d03395r.pdf
http://www.gao.gov/new.items/d06678.pdf
http://www.gao.gov/new.items/d061053r.pdf

with the uptick in fraudulent reports even after Sarbanes-Oxley ... the question became how to spin the significant audits imposed by Sarbanes-Oxley:

The person that tried for a decade to get SEC to do something about Madoff testified in Congressional hearings that tips turn up 13 times more fraud than audits (and that SEC didn't have a tip hotline ... but had an 800 number for companies to complain about audits).

Early last year, I was asked about HTML'izing the Pecora hearings (the transcripts had been scanned the previous fall at the boston library and the files were up on the wayback machine) ... some assumption that the new congress had an appetite to do something about the financial mess. Lots of xrefs and indexing as well as connections between what happened then and corresponding activity now. However, sometime later ... i got a call that there wasn't the appetite after all.

In the fall 2008 congressional hearings into the rating agencies, there was testimony that the unregulated loan originators were able to package up loans & mortgages into toxic CDOs and pay the rating agencies to get triple-A ratings (even tho both the toxic CDO sellers and the rating agencies knew they weren't worth triple-A ratings). As a result, the unregulated loan originators could unload every loan they could make without regard to loan quality or borrower's qualification; the only limit on their revenue was how many, how fast, and how large the loans they could write. CDOs had been done during the S&L crisis to obfuscate the underlying values ... but w/o triple-A ratings there wasn't a lot of activity. Being able to pay for triple-A ratings resulted in something like $27T worth of toxic CDOs being done during the period ... snapped up by retirement funds and others that would only deal in triple-A rated (safe) instruments.
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
https://www.bloomberg.com/news/articles/2008-10-27/evil-wall-street-exports-boomed-with-fools-born-to-buy-debt

The Man Who Beat The Shorts
http://www.forbes.com/forbes/2008/1117/114.html

from above:
Watsa's only sin was in being a little too early with his prediction that the era of credit expansion would end badly. This is what he said in Fairfax's 2003 annual report: "It seems to us that securitization eliminates the incentive for the originator of [a] loan to be credit sensitive. Prior to securitization, the dealer would be very concerned about who was given credit to buy an automobile. With securitization, the dealer (almost) does not care."

... snip ...

Bernanke Says Crisis Damage Likely to Be Long-Lasting
http://www.bloomberg.com/apps/news?pid=20601087&sid=arpJXeelvfY4&refer=home

from above (something of an understatement):
Bernanke said the packaging and sale of mortgages into securities "appears to have been one source of the decline in underwriting standards" because originators have less stake in the risk of a loan.

... snip ...

Sarbanes-Oxley supposedly had SEC also look into rating agencies ... but there doesn't seem to have been anything but:

Report on the Role and Function of Credit Rating Agencies in the Operation of the Securities Markets; As Required by Section 702(b) of the Sarbanes-Oxley Act of 2002
http://www.sec.gov/news/studies/credratingreport0103.pdf

The funding via the triple-A rated toxic CDOs is analogous to the BROKERS' LOANS, from Glass-Steagall (Pecora) hearings, pg. 7281
BROKERS' LOANS AND INDUSTRIAL DEPRESSION

For the purpose of making it perfectly clear that the present industrial depression was due to the inflation of credit on brokers' loans, as obtained from the Bureau of Research of the Federal Reserve Board, the figures show that the inflation of credit for speculative purposes on stock exchanges were responsible directly for a rise in the average of quotations of the stocks from sixty in 1922 to 225 in 1929 to 35 in 1932 and that the change in the value of such Stocks listed on the New York Stock Exchange went through the same identical changes in almost identical percentages.


... snip ...

the difference is that instead of speculation in the stock market ... it was speculation in the real estate market. speculators could get no-documentation, no-down, 1% interest-only payment ARMs ... planning on flipping before the rates adjusted; with real estate inflation at 20% in some parts of the country ... there could be something like 2000% ROI.

--
virtualization experience starting Jan1968, online at home since Mar1970

Age

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Age
Newsgroups: alt.folklore.computers
Date: Thu, 22 Jul 2010 09:33:21 -0400
Mike Hore <mike_horeREM@OVE.invalid.aapt.net.au> writes:
Hi Lynn - I know they aren't your words - someone could read that paragraph of the article to mean that the Stretch was on the side of simplicity versus the complexity of the IBM 704 -- while Stretch was *enormously* more complex than the 704, and IMHO is still one of the most complex systems ever made!!

re:
https://www.garlic.com/~lynn/2010l.html#34 Age

aka ... John having worked on such systems ... does it imply that he favored such (hardware) complexity, or that he learned something about what it can mean? Early risc/801 work in the 70s had the hardware being significantly simplified ... the simplicity of the hardware being compensated for by much more sophisticated/complex software. misc. past posts mentioning 801/risc
https://www.garlic.com/~lynn/subtopic.html#801

this references thousands of people worked on future system:

The rise and fall of IBM
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm

from above:
This first quiet warning was taken seriously: 2,500 people were mobilised for the FS project. Those in charge had the right to choose people from any IBM units. I was working in Paris when I was picked out of the blue to be sent to New York. Proof of the faith people had in IBM is that I never heard of anyone refusing to move, nor regretting it. However, other quiet warnings were taken less seriously.

... snip ...

thousands of people were involved in FS from all across the corporation ... some number coming out of the experience possibly having learned something about complexity.

also from above:
IBM tried to react by launching a major project called the 'Future System' (FS) in the early 1970's. The idea was to get so far ahead that the competition would never be able to keep up, and to have such a high level of integration that it would be impossible for competitors to follow a compatible niche strategy. However, the project failed because the objectives were too ambitious for the available technology. Many of the ideas that were developed were nevertheless adapted for later generations. Once IBM had acknowledged this failure, it launched its 'box strategy', which called for competitiveness with all the different types of compatible sub-systems. But this proved to be difficult because of IBM's cost structure and its R&D spending, and the strategy only resulted in a partial narrowing of the price gap between IBM and its rivals.

... snip ...

there have been references that if any other vendor had spent that much money on such an unsuccessful effort ... they wouldn't have survived.

The FS effort was motivated by clone controllers. I've mentioned before having worked on a clone controller as an undergraduate in the 60s ... four of us later getting written up as being responsible for the clone controller business ... some past posts
https://www.garlic.com/~lynn/submain.html#360pcm

also, I've mentioned in the past that during the effort, I would periodically make various sarcastic comments about FS ... including analogies with a long running cult film playing in Central sq. (wasn't exactly career enhancing ... also at least one exception to the above article's comment about nobody regretting the move) ... misc. past posts mentioning the future system effort:
https://www.garlic.com/~lynn/submain.html#futuresys

here is another kind of reference to FS hardware complexity
http://www.jfsowa.com/computer/memo125.htm

this has quote from book looking at FS effort ... and the long term effects on the corporation after its failure:
https://www.garlic.com/~lynn/2001f.html#33

--
virtualization experience starting Jan1968, online at home since Mar1970

Who is Really to Blame for the Financial Crisis?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 22 July, 2010
Subject: Who is Really to Blame for the Financial Crisis?
Blog: IBM co/ex workers
re:
https://www.garlic.com/~lynn/2010l.html#38 Who is Really to Blame for the Financial Crisis?

There were a number of articles from 2008 about risk management playing 2nd fiddle to whatever the business managers wanted ... with the risk department being told to do things like adjusting the inputs until the desired output/results came up (aka GIGO ... with the complexity of the risk software obfuscating what was really going on).

Subprime = Triple-A ratings? or 'How to Lie with Statistics' (gone 404 but lives on at the wayback machine)
https://web.archive.org/web/20071111031315/http://www.bloggingstocks.com/2007/07/25/subprime-triple-a-ratings-or-how-to-lie-with-statistics/
The crash of 2008: A mathematician's view
http://www.eurekalert.org/pub_releases/2008-12/w-tco120808.ph
How Wall Street Lied to Its Computers
http://bits.blogs.nytimes.com/2008/09/18/how-wall-streets-quants-lied-to-their-computers/

In 2008 ... there were a number of risk managers coming out saying that a major change needed was to not let the business people stomp all over the risk department.

we had been asked in the late 90s to look at countermeasures to various kinds of CDO fiddling that occurred during the S&L crisis ... things like mortgages with falsified appraisals (their favorite example was a large business complex in Dallas that turned out to be an empty lot).

in this latest round they bypassed all that. There were no-documentation, no-down, 1% interest-only payment ARMs; with no documentation ... there were no falsified documents ... aka they would just pay the rating agencies for the triple-A rating ... and could skip all the other stuff.

early 2008 article from wharton
http://knowledge.wharton.upenn.edu/article.cfm?articleid=1933

that has gone 404 ... or at least behind some sort of registration requirement, but is still available from the wayback machine
https://web.archive.org/web/20080606084328/http://knowledge.wharton.upenn.edu/article.cfm?articleid=1933

it makes reference to an estimate that something like 1,000 executives are responsible for 80% of the mess ... and that it would go a long way to fixing the problems if the gov. could figure out how those executives might lose their jobs.

in the case of the triple-A rated toxic CDOs, there was enormous personal financial motivation for a significant number of people ... fees, commissions, bonuses, etc ... with little personal downside ... more than enough motivation to offset any possible concern about the downside to the institutions, economy, and/or country.

The triple-A rated toxic CDOs involved numerous transactions as they moved thru the infrastructure ... from the original loan/mortgage thru the securitization process, ratings & selling ... with numerous places along the way involving fees, commissions, bonuses, etc. Say it adds up to an aggregate around 20% ... as various people take their piece from the various kinds of transactions along the way ... with the aggregate estimate of $27T in triple-A rated toxic CDOs during the period ... that comes out to a little over $5T (being pocketed in the form of fees, commissions, bonuses).
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
https://www.bloomberg.com/news/articles/2008-10-27/evil-wall-street-exports-boomed-with-fools-born-to-buy-debt

The TARP funds were originally appropriated to buy up these toxic assets ... but the amount appropriated would have hardly made a dent in the problem (and/or prevented things from unraveling). Just the four largest too-big-to-fail institutions were reported to be carrying (off-balance) $5.2T in the troubled assets at ye2008. As a result, radically different approaches had to be used.

The issue for the federal reserve was that its primary leverage is with the regulated depository institutions; the whole securitized-loan and triple-A rated toxic CDO business managed to side-step the regulated environment as the transactions meandered through the economic landscape. Even the $5.2T that eventually found its way to the (off-balance) books of the four largest too-big-to-fail institutions ... got there by way of their unregulated investment banking arms (courtesy of GLBA & the repeal of Glass-Steagall).

Surveying the Wreckage; What can we learn from the top books on the financial crisis?
http://www.city-journal.org/2010/20_3_financial-crisis-books.html

--
virtualization experience starting Jan1968, online at home since Mar1970

History--automated payroll processing by other than a computer?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: History--automated payroll processing by other than a computer?
Newsgroups: alt.folklore.computers
Date: Thu, 22 Jul 2010 10:33:22 -0400
jmfbahciv <See.above@aol.com> writes:
That's an awful CATCH-22! Bit gods require beer to generate the code run on machines. If they don't get beer, they don't generate code. If they don't generate code, the cash registers won't work. If the cash register doesn't work, the bars can't pull beer. If they can't pull beer, the bit gods can't drink.

recent reference to deli moving into strip mall across from bldg. 28 ... and keeping back room for friday afterwork get togethers ... and letting us have pitchers of anchor steam at half price.
https://www.garlic.com/~lynn/2010j.html#76 What is the protocal for GMT offset in SMTP (e-mail) header

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM zEnterprise Announced

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM zEnterprise Announced
Newsgroups: comp.arch
Date: Thu, 22 Jul 2010 11:51:20 -0400
"David L. Craig" <dlc.usa@gmail.com> writes:
The latest iteration of IBM mainframe architecture has finally been announced. This box includes x86 and POWER blades inside the frame with special interconnection infrastructure. For more info visit:

i had done a design for something similar in '85 using blue iliad (the 1st 32bit 801; never finished, was really big and ran hot) and the roman 370 chipset (no x86 at the time) ... long winded old post
https://www.garlic.com/~lynn/2004m.html#17 mainframe and microprocessor

old email mentioning roman
https://www.garlic.com/~lynn/2007c.html#email850712

past posts mentioning 801, risc, iliad, romp, rios, etc
https://www.garlic.com/~lynn/subtopic.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

PROP instead of POPS, PoO, et al

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: PROP instead of POPS, PoO, et al.
Newsgroups: bit.listserv.ibm-main
Date: 22 Jul 2010 13:35:25 -0700
elardus.engelbrecht@SITA.CO.ZA (Elardus Engelbrecht) writes:
From ADSM to TSM RACF to Security Server MVS/XA - MVS/ESA - OS/390 - z/OS etc... (can't remember now what ... )

ADSM goes back to CMSBACK, which I did in the late 70s ... it was distributed internally ... and finally released as Workstation DataSave Facility ... with client front-ends that included support for backing up files to the server backend.

It then became ADSM ... along with the disk division getting renamed ADSTAR and it looking like ADSTAR would be spun off (new management reversed that decision). When the disk division was finally unloaded, ADSM was kept ... but moved into another organization and renamed TSM.

The following lists the original/first "release" as Workstation DataSave Facility (WDSF40 for VM), September 9, 1990:
https://en.wikipedia.org/wiki/IBM_Tivoli_Storage_Manager

recent cmsback/adsm/tsm thread in linkedin group
https://www.garlic.com/~lynn/2010l.html#0
https://www.garlic.com/~lynn/2010l.html#18
https://www.garlic.com/~lynn/2010l.html#19
https://www.garlic.com/~lynn/2010l.html#22

... for other drift, PROP ... long ago & far away, stood for "Programmable OPerator" (later PRogrammed OPerator)

--
virtualization experience starting Jan1968, online at home since Mar1970

PROP instead of POPS, PoO, et al

Refed: **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: PROP instead of POPS, PoO, et al.
Newsgroups: bit.listserv.ibm-main
Date: 23 Jul 2010 05:24:13 -0700
peter.hunkeler@CREDIT-SUISSE.COM (Hunkeler Peter , KIUP 4) writes:
Isn't IBM nice? Being white when delivered, everybody can paint it the way he/she likes it most..

i've done a q&d conversion of the old (internal) greencard ios3270 file to html.
https://www.garlic.com/~lynn/gcard.html

i've tried to approximate the background color of the old fanfold greencard ... currently i'm using #80c080 ... but it lacks the feel. i may have to resort to scanning an old greencard ... and snipping a large blank section as background ... a little like the background for the Col. John Boyd related info (aka I had sponsored Boyd's briefings at IBM):
https://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

PROP instead of POPS, PoO, et al

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: PROP instead of POPS, PoO, et al.
Newsgroups: bit.listserv.ibm-main
Date: 23 Jul 2010 09:17:36 -0700
donbwms@GMAIL.COM (Don Williams) writes:
PCP -> MFT -> MVT -> MVS... IBM sales was changing the name even when it was

pcp, mft/mft-ii, & mvt were all sysgen options for os/360 (as opposed to dos/360)

for 370 virtual memory there was DOS->DOS/VS; MFT->OS/VS1 and MVT->OS/VS2 (and cp67->vm370).

initial OS/VS2 Release 1 was "SVS" ... basically MVT laid out in a (single) 16mbyte virtual address space; with a little bit of logic to handle the virtual memory tables (paging, page faults) and initially CCWTRANS (borrowed from CP67) cobbled into EXCP to handle channel program translation (aka make a copy of the channel program, substituting real addresses for the virtual addresses).
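
a minimal sketch of the CCWTRANS idea ... invented/simplified types and an identity-mapping stand-in for address translation, nothing like the actual CP67/EXCP code; the point is just that the channel deals only in real addresses, so a shadow copy of the channel program is built with the virtual addresses substituted:

#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

/* simplified CCW; the real 360/370 CCW is 8 bytes with a 24-bit address */
struct ccw { uint8_t op; uint32_t addr; uint16_t flags; uint16_t count; };

/* stand-in: a real kernel would fix (pin) the page and walk page tables */
static uint32_t fix_and_translate(uint32_t vaddr) {
    return vaddr;                            /* identity mapping for the demo */
}

static struct ccw *translate_channel_program(const struct ccw *vprog, size_t n) {
    struct ccw *shadow = malloc(n * sizeof *shadow);  /* copy the channel runs */
    if (!shadow) return NULL;
    for (size_t i = 0; i < n; i++) {
        shadow[i] = vprog[i];                          /* copy op, flags, count */
        shadow[i].addr = fix_and_translate(vprog[i].addr); /* virtual -> real */
        /* a real implementation also splits transfers crossing page
           boundaries into data-chained CCWs; omitted in this sketch */
    }
    return shadow;
}

int main(void) {
    struct ccw prog[1] = { { 0x02, 0x00020000u, 0, 80 } };  /* READ, 80 bytes */
    struct ccw *shadow = translate_channel_program(prog, 1);
    if (shadow) { printf("shadow addr 0x%08X\n", shadow[0].addr); free(shadow); }
    return 0;
}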

things then got a little confused with Future System effort interrupting 370 activity ... recent reference discussing some of the issues
http://www.jfsowa.com/computer/memo125.htm

OS/VS2 Release 2 was "MVS" and supposed to be just a temporary stepping stone to OS/VS2 Release 3 ... the "Future System" operating system.

aka FS was going to completely replace 370 ... as different from 360/370 as 360 had been from prior generations. Since FS was going to completely replace 370 ... the 370 product (hardware & software) pipelines were allowed to go dry. When FS was finally killed, there was then a mad rush to get stuff back into the 370 product pipelines ... like the 303x stuff ... recent reference to the 3031 being a 158, the 3032 being a 168, and the 3033 being the 168 wiring spec using faster chips
https://www.garlic.com/~lynn/2010l.html#35 TSSO - Hardcoded Offsets - Etc

misc. past posts mentioning future system
https://www.garlic.com/~lynn/submain.html#futuresys

In parallel with the mad rush to get stuff back into the 370 product pipelines ... there were efforts to start work on compatible follow-on generations to 370. The high-end 370 compatible follow-on eventually was referred to as "811", the codename for the XA architecture documents. The low & mid-range did the (different follow-on) "E" architecture.

The "E" architecture begate VSE ... and the "XA" architecture begate MVS/XA.

i've mentioned before that the POK favorite son operating system managed to convince corporate that it was necessary to kill the vm370 product, shutdown the vm370 development group (in burlington mall) and transfer all the people to POK (or otherwise they would miss their FCS schedule). Endicott managed to save the 370 product mission but essentially had to reconstitute a development group from scratch.

--
virtualization experience starting Jan1968, online at home since Mar1970

Age

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Age
Newsgroups: alt.folklore.computers
Date: Fri, 23 Jul 2010 19:03:31 -0400
"John Varela" <newlamps@verizon.net> writes:
MIT built Whirlwind in the early '50s but didn't award its first Bachelor's degree in CS until 1975.

then by the mid-80s ... there was an enormous influx of students wanting to major in computers. MIT had a study looking at what would happen if they didn't put any controls on it ... nearly all incoming freshmen would be CS ... and there wouldn't be any other kind of classes taught.

non-computer MIT trivia from the 50s:
http://tech.mit.edu/V74/PDF/N2.pdf
http://tech.mit.edu/V74/PDF/N13.pdf
http://tech.mit.edu/V76/PDF/N23.pdf

mentions my wife's dad being in charge of a dept at MIT (after west point, they had sent him to berkeley for an advanced engineering degree).

--
virtualization experience starting Jan1968, online at home since Mar1970

C-I-C-S vs KICKS

Refed: **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: C-I-C-S vs KICKS
Newsgroups: bit.listserv.ibm-main
Date: 24 Jul 2010 06:37:32 -0700
poncelet@BCS.ORG.UK (CM Poncelet) writes:
That is what an ex-IBMer from the old days told me 'CICS' originally stood for - before it was renamed as 'Customer Information Control System' and sold to the rest of the world. I have no supporting evidence apart from this hearsay.

I was an undergraduate at the univ. and responsible for os/360 ... and also did a bunch of cp67 stuff. the univ. library got an ONR grant to do an online catalog ... they used some of the money to buy a 2321 (datacell). that effort also got selected to be beta-test for the CICS product release ... and I got tasked to (also) debug/support CICS.

I was told it was originally developed at some utility customer ... before being selected for release. One of the bugs I remember tracking down turned out to be a BDAM OPEN bug ... it involved some conflict between the BDAM features used in the original implementation and the BDAM features selected by the library.

misc. past posts mentioning CICS (&/or BDAM)
https://www.garlic.com/~lynn/submain.html#cics

doesn't mention any of that here:
http://www-01.ibm.com/support/docview.wss?uid=swg21025234

this has a little more (but can't always trust wiki)
https://en.wikipedia.org/wiki/CICS

from above:
The first CICS product was released in 1968, named Public Utility Customer Information Control System, or PU-CICS. CICS was originally developed to address requirements from the public utility industry, but it became clear immediately that it had applicability to many other industries, so the Public Utility prefix was dropped with the introduction of the first release of the CICS Program Product.

... snip ...

cics specific wiki
http://cicswiki.org/cicswiki1/index.php?title=History

from above:
CICS is born

In 1968, CICS became available as a free, Type II Application Program, with users in every industry category. Transamerica in Los Angeles, Northern Indiana Public Service Company (NIPSCO), Colorado Public Service of Colorado, United Airlines, and many others were early adopters of the software known as CICS. The other accounts which had begun to develop their own approach (Commonwealth Edison, ConEd, etc) continued with their proprietary software.

In 1969, IBM announced Program Products and CICS was no longer a free software offering. This did inhibit sales and customer acceptance. CICS enabled customers, in any industry, to quickly implement their online systems, most of which were inquiry only at that time.


... snip ...

aka 23jun1969 ... "unbundling announcement" ... starting to charge for software, se services, maint. etc. misc. past posts mentioning unbundling
https://www.garlic.com/~lynn/submain.html#unbundle

this use to be an authoritative source for things CICS
http://www.yelavich.com/

but it has gone 404 ... although it still lives on at the wayback machine
https://web.archive.org/web/19990427231345/http://www.yelavich.com/

CICS Reference Information & Trivia
https://web.archive.org/web/20010104201400/www.yelavich.com/5000fram.htm

CICS-Related Announcements, 1968-Present
https://web.archive.org/web/20010709064102/www.yelavich.com/5100cont.htm

from above ... this claims availability mid-69 (which better corresponds to my fading memory) ... as opposed to the above reference of available in 1968
P68-66 4/29/68 Three New Type II Programs on Information Systems to be Available Mid 1969

Overview - Generalized Information System (Basic) - Information Management System/360 - Public Utility Customer Information Control System

Generalized Information System Basic (GIS) - Data set creation, maintenance, retrieval

Information Management System/360 (IMS/360) - Implementation of medium to large data bases - Teleprocessing and conventional batch processing - Operate under MFT-II or MVT - Highlights - Messages to/from remote input/output devices - Aplications scheduled concurrently under unique storage protection key of OS/360 - System provides checkpoint/restart capabilities

Public Utility Customer Information Control System - Planned availability - June 30, 1969 - Overview - Control system structure for installation of electric, gas and telephone information systems - Designed for inquiry and order entry applications - Highlights - Macro instructions for user communication with his input-output devices and terminals - Control programming services reduce programming by the user - Provides multi-programming capabilities - Serviceability of system components to maximize availability - Records system performance statistics - Uses OS/360 services


... snip ...

some other cics historical trivia
https://web.archive.org/web/20010721114117/www.yelavich.com/dfh.htm

from above:
I have been involved with CICS since 1968. I taught Ben Riggins and his (then) small staff, BTAM. Ben Riggins is the father of CICS. He was a CE/FE at one time and switched over to being an SE. He was located in Richmond, Virginia when he came up with the idea for what we know today as CICS. His account at that time was Virginia Electric Power Co (VEPCO).

... snip ...

the above also speculates where the CICS 3-letter prefix "DFH" came from.

--
virtualization experience starting Jan1968, online at home since Mar1970

Who is Really to Blame for the Financial Crisis?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 24 July, 2010
Subject: Who is Really to Blame for the Financial Crisis?
Blog: IBM co/ex workers
re:
https://www.garlic.com/~lynn/2010l.html#38 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#40 Who is Really to Blame for the Financial Crisis?

The New York State Office of the Comptroller did a release on wall street bonuses during the real estate bubble (when they supposedly ran thru something like $27T in triple-A rated toxic CDO transactions).
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
https://www.bloomberg.com/news/articles/2008-10-27/evil-wall-street-exports-boomed-with-fools-born-to-buy-debt
Bonuses spiked some 400% during the bubble ... and then after the financial mess cratered, there has been a lot of activity to avoid having bonuses return to their pre-bubble levels.

The Fed's Too Easy on Wall Street
http://www.businessweek.com/#missing-article

from above:
Here's a staggering figure to contemplate: New York City securities industry firms paid out a total of $137 billion in employee bonuses from 2002 to 2007, according to figures compiled by the New York State Office of the Comptroller. Let's break that down: Wall Street honchos earned a bonus of $9.8 billion in 2002, $15.8 billion in 2003, $18.6 billion in 2004, $25.7 billion in 2005, $33.9 billion in 2006, and $33.2 billion in 2007.

... snip ...

estimating that the total take on $27T in triple-A rated toxic CDO transactions aggregated around 20% or approx. $5T ... there is still a lot out there ... even after taking out the $137B.

after the bubble burst and it all came crashing down ... with business way off ... goldman still paid out over $10B for 2008 ... which was more than the total wall street bonuses before the bubble started to inflate ... from early 2009:

Bailed-Out Banks Dole Out Bonuses; Goldman Sachs, CitiGroup, Others Mum on How They Are Using TARP Cash
http://abcnews.go.com/WN/Business/story?id=6498680&page=1

from above:
Goldman Sachs, which accepted $10 billion in government money, and lost $2.1 billion last quarter, announced Tuesday that it handed out $10.93 billion in benefits, bonuses, and compensation for the year.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

James Gosling

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: James Gosling
Newsgroups: alt.folklore.computers
Date: Sat, 24 Jul 2010 15:44:43 -0400
greymausg writes:
There is a post in one of the tex newsgroups, comp.text.tex purportedly about James Gosling, leaving Oracle, and damning Larry. Is this true?.

Gosling Hints He Left Oracle over Money; His remarks suggest Oracle couldn't pay Gosling enough to stay there
http://java.sys-con.com/node/1365232

... misc other from april

Java founder James Gosling leaves Oracle
http://www.infoworld.com/d/the-industry-standard/java-founder-james-gosling-leaves-oracle-214?source=rss_infoworld_news
Java Founder James Gosling Leaves Oracle
http://news.yahoo.com/s/pcworld/20100410/tc_pcworld/javafounderjamesgoslingleavesoracle
Java founder James Gosling leaves Oracle
http://www.networkworld.com/news/2010/042110-gosling-vows-to-stay-involved.html
Java founder James Gosling leaves Oracle
http://www.computerworld.com/s/article/9175218/Java_founder_James_Gosling_leaves_Oracle
Creator of Java, James Gosling resigns from Oracle
http://www.fiercecio.com/techwatch/story/creator-java-james-gosling-resigns-oracle/2010-04-13
'Father Of Java' Gosling Not Happy, Resigns From Oracle
http://www.crn.com/software/224300028
Though out of Oracle, Gosling vows to stay involved with Java
http://www.computerworld.com/s/article/9175891/Though_out_of_Oracle_Gosling_vows_to_stay_involved_with_Java
Gosling vows to stay involved with Java
http://news.yahoo.com/s/infoworld/20100421/tc_infoworld/121423
Gosling vows to stay involved with Java Application development
http://www.infoworld.com/t/application-development/gosling-vows-stay-involved-java-423
Gosling vows to stay involved with Java
http://www.networkworld.com/news/2010/042710-facebook-seeks-to-meet-with.html

... from may

James Gosling praises Oracle's Java technology updates
http://www.networkworld.com/news/2010/051110-amazon-web-services-sees-infrastructure.html
James Gosling praises Oracle's Java technology updates
http://www.infoworld.com/d/developer-world/james-gosling-praises-oracles-java-technology-updates-603

--
virtualization experience starting Jan1968, online at home since Mar1970

C-I-C-S vs KICKS

Refed: **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: C-I-C-S vs KICKS
Newsgroups: bit.listserv.ibm-main
Date: 24 Jul 2010 20:34:28 -0700
poncelet@BCS.ORG.UK (CM Poncelet) writes:
For what it's worth, the chap who told me that CICS' original name was "Cincinnati Information Control System" also said that "DFH" stood for "Denver Foot Hills"; but no one has ever confirmed this. I once asked Pete Sadler whether he could explain where "DFH" came from (because of IMS's similar "DFS" prefix): he said the prefixes had no particular meaning as far as he knew. So I guess that puts the lid on it. Thanks for all the other info, BTW. Cheers, Chris Poncelet

re:
https://www.garlic.com/~lynn/2010l.html#47 C-I-C-S vs KICKS

the referenced speculation about "DFH" was that some corporate body assigned the 3-letter prefix.

as an aside ... within a couple years of CICS getting "DFH" & IMS getting "DFS" ... in the morph of cp67 to vm370 ... (somebody) gave "DMK" to the cp kernel and "DMS" to cms (cambridge monitor system renamed to conversational monitor system) ... also for no apparent reason.

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Hacking -- Fact or Fiction

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 25 July, 2010
Subject: Mainframe Hacking -- Fact or Fiction
Blog: MainframeZone
re:
https://www.garlic.com/~lynn/2010l.html#28 Mainframe Hacking -- Fact or Fiction
https://www.garlic.com/~lynn/2010l.html#37 Mainframe Hacking -- Fact or Fiction

When I was an undergraduate in the 60s, I got brought in to help get BCS (boeing computer services) going (being one of the first dozen or so BCS employees). I would periodically visit the renton datacenter ... which claimed to have something like $300M in IBM mainframes. It was being duplicated up at the Everett 747 plant ... since an analysis showed that a week's outage would cost the company more than the cost of the datacenter (there is a disaster scenario where Mt. Rainier warms up and the resulting mud slide takes out Renton). Given current solar panel efficiency ... it would seem like a large portion of the pacific northwest would have to be blanketed to obtain sufficient solar power (of course, they have all that hydro-electric power ... which seems to contribute to the new generation of "mega-datacenters" going into the area).

Later I sponsored Col. Boyd's briefings at IBM. One of Boyd's biographies mentions that he did a tour running "spook base" (about the time I was at BCS) ... and that spook base was a "$2.5B windfall" for IBM (does that mean nearly 10 times larger than the renton datacenter?). misc. past posts mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

Age

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Age
Newsgroups: alt.folklore.computers
Date: Mon, 26 Jul 2010 13:22:09 -0400
Eric Chomko <pne.chomko@comcast.net> writes:
When was your father-in-law(?) at West Point? USMA was the first place I ever saw, touched and used a computer.

before WW2 ... in fact he was then at berkeley before WW2. after ww2 he was then posted to nanking as adviser to the general (and he took the family along).
http://digital-library.usma.edu/libmedia/archives/oroc/v1937.pdf

past post mentioning he was awarded set of history books for some distinction
https://www.garlic.com/~lynn/2009m.html#53 August 7, 1944: today is the 65th Anniversary of the Birth of the Computer

recently was at National Archives ... looking up records/reports of the engineer combat group he commanded in ETO (post has quote from one of the reports):
https://www.garlic.com/~lynn/2010i.html#82 Favourite computer history books?

past posts mentioning family going to nanking (family was evacuated on 3hrs notice in army cargo plane to tsing tao ... when nanking was ringed):
https://www.garlic.com/~lynn/2004e.html#19 Message To America's Students: The War, The Draft, Your Future
https://www.garlic.com/~lynn/2005r.html#3 The 8008
https://www.garlic.com/~lynn/2007j.html#86 IBM Unionization
https://www.garlic.com/~lynn/2007j.html#88 IBM Unionization
https://www.garlic.com/~lynn/2007j.html#90 IBM Unionization
https://www.garlic.com/~lynn/2008f.html#58 China overtakes U.S. as top Web market
https://www.garlic.com/~lynn/2009d.html#43 was: Thanks for the SEL32 Reminder, Al!

family lived on uss repose in tsing tao harbor for 3months ... a couple past posts:
https://www.garlic.com/~lynn/2006b.html#27 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006b.html#33 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006c.html#27 Mount DASD as read-only
https://www.garlic.com/~lynn/2006s.html#44 Universal constants
https://www.garlic.com/~lynn/2008f.html#47 WWII
https://www.garlic.com/~lynn/2008f.html#90 WWII supplies

--
virtualization experience starting Jan1968, online at home since Mar1970

Who is Really to Blame for the Financial Crisis?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 26 July, 2010
Subject: Who is Really to Blame for the Financial Crisis?
Blog: IBM co/ex workers
re:
https://www.garlic.com/~lynn/2010l.html#38 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#40 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#48 Who is Really to Blame for the Financial Crisis?

Some $27T in triple-A rated toxic CDOs provided the fuel for the real estate bubble (similar to BROKERS' LOANS fueling the stock market bubble in the 20s); speculators found that no-documentation, no-down, 1% interest-only payment ARMs ... might result in 2000% ROI ... given the real estate inflation in many parts of the country (further fueled by the speculation).
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
https://www.bloomberg.com/news/articles/2008-10-27/evil-wall-street-exports-boomed-with-fools-born-to-buy-debt

builders, mistaking the speculation bubble for real demand, borrowed to build more housing developments. commercial real estate developers, seeing all the housing developments, borrowed to build more strip malls. towns and cities had to float bonds for all the new streets, sewer treatment, water supply, etc ... services for the new housing developments. there was also all of the public involved in the real estate bubble (analogous to the public involvement in the stock market bubble in the 20s).

when the bubble burst ... it wasn't just those holding the real estate (analogous to the 20s bubble) but also those holding the triple-A rated toxic CDOs. The municipal bond market also collapsed (Warren Buffett had to step in and provide municipal bond insurance; how were towns & cities going to pay off all the bonds for the new services w/o the tax revenue from the empty housing developments?). how were the commercial developers going to pay off their loans to the local community banks? how were the housing developers going to pay off their loans? this (real estate) bubble had all kinds of secondary effects spreading out into much of the rest of the economy.

i.e. this is somewhat separate from a few too-big-to-fail institutions having bought up a significant amount of triple-A rated toxic CDOs ... and carrying them off-balance (although a major reason for paying for the triple-A ratings was to open up the retirement and other funds that only deal in triple-A rated, safe investments) ... even tho some part of those institutions were supposedly regulated, safe, depository institutions (courtesy of the Glass-Steagall repeal, their unregulated investment banking arms could put them at risk of failing).

it is like hollywood movie plot about how foreign powers could pillage the country.

--
virtualization experience starting Jan1968, online at home since Mar1970

C-I-C-S vs KICKS

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: C-I-C-S vs KICKS
Newsgroups: bit.listserv.ibm-main
Date: 26 Jul 2010 11:12:52 -0700
pacemainline@GMAIL.COM (Mark Pace) writes:
I had an SE many years ago that did say S-N-A as SNAH. Confused me every time. I've never heard anyone try to say R-J-E as a word. What you you use, reggie?

i've heard lots of SNAH ... don't remember RJE as a word ... but do remember CRJE as a word (aka conversational remote job entry ... cre-jee)

as an undergraduate i had added tty/ascii terminal support to cp67. the original code did automatic terminal identification for 2741 & 1052 and would use the 2702 SAD command to dynamically assign the correct line-scanner to the port (depending on which terminal it decided it was talking to). I tried to add tty/ascii terminal support in a similar manner ... which almost worked. The problem was that while the 2702 allowed the line-scanner to be dynamically set (with the SAD command), the line-speed oscillator was hard-wired (so it wasn't quite possible to use a common pool of lines with a single dial-up number for 2741, 1052 and tty).
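
a minimal sketch of the automatic terminal identification loop ... the functions/constants are invented stand-ins for the SAD command and the probe type-out, not the cp67 source:

#include <stdio.h>

enum scanner { SCAN_2741, SCAN_1052, SCAN_TTY, SCAN_COUNT };

/* stand-in for issuing the 2702 SAD command to attach line scanner s */
static void sad(int port, enum scanner s) { (void)port; (void)s; }

/* stand-in for "did the identification type-out echo cleanly?" ...
   pretend the caller dialed in on a tty for the demo */
static int probe_ok(int port, enum scanner s) {
    (void)port;
    return s == SCAN_TTY;
}

static int identify_terminal(int port) {
    for (int s = 0; s < SCAN_COUNT; s++) {
        sad(port, (enum scanner)s);          /* dynamically switch scanner */
        if (probe_ok(port, (enum scanner)s))
            return s;
    }
    return -1;  /* never clean: e.g. the hard-wired line-speed oscillator
                   doesn't match the terminal's actual speed */
}

int main(void) {
    static const char *names[SCAN_COUNT] = { "2741", "1052", "tty" };
    int t = identify_terminal(1);
    printf("port 1: %s\n", t < 0 ? "unidentified" : names[t]);
    return 0;
}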

This somewhat motivated the univ. to start a clone controller project ... starting out with an Interdata/3 ... reverse engineering the channel interface and building a channel interface board for the Interdata ... and of course one of the implementation features was being able to dynamically determine terminal speed. Later four of us were written up as responsible for the clone controller business ... some past posts
https://www.garlic.com/~lynn/submain.html#360pcm

Later, the clone controller business was a major motivation for the future system effort (and the distraction of future system is credited with allowing clone processors to gain a market foothold):
https://www.garlic.com/~lynn/submain.html#futuresys

In any case, I also hacked HASP for a CRJE implementation ... removed the 2780 code (to reduce code footprint) and replaced it with 2741 & tty terminal support ... along with an editor supporting the CMS editor syntax (it had to be rewritten from scratch since the CMS and HASP environments were/are so different).

misc. past posts mentioning one thing or another about HASP
https://www.garlic.com/~lynn/submain.html#hasp

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Hacking -- Fact or Fiction

Refed: **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 26 July, 2010
Subject: Mainframe Hacking -- Fact or Fiction
Blog: MainframeZone
re:
https://www.garlic.com/~lynn/2010l.html#28 Mainframe Hacking -- Fact or Fiction
https://www.garlic.com/~lynn/2010l.html#37 Mainframe Hacking -- Fact or Fiction
https://www.garlic.com/~lynn/2010l.html#51 Mainframe Hacking -- Fact or Fiction

when we were out marketing our HA/CMP product ... some past posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

.... I had coined the terms geographic survivability and disaster survivability (to add some differentiation from simple disaster/recovery) ... some past posts:
https://www.garlic.com/~lynn/submain.html#available

and I was (also) asked to write a section for the corporate continuous availability strategy document. However, the section got pulled because both Rochester and POK complained (at the time, they didn't have any geographic survivability strategy).

A few years ago, we had some dealings with one of the large financial networks. they were attributing their 100% availability for an extended number of years to:

• IMS hot-standby (triple replicated at geographic distance)
• automated operator

... aka, in prior life, my wife had been con'ed into going to POK to be in charge of loosely-coupled architecture and had done Peer-Coupled Shared Data architecture ... some past posts
https://www.garlic.com/~lynn/submain.html#shareddata

she didn't remain very long in the position because at the time, there was little uptake except for IMS hot-standby (until sysplex) ... and there were lots of skirmishes with the communication business unit regarding the requirement to use SNA for loosely-coupled operation (temporary truces supposedly allowed her to use anything she wanted for loosely-coupled operation within the confines of the datacenter walls).

--
virtualization experience starting Jan1968, online at home since Mar1970

Who is Really to Blame for the Financial Crisis?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 27 July, 2010
Subject: Who is Really to Blame for the Financial Crisis?
Blog: IBM co/ex workers
re:
https://www.garlic.com/~lynn/2010l.html#38 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#40 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#48 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#53 Who is Really to Blame for the Financial Crisis?

Recent report that the financial services industry (wall street, rating agencies etc) is now three times larger (as percent of GDP) compared to before the bubble ... for no apparent additional benefit to the economy/country (in fact, being major player in the bubble ... just the opposite) ... and like the wall street bonuses ... doesn't appear to be returning to pre-bubble levels. The fees, commissions, bonuses, etc ... on $27T in triple-A rated, toxic CDOs would account for much of that increase.
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
https://www.bloomberg.com/news/articles/2008-10-27/evil-wall-street-exports-boomed-with-fools-born-to-buy-debt

--
virtualization experience starting Jan1968, online at home since Mar1970

A mighty fortress is our PKI

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 27 July, 2010
Subject: A mighty fortress is our PKI
MailingList: Cryptography
On 07/27/2010 10:11 AM, Peter Gutmann wrote:
So a general response to the several "well, what would you do?" questions is "I'm not sure, that's why I posted this to the list". For example should an SSL cert be held to higher standards than the server it's hosted on? In other words if it's easier to compromise a CDN host or (far more likely) a web app on it, does it matter if you're using a Sybil cert? I have no idea, and I'm open to arguments for and against.

long ago and far away, we were called in to consult with a small client/server startup that wanted to do payment transactions on their server ... they had also invented this technology called SSL that they wanted to use. As part of applying the technology to the business payment process ... we also had to go around and investigate how some of these new businesses, calling themselves "Certification Authorities", operated. In any case, the result is now sometimes called "electronic commerce".

There were lots of issues with deficiencies and vulnerabilities, resulting in my coining the term merchant comfort certificates ... aka ... as opposed to anything to do with security. Of course, I also suggested that everybody that in any way touched on the certificates or the merchant servers ... needed to have a detailed FBI background check. some past posts
https://www.garlic.com/~lynn/subpubkey.html#sslcert

--
virtualization experience starting Jan1968, online at home since Mar1970

A mighty fortress is our PKI

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 27 July, 2010
Subject: A mighty fortress is our PKI
MailingList: Cryptography
On 07/27/2010 12:09 PM, Pat Farrell wrote:
Most of which we avoided by skipping the cert concept. Still, better technology has nothing to do with business success.

Public Key Crypto without all the cruft of PKI. It's still a good idea.


re:
https://www.garlic.com/~lynn/2010l.html#57 A mighty fortress is our PKI

that became apparent in the use of SSL between all the merchant servers and the payment gateway. misc. past posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

by the time the registration and setup process was completed at both ends ... the certificate was purely an artificial attribute of the crypto library being used. there were other issues with the payment gateway protocol ... i was able to mandate things like mutual authentication ... which didn't exist in the crypto library up to that point ... however the exchange of certificates was so engrained that it wasn't possible to eliminate (even tho all the necessary information already existed at both end-points). past posts referencing ssl certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

the merchant server/browser part ... I could only recommend ... I couldn't mandate.

my analogy is that certificates & PKI are the electronic analog of the letters of credit/introduction from the sailing ship days ... when the relying party had no other recourse for information about the stranger that they were dealing with. This was left over from the dial-up email days of the early 80s (dial-up electronic post-office, exchange email, hangup, and possibly have first-time email from complete stranger).

that design point was quickly vanishing in the 90s with the pervasive growth of the online internet.

I was at the annual ACM sigmod conference in the early 90s ... and in one of the big sessions, somebody asked one of the panelists what all this x.50x gorp was about. Eventually somebody explained that it was a bunch of networking engineers attempting to re-invent 1960s database technologies .... with certificates being armored, stand-alone, stale representations of some information from a database someplace. In the later 90s, certificates attempted to find a place in no-value market niches (aka, situations involving no-value operations that couldn't justify online &/or real-time information) ... although this got into some conflicts ... trying to address a no-value market-niche ... while at the same time claiming high-value, expensive operation.

There were business cases floated to the venture community claiming a $20B certificate market ... i.e. that every person in the country would have a $100/annum certificate ... some predicting that the financial community would underwrite the cost. When that didn't happen, there were other approaches. We had been called in to help wordsmith the cal. state electronic signature legislation ... which was being heavily lobbied by the PKI industry to mandate certificates. some past posts
https://www.garlic.com/~lynn/subpubkey.html#signature

I could argue that rube-goldberg OCSP was a response to an interaction I had with some of the participants ... somebody bemoaning the fact that the financial industry needed to be brought into the 20th century by requiring certificates appended to every financial transaction. I responded that stale, static certificates would be retrenching to before the advent of online, real-time point-of-sale payment transactions ... aka a major step backward, not a step forward.

Besides the appending of a stale, static certificate to every payment transaction being redundant and superfluous ... it also represents enormous overhead bloat. There were some reduced financial, relying-party-only certificates being floated in the mid-90s ... which were still 100 times larger than the typical payment payload size (increasing the size of the payment transaction payload by a factor of 100 for no beneficial purpose). misc. past posts mentioning bloat
https://www.garlic.com/~lynn/subpubkey.html#bloat

The X9 financial standard group ... had some participants recognizing the enormous overhead bloat certificates represented in payments ... and started a compressed certificate standards activity ... possibly looking to reduce the 100 times overhead bloat to only 5-10 times overhead bloat (although still redundant and superfluous). One of their techniques was that all information that was common in every certificate ... could be eliminated. Then all information that the relying party already had could be eliminated. I was able to trivially show that a relying party would have access to every piece of information in a certificate ... and therefore digital certificates could be compressed to zero bytes.

Then rather than arguing whether it was mandated that every payment transaction have an appended certificate ... we could mandate that every payment transaction have a zero-byte appended certificate.
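
a toy python illustration of the compression argument (my construction, not the actual x9 compressed-certificate work): elide the fields common to every certificate, then the fields the relying party already has on file, and nothing is left:

  # toy field-elision "compression": everything common to all certs goes,
  # then everything already on file at the relying party goes
  certificate = {
      "issuer": "some CA", "algorithm": "rsa", "version": 3,    # same in every cert
      "account": "12345", "public_key": "...", "name": "alice", # on file at the bank
  }

  common_fields = {"issuer", "algorithm", "version"}
  on_file_at_relying_party = {"account", "public_key", "name"}

  compressed = {k: v for k, v in certificate.items()
                if k not in common_fields | on_file_at_relying_party}

  print(compressed, len(compressed))   # -> {} 0 ... the zero-byte certificate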

disclaimer ... eventually had a couple dozen (assigned, retain no interest) patents in the area of certificate-less public key (some showing up long after we were gone) ... past posts mentioning certificate-less public key
https://www.garlic.com/~lynn/subpubkey.html#certless

patent summary here
https://www.garlic.com/~lynn/aadssummary.htm

--
virtualization experience starting Jan1968, online at home since Mar1970

A mighty fortress is our PKI

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 27 July, 2010
Subject: A mighty fortress is our PKI
MailingList: Cryptography
On 07/27/2010 12:09 PM, Pat Farrell wrote:
In that same time, I was at CyberCash, we invented what "is now sometimes called 'electronic commerce'" ... and that and $5 will get you a cup of coffee. We predated SSL by a few years. Used RSA768 to protect DES sessions, etc. Usual stuff.

re:
https://www.garlic.com/~lynn/2010l.html#57 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#58 A mighty fortress is our PKI

somewhat as a result of doing the SSL payment stuff ... in the mid-90s we got invited to be part of the x9a10 financial standard working group ... which had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments. the result was the x9.59 retail payment financial standard ... which was specified in such a way that it would work with any secure authentication (including allowing both certificate & certificate-less modes). x9.59 standards reference:
https://www.garlic.com/~lynn/x959.html#x959

The business process was slightly tweaked so it was no longer necessary to hide the information in a payment transaction to preserve the financial infrastructure integrity. This didn't eliminate skimming, eavesdropping, data breaches ... but it eliminated the ability of attackers to use the information to perform fraudulent transactions (and effectively also eliminates the major use of SSL in the world ... hiding the information in financial transactions).
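
a minimal sketch of the paradigm tweak (my construction, assuming the python `cryptography` package; not the actual x9.59 message format): every transaction must verify against the public key on file for the account, so knowledge of the account/transaction information alone no longer lets an attacker form a valid transaction:

  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
  from cryptography.exceptions import InvalidSignature

  payer_key = Ed25519PrivateKey.generate()
  registered = {"acct-123": payer_key.public_key()}     # on file at the bank

  def authorize(account, txn, signature):
      # the transaction stands or falls on the signature, not on secrecy
      try:
          registered[account].verify(signature, txn)
          return True
      except (KeyError, InvalidSignature):
          return False

  txn = b"acct-123|pay merchant 42.00|seq 7"
  print(authorize("acct-123", txn, payer_key.sign(txn)))    # True
  # an eavesdropper knows the account number and format but not the key:
  forged = b"acct-123|pay crook 9999.00|seq 8"
  print(authorize("acct-123", forged, b"\x00" * 64))        # False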

About the same time the x9a10 standards work was going on ... there were a couple of other payment transaction specification efforts occurring ... which were mandating certificate operation ... somewhat trying to side-step the 100 times payload bloat. they would strip the certificate at the internet gateway ... and forward the transaction thru the standard payment network with a flag turned on indicating that certificate processing had occurred (they could somewhat wave their hands that 100 times payload bloat on the internet was immaterial ... but not so in the real payment network) ... compared to light-weight, certificate-less, super secure x9.59 ... which operated end-to-end. There were later some presentations at ISO standards meetings that transactions were showing up with the "certificate" flag on ... but they could prove no certificate had been involved (i.e. there was a financial interchange fee benefit motivating turning on the flag).

shortly after they had published their (certificate-based) payment specification (but well before any operational code), I did a public-key op profile for their specification. I then got a friend that had an optimized BSAFE library (ran four times faster) to benchmark the profile on lots of different platforms ... and then reported the results to the groups publishing the specification. The response was that my numbers were 100 times too slow (if they had actually run any numbers, their comment should have been that it was four times too fast). Some six months later when they did have pilot code ... my profile numbers were within a couple percent of actual (i.e. the BSAFE library changes had been incorporated into the standard distribution).
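
a minimal sketch of the kind of public-key op profiling described (my own toy harness; uses the python `cryptography` package as a stand-in for BSAFE, and the 2-signs/3-verifies mix is a made-up example):

  import time
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import rsa, padding

  def ops_per_second(fn, seconds=1.0):
      # run fn repeatedly for about `seconds`, return the achieved rate
      n, start = 0, time.perf_counter()
      while time.perf_counter() - start < seconds:
          fn()
          n += 1
      return n / (time.perf_counter() - start)

  key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  msg = b"payment transaction profile"
  sig = key.sign(msg, padding.PKCS1v15(), hashes.SHA256())

  sign_rate = ops_per_second(
      lambda: key.sign(msg, padding.PKCS1v15(), hashes.SHA256()))
  verify_rate = ops_per_second(
      lambda: key.public_key().verify(sig, msg, padding.PKCS1v15(), hashes.SHA256()))

  # e.g. a specification requiring 2 signs + 3 verifies per transaction:
  per_txn = 2 / sign_rate + 3 / verify_rate
  print(f"sign/s {sign_rate:.0f}, verify/s {verify_rate:.0f}, est sec/txn {per_txn:.4f}")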

--
virtualization experience starting Jan1968, online at home since Mar1970

Who is Really to Blame for the Financial Crisis?

Refed: **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 27 July, 2010
Subject: Who is Really to Blame for the Financial Crisis?
Blog: IBM co/ex workers
re:
https://www.garlic.com/~lynn/2010l.html#38 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#40 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#48 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#53 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#56 Who is Really to Blame for the Financial Crisis?

News just had another spin on the too-big-to-fail financial institutions ... apparently some number have been involved in money laundering operations .... the fed gov. stumbled onto it when they followed the money trail used to buy several planes (planes used in drug smuggling). the claim is that the fed gov., instead of prosecuting, signed an agreement with them that the institutions would stop doing it.

The scenario goes that if they were prosecuted ... not only the executives would go to jail ... but also the gov. would have to revoke their bank charters .... which would have significant downside on the fragile economy ... when the feds were already doing everything possible to try and keep the too-big-to-fail institutions from going under.

they also got tarp funds ... but there are also federal reserve "loans" to federally chartered banks .... the too-big-to-fail institutions were getting the funds at effectively zero percent interest from the federal reserve ... and letting their unregulated investment banking arms invest it (before the repeal of Glass-Steagall, about the only use they had for such funds was to actually lend it out). the zero-percent loans for chartered financial institutions were separate from the tarp funds. note that goldman-sachs was given a regulated bank charter (about the same time they got tarp funds) so they could get zero percent money from the federal reserve.

one of the things ... was that they could get zero percent from the federal reserve and then buy treasury bonds ... with whatever the treasury bonds were paying being effectively all profit (the only limit on profit being the amount of money you could get from the federal reserve). It would be very hard for an institution not to make a profit in such an environment.

So another scenario might be having the federal reserve directly provide the treasury a couple trillion dollars ... eliminating the too-big-to-fail institutions sitting in the middle (and crediting the profit on their books).

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Slang terms

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 27 July, 2010
Subject: Mainframe Slang terms
Blog: MainframeZone
HASP predates MVT ... I installed/gened it on Release 11 with MFT ... MVT appeared in Release 12 ... but I don't know anybody actually running it until 13 ... and I continued gen'ing MFT until (combined) Release 15/16. misc. past posts mentioning HASP
https://www.garlic.com/~lynn/submain.html#hasp

I thot Houston had five Mod 75s ... have some memory of NASA reference at spring '68 SHARE meeting in Houston(?)

disclaimer: my wife did a stint in the g'burg JES group (before being con'ed into going to POK to be in charge of loosely-coupled architecture) ... was a catcher for ASP turning into JES3 .... and also co-author of JESUS, the specification that combined the best/required features of JES2 & JES3 (which ran into issues because people formed into two strongly divided camps).

In box someplace, I still have old orange cover HASP song book.

wiki reference to HASP starting out as "Type-III" library.
https://en.wikipedia.org/wiki/Houston_Automated_Spooling_Program

there is also ibmjargon ... ibmjargon mentioned here:
https://en.wikipedia.org/wiki/Mike_Cowlishaw

search engine points to copy here:
http://www.comlay.net/ibmjarg.pdf

I have an early copy that was turned into a special file for the 6670 device driver ... where it would randomly select an entry from the jargon file to print on the output separator page.

One of the entries in the above is an oblique reference to getting blamed for computer conferencing on the internal network in the late 70s and early 80s ... and a resulting article in Datamation.

recent post in another mainframezone discussion on mainframe hacking ... also archived here
https://www.garlic.com/~lynn/2010l.html#51

mentions getting brought in to help get BCS going ... and periodic visits to the renton datacenter ... renton supposedly had $300m in ibm mainframes (about $1.8b in today's dollars). pieces from two or three 360/65s were constantly sitting in halls around the perimeter of the datacenter (waiting to be installed). there was also at least one 360/75 in the room used for classified work ... the perimeter was marked off and patrolled when running classified work ... there was also black opaque cloth that was unrolled to cover the front panel (lights) and the windows on 1403 printers.

above discussion also mentions later having sponsored Col. Boyd's briefings at IBM ... and one of boyd's biographies mentions that he did a tour in command of spook base (about the same time i was at BCS) .... and that spook base was a $2.5B windfall for IBM (about $15B in today's dollars).

univ. had a 709 w/ibsys running jobs sequentially tape->tape (with a 1401 "front-end" handling tape<->unit-record; tapes were manually moved between the 1401 & 709); student fortran jobs were sub-second elapsed time.

univ. replaced the 709/1401 with a 360/67 to run tss/360 ... because of problems/availability of tss/360, the 360/67 spent most of the time running as a 360/65 with os/360. os/360 release 9.5 running student fortran jobs (3-step G compile/link/go) w/o hasp ran minutes+. os/360 was extremely disk intensive, attempting to conserve real storage.

vanilla os/360 mft release 11 with hasp got student job elapsed time under a minute (still up from subsecond on the 709). I did a carefully hand-crafted os/360 release 11 stage-2 sysgen that reordered statements to achieve optimal placement of files and PDS members on disk (to minimize arm seek distances) ... which achieved a 300% thruput improvement (hand-crafted mft release 11 w/hasp compared to vanilla mft release 11 w/hasp).
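
a toy python model of the placement idea (my construction; dataset names and frequencies are made up): put the most frequently referenced files adjacent so probability-weighted arm travel drops:

  from itertools import combinations

  # (dataset, relative access frequency) -- made-up numbers
  datasets = [("SYS1.LINKLIB", 40), ("SYS1.SVCLIB", 25), ("compiler", 20),
              ("SYS1.PROCLIB", 10), ("rarely.used", 5)]

  def expected_seek(order):
      # expected arm travel between two consecutive independent accesses,
      # taking a dataset's position as its index in the placement order
      pos = {name: i for i, (name, _) in enumerate(order)}
      total = sum(f for _, f in order)
      return sum(2 * (fa / total) * (fb / total) * abs(pos[a] - pos[b])
                 for (a, fa), (b, fb) in combinations(order, 2))

  vanilla = sorted(datasets, key=lambda d: d[0])    # arbitrary (alphabetic) order
  tuned = sorted(datasets, key=lambda d: -d[1])     # hottest datasets adjacent

  print(f"vanilla {expected_seek(vanilla):.3f}  tuned {expected_seek(tuned):.3f}")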

jan68, 3 people from the science center installed cp67 at the univ. (as alternative to tss/360). cp67 never achieved standard production use at the univ. ... but i got to play with it a lot on the weekends ... significantly rewriting lots of the code. At the fall '68 SHARE meeting in boston, i gave a presentation on both the os/360 mft w/hasp (by this time release 14) optimization as well as the cp67 rewrite work. There were four cases: vanilla os/360 mft14 (bare hardware), optimized os/360 mft14 (bare hardware), optimized os/360 mft14 on original cp67, and optimized os/360 mft14 on highly optimized cp67.

old posting with piece of that fall '68 SHARE presentation:
https://www.garlic.com/~lynn/94.html#18

for other drift ... recent post mentioning modifying HASP to add 2741 & tty/ascii terminal support along with editor supporting cms edit syntax (rewritten from scratch since cms environment and hasp environment was so different):
https://www.garlic.com/~lynn/2010l.html#54

student job elapsed time never got back to 709 thruput until after the univ. installed WATFOR (later, on os/360 mft14 w/hasp).

--
virtualization experience starting Jan1968, online at home since Mar1970

A mighty fortress is our PKI

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 28 July, 2010
Subject: A mighty fortress is our PKI
MailingList: Cryptography
On 07/28/2010 12:10 AM, Paul Tiemann wrote:
I like the idea of SSL pinning, but could it be improved if statistics were kept long-term (how many times I've visited this site and how many times it's had certificate X, but today it has certificate Y from a different issuer and certificate X wasn't even near its expiration date...)

Another thought: Maybe this has been thought of before, but what about emulating the Sender Policy Framework (SPF) for domains and PKI? Allow each domain to set a DNS TXT record that lists the allowed CA issuers for SSL certificates used on that domain. (Crypto Policy Framework=CPF?)

cpf.digicert.com IN TXT "v=cpf1 /^DigiCert/ -all"

Get the top 5 browsers to support it, and a lot of that "any CA can issue to any domain" risk goes way down.

Thought: Could you even list your own root cert there as an http URL, and get Mozilla to give a nicer treatment to your own root certificate in limited scope (inserted into some kind of limited-trust cert store, valid for your domains only)

Is there a reason that opportunistic crypto (no cert required) hasn't been done for https? Would it give too much confidence to people whose DNS is being spoofed?


re:
https://www.garlic.com/~lynn/2010l.html#57 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#58 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#59 A mighty fortress is our PKI

Part of SSL was a countermeasure to perceived weakness in the domain name infrastructure ... is the server that I think I'm talking to really the server I'm talking to (things like ip-address hijacking). Now Certification Authorities typically aren't the authoritative agency for the information they are certifying ... they ask for a whole bunch of information from an SSL certificate applicant and then perform an expensive, time-consuming, and error-prone identification process, x-checking the supplied information with the information on-file at the domain name infrastructure, as to the true owner of a domain (the same domain name infrastructure that has the weaknesses that SSL is designed as a countermeasure for).

So ... something that could be backed by the Certification Authority industry as part of DNSSEC is to ask that all domain name applicants also register a public key as part of obtaining a domain name. the domain name infrastructure can then require that all subsequent communication be digitally signed ... to be verified with the onfile public key (as a countermeasure to various kinds of domain name hijacking exploits ... hijack a domain and then apply for a valid SSL certificate using a dummy front company that matches the corrupted onfile information). The Certification Authority industry could then take advantage of the same infrastructure and require that all SSL domain name certificate applications also be digitally signed (and verified with the onfile public key at the domain name infrastructure); replacing a time-consuming, expensive, error-prone identification process with an efficient, inexpensive, reliable authentication process.
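
a minimal sketch of that registration flow (my construction, assuming the python `cryptography` package; the registry interface is made up): the key registered along with the domain is used to authenticate later requests, including SSL certificate applications:

  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
  from cryptography.exceptions import InvalidSignature

  class Registry:
      def __init__(self):
          self.onfile = {}                      # domain -> onfile public key

      def register(self, domain, public_key):
          # done once, as part of obtaining the domain name
          self.onfile[domain] = public_key

      def verify(self, domain, request, signature):
          # authentication against the onfile key replaces identification
          try:
              self.onfile[domain].verify(signature, request)
              return True
          except (KeyError, InvalidSignature):
              return False

  registry = Registry()
  owner = Ed25519PrivateKey.generate()
  registry.register("example.com", owner.public_key())

  req = b"SSL certificate application for example.com"
  print(registry.verify("example.com", req, owner.sign(req)))   # True
  # a hijacker without the onfile key can't produce a valid request:
  print(registry.verify("example.com", req, b"\x00" * 64))      # False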

The catch-22 for the industry is if the Certification Authority industry could start doing real-time, online retrieval of public keys for authentication ... then maybe the rest of the world might also ... changing SSL to a certificate-less, real-time, online publickey infrastructure.

One of the possible reasons that it hasn't happened is that there is no startup, venture capital, IPO ... etc, gorp associated with such an incremental enhancement to the existing domain name infrastructure (it is a pure security/integrity play with no big financial motivation for anybody). W/o a startup, venture capital, IPO play ... there is no big marketing budget to blitz the public on how much more comforting things would be (i.e. part of the reason that I coined the term merchant comfort certificates back in the early days). In the late 90s, we got visited by somebody that wanted to explain the downside our comments could have on some pending Certification Authority IPO (much of the internet hype from the period was actually part of the IPO-mill money generating machine).

I've posted frequently in the past about the catch-22 scenario for the certification authority industry.

disclaimer: the inventor of domain name infrastructure did a stint at the science center a decade earlier ... working on various and sundry projects.

--
virtualization experience starting Jan1968, online at home since Mar1970

A mighty fortress is our PKI, Part II

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 28 July, 2010
Subject: A mighty fortress is our PKI, Part II
MailingList: Cryptography
On 07/28/2010 10:05 AM, Perry E. Metzger wrote:
I will point out that many security systems, like Kerberos, DNSSEC and SSH, appear to get along with no conventional notion of revocation at all.

re:
https://www.garlic.com/~lynn/2010l.html#57 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#58 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#59 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#62 A mighty fortress is our PKI

long ago and far away ... one of the tasks we had was to periodically go by project athena to "audit" various activities ... including Kerberos. The original PK-INIT for kerberos was effectively certificate-less public key ... aka replace registering a shared-secret password (for authentication) with a public key. There was then some amount of lobbying by the certification authority interests for pk-init to include a certificate-based mode of operation (I wrote the draft-words for PK-INIT for inclusion of certificate-less ecdsa). misc. past posts mentioning certificate-less publickey KERBEROS
https://www.garlic.com/~lynn/subpubkey.html#kerberos

An issue with Kerberos (as well as RADIUS ... another major authentication mechanism) ... is that account-based operation is integral to its operation ... unless one is willing to go to a strictly certificate-only mode ... where all information about an individual's authority and access privileges is also carried in the certificate (and the account records are eliminated totally). misc. past posts mentioning certificate-less publickey RADIUS
https://www.garlic.com/~lynn/subpubkey.html#radius

As long as the account record has to be accessed as part of the process ... the certificate remains purely redundant and superfluous (in fact, some number of operations running large Kerberos based infrastructure have come to realize that they have large redundant administrative activity maintaining both the account-based information as well as the duplicate PKI certificate-based information).

The account-based operations have a sense of revocation by updating the account-based records. This can be done in real-time and at much finer levels of granularity than the primitive, brute-force (PKI) revocation (and replacement). For instance, have you gone over your outstanding balance or credit-limit? ... are you up-to-date with your ISP account? ... or should it just be temporarily suspended pending receipt of funds. Account records can carry other kinds of real-time information ... like whether currently logged on ... and whether duplicate, simultaneous logons should be prevented (difficult to achieve with redundant and superfluous, stale, static certificates).
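
a small python sketch (my construction) of the finer-grain, real-time decisions an account record supports ... none of which fit in a stale, static certificate:

  from dataclasses import dataclass

  @dataclass
  class Account:
      balance: float = 0.0
      credit_limit: float = 0.0
      suspended: bool = False      # e.g. temporarily, pending receipt of funds
      logged_on: bool = False      # block duplicate simultaneous logons

  def authorize(acct, amount):
      if acct.suspended:
          return False             # temporary suspension, nothing reissued
      if acct.balance + amount > acct.credit_limit:
          return False             # over the credit limit, right now
      return True

  def logon(acct):
      if acct.suspended or acct.logged_on:
          return False
      acct.logged_on = True        # real-time state a certificate can't carry
      return True

  acct = Account(balance=450.0, credit_limit=500.0)
  print(authorize(acct, 75.0))     # False: would exceed the limit
  print(logon(acct), logon(acct))  # True False: duplicate logon refused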

The higher-value operations tend to be able to justify the real-time, higher quality, and finer grain information provided by an account-based infrastructure ... and as internet and technology has reduced the costs and increased pervasiveness of such operations ... it pushes PKI, certificate-based mode of operation further and further into no-value market niches.

--
virtualization experience starting Jan1968, online at home since Mar1970

A mighty fortress is our PKI, Part II

Refed: **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 28 July, 2010
Subject: A mighty fortress is our PKI, Part II
MailingList: Cryptography
On 07/28/2010 11:05 AM, Nicolas Williams wrote:
Are you arguing for Kerberos for Internet-scale deployment? Or simply for PKI with rp-only certs and OCSP? Or other "federated" authentication mechanism? Or all of the above?

re:
https://www.garlic.com/~lynn/2010l.html#57 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#58 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#59 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#62 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#63 A mighty fortress is our PKI, Part II

as i've mentioned ... the relying-party-only certificates are almost always redundant and superfluous ... except in cases where the relying party can't justify their own repository of information and/or distributed access to such a repository of information.

I previously mentioned that in the payment transaction case, even a relying-party-only certificate was a factor of 100-times payload size bloat for typical payment transactions ... aka not only was the certificate redundant and superfluous ... but it represented an enormous (redundant and superfluous) processing burden.

I've mentioned a number of times that OCSP appeared after I had repeatedly ridiculed the revocation process as an archaic, backwards step for real-time payment processes. And even OCSP (with a certificate) is still redundant and superfluous when a real-time transaction is being performed using the "real" information.

the other scenario for rpo-certs ... besides no-value operations ... is when the real infrastructure is down and/or not accessible. But that is usually a matter of cost also; some of the higher-value operations have gone to significant redundancy and claim 100% availability. The certificate analogy is still the letters of credit/introduction from sailing ship days ... when the relying-party had no (other) access to information for a first time interaction with a complete stranger (and had to fall back to much cruder and lower quality information).

There is also some scenario where if the repository and the service are co-located ... then when the repository is unavailable the service will also be unavailable ... so there is no requirement for an independent source of information.

The catch-22 for certification authority operation ... is that as they move further & further into the no-value market niches (and/or market niches that can't justify the expense of higher quality operation with real-time repository) ... they are forced to cut their fees and indirectly the quality of their operation.

--
virtualization experience starting Jan1968, online at home since Mar1970

the Federal Reserve, was Re: Snow White and the Seven Dwarfs

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: the Federal Reserve, was Re: Snow White and the Seven Dwarfs
Newsgroups: alt.folklore.computers
Date: Wed, 28 Jul 2010 11:15:00 -0400
jmfbahciv <See.above@aol.com> writes:
It is worse than that. Congress just passed a bill which is a blank check for anonymous "regulators". Who are these people? How many of them are the 1000 members which Lynn refers to as the original mess makers?

i think the wharton reference to 1000 ... mostly are those that use money to heavily influence who is selected as regulators and what they are allowed to do (things like decisions about what is to be carried off-balance and therefor are not subject to regulation). a couple recent posts mentioning the wharton article that estimated 1000 are responsible for 80% of the financial mess.
https://www.garlic.com/~lynn/2010c.html#32 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010f.html#54 The 2010 Census
https://www.garlic.com/~lynn/2010h.html#22 In the News: SEC storms the 'Castle'
https://www.garlic.com/~lynn/2010l.html#40 Who is Really to Blame for the Financial Crisis?

there were some number of things in the original basel-II draft that disappeared during the review process (in large part influenced by US financial institutions; european and others were initially in favor). Some number re-appeared in the basel-III draft ... but will possibly disappear again. basel (aka BIS; bank for international settlements)
http://www.bis.org/

there is news item from yesterday about how fed is dealing with some of the too-big-to-fail financial institutions involved in (illegal drug) money laundering. if prosecuted the executives would go to jail and the bank charter would have to be revoked. since the feds are doing everything possible to keep those institutions from going under ... supposedly they just asked the institutions to sign something promising to stop the money laundering. the story has the feds tripping over the too-big-to-fail institutions involvement when they followed the money trail from purchase of airplanes used in illegal drug smuggling.

--
virtualization experience starting Jan1968, online at home since Mar1970

the Federal Reserve, was Re: Snow White and the Seven Dwarfs

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: the Federal Reserve, was Re: Snow White and the Seven Dwarfs
Newsgroups: alt.folklore.computers
Date: Thu, 29 Jul 2010 08:26:50 -0400
sidd <sidd@situ.com> writes:
Basel has been castrated by the usual suspects. For example:

"The Basel package also includes a proposal to ensure a bank has enough long-term liquidity, known as the net stable funding ratio, which has now been diluted so that it won't take full effect until the start of 2018, five years later than expected."

http://www.reuters.com/article/idUSTRE66Q4QN20100727


re:
https://www.garlic.com/~lynn/2010l.html#65 the Federal Reserve, was Re: Snow White and the Seven Dwarfs

and ...

Required Intellectual Capital
http://baselinescenario.com/2010/07/29/required-intellectual-capital/

from above:
But Basel has come under great pressure from the banking lobby, which argues that any increase in capital requirements would limit lending and slow global growth (see this useful background by Doug Elliott). The Institute of International Finance (IIF) -- a lobby group for big banks -- issued an influential 'report' along these lines and the European stress test results strongly suggest that Euroland politicians do not want to press more capital into their financial system -- 'just enough' would be fine with them.

... snip ...

part of the scenario ... was that with repeal of Glass-Steagall ... and being able to have unregulated investment banking arms ... banks could do their own investing ... instead of lending. one of the reform issues is whether they have to shutdown their proprietary trading desks.

The Volcker Principles Move Closer To Practice
http://baselinescenario.com/2010/03/10/the-volcker-principles-move-closer-to-practice/
At Banks, Redefining Proprietary Trading?
http://dealbook.blogs.nytimes.com/2010/07/06/at-banks-redefining-proprietary-trading/
Wall Street Reform and Consumer Protection Act: Who gets reformed and who gets protected?
http://prairieweather.typepad.com/the_scribe/2010/07/wall-street-reform-and-consumer-protection-act-who-gets-reformed-and-who-gets-protected.html
The Volcker Rule
http://www.newyorker.com/reporting/2010/07/26/100726fa_fact_cassidy
Congress Passes Sweeping Financial Reforms
http://globaleconomy.foreignpolicyblogs.com/tag/proprietary-trading/

why lend when you can get zero percent money from the fed and do your own investing ... rather than lending it out.

--
virtualization experience starting Jan1968, online at home since Mar1970

A mighty fortress is our PKI, Part II

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 29 July, 2010
Subject: A mighty fortress is our PKI, Part II
MailingList: Cryptography
On 07/28/2010 11:52 PM, Pat Farrell wrote:
I'd like to build on this and make a more fundamental change. The concept of a revocation cert/message was based on the standard practices for things like stolen credit cards in the early 1990s. At the time, the credit card companies published telephone book sized listings of stolen and canceled credit cards. Merchant's had the choice of looking up each card, or accepting a potential for loss.

A lot of the smart card development in the mid-90s and beyond was based on the idea that the smart card, in itself, was the sole authorization token/algorithm/implementation.


re:
https://www.garlic.com/~lynn/2010l.html#57 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#58 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#59 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#62 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#63 A mighty fortress is our PKI, Part II
https://www.garlic.com/~lynn/2010l.html#64 A mighty fortress is our PKI, Part II

that was one of my points ridiculing PKI in the mid-90s ... that the CRL was a return to offline point-of-sale payment operation ... and seemed to motivate the work on OCSP.

The difference was that in the move to real-time online transactions ... it got much higher quality operation ... not only could it establish real-time valid/not-valid ... but also other real-time characteristics like real-time credit limit, recent pattern of transactions, and much more. by comparison, OCSP was an extremely poor man's real-time, online transaction.

smartcard payment cards started out being stand-alone stored-value to compensate for extremely expensive telco and/or limited connectivity availability at the point-of-sale in much of the world ... aka it was a stored-value operation that could be performed purely offline (the incremental cost of the smartcard chip was offset by savings from not requiring a realtime, online transaction).

The telco economics didn't apply to the US ... as seen by the introduction of "stored-value" magstripe based payment cards in the US that did real-time, online transactions ... which served the same market niche that the offline smartcard was serving in other parts of the world. Between the mid-90s and now, telco costs & connectivity have significantly changed around the world ... pervasive ubiquity of the internet, cellphone coverage, wireless, ... lots of things.

The common scenario in the past couple decades ... was looking to add more & more feature/function to smartcards to find the magical economic justification ... unfortunately, the increase in feature/function tended to also drive cost ... keeping the break-even point just out of reach.

Part of the certificate-less public key work was to look at chips as a cost item (rather than profit item ... since lots of the smartcard work was driven by entities looking to profit by smartcard uptake). The challenge was something that had stronger integrity than highest rated smartcard but at effective fully loaded cost below magstripe (i.e. I had joked about taking a $500 milspec part, cost reducing by 3-4 orders of magnitude while improving the integrity). Another criteria was that it had to work within the time & power constraints of a (ISO14443) contactless transit turnstile ... while not sacrificing any integrity & security.

By comparison ... one of the popular payment smartcards from the 90s looked at the transit turnstile issue ... and proposed a "wireless" sleeve for their contact card ... and 15ft electromagnetic "tunnels" on the approach to each transit turnstile ... where public would walk slowly thru the tunnel ... so that the transaction would have completed by the time the turnstile was reached.

Part of achieving lower aggregate cost than magstripe ... was that even after extremely aggressive cost reduction, the unit cost was still 2-3 times that of magstripe ... however, if the issuing frequency could be reduced (for the chip) ... it was more than recouped (i.e. magstripe unit cost is possibly only 1% of fully loaded issuing costs). Changing the paradigm from institutional-centric (i.e. institution issued) to person-centric (i.e. person uses the same unit for multiple purposes and with multiple institutions) ... saves significantly more (replaces an issuing model with a registration model).

Turns out that supposedly a big issue for a transition from an institution-centric (institution issuing) to a person-centric paradigm ... was addressing how the institution can "trust" the unit being registered. Turns out that "trust" issue may have been obfuscation ... after providing a solution to institutional trust ... there was continued big push back to moving off institutional issuing (for less obvious reasons) ... some of the patent stuff (previously mentioned) covered steps for moving to a person-centric paradigm (along with addressing institutional trust issues). Part of it involved tweaking some of the processes ... going all the way back to while the chip was still part of the wafer (in chip manufacturing ... doing the tweaks in such a way that didn't disrupt standard chip manufacturing ... but at the same time reduced steps/costs).

--
virtualization experience starting Jan1968, online at home since Mar1970

Who is Really to Blame for the Financial Crisis?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 29 July, 2010
Subject: Who is Really to Blame for the Financial Crisis?
Blog: IBM co/ex workers
re:
https://www.garlic.com/~lynn/2010l.html#38 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#40 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#48 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#53 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#56 Who is Really to Blame for the Financial Crisis?

from earlier this spring when it still wasn't clear how much would be gutted from financial reform

The Volcker Principles Move Closer To Practice
http://baselinescenario.com/2010/03/10/the-volcker-principles-move-closer-to-practice/

from above:

At the Senate Banking Committee hearing on this issue in early February, John Reed -- former head of Citi -- was adamant that a restriction on proprietary trading not only made sense, but was also long overdue. Gerald Corrigan of Goldman Sachs and Barry Zubrow of JP Morgan Chase expressed strong opposition, which suggests that Paul Volcker is onto something.

...

5) My assessment is that if Goldman were around $100 billion in total assets, that would be a reasonable outcome -- although we still have to worry about what they (or anyone) does in the 'dark markets' of over-the-counter derivatives.

... snip ...

Reed was fairly quickly replaced in the citi takeover ... and it was that takeover that also involved the repeal of Glass-Steagall ... PBS expose ...

The Wall Street Fix
http://www.pbs.org/wgbh/pages/frontline/shows/wallstreet

--
virtualization experience starting Jan1968, online at home since Mar1970

Who is Really to Blame for the Financial Crisis?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 29 July, 2010
Subject: Who is Really to Blame for the Financial Crisis?
Blog: IBM co/ex workers
re:
https://www.garlic.com/~lynn/2010l.html#38 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#40 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#48 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#53 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#56 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#68 Who is Really to Blame for the Financial Crisis?

some good part of the financial mess involves the repeal of Glass-Steagall ... while the actual stuff was funding the unregulated loan originators with toxic CDOs that they paid to get triple-A ratings (the analogy to the 20s mess with BROKERS' LOANS) ... a huge amount of the triple-A rated toxic CDOs eventually showed up off-balance sheet at the too-big-to-fail financial institutions (regulated depository institutions ... but acquired by their unregulated investment banking arms ... courtesy of the repeal of Glass-Steagall).

supposedly congress "owed" wall street the repeal of Glass-Steagall because of the $250M in contributions (fairly evenly divided between the two parties) ... after it initially passed, the president was poised to veto. the bill then had some number of fairly unrelated provisions added, to "buy" the rest of the votes to make it "veto proof" (i.e. went from vote of 54-44 to 90-8)
https://en.wikipedia.org/wiki/Gramm%E2%80%93Leach%E2%80%93Bliley_Act

while the amount of lobbying money by wall street has been significant (some claim $5B aggregate in the past decade), it apparently is general corporate lobbying involving the tax code that earns congress the reputation as the "most corrupt institution on earth"

there were TV news segments on the annual economists conference ... with segments showing up on youtube (where I saw it). one segment was a roundtable of a dozen or so discussing a flat rate tax. justification for the flat rate tax was based on going a long way toward eliminating the enormous amount of associated corruption and improving the overall efficiency of america (although there were snide comments that congress would likely come up with new ways to remain the most corrupt institution on earth).

The scenario was that the current environment has resulted in 65,000 pages of tax code (which helps account for the huge disparity between claims that the country has the highest corporate tax rate ... while at the same time the percent of tax revenues coming from corporations has drastically plummeted). Dealing with the complexity of the 65,000 page tax code supposedly costs the country something like 6% of its GDP. The scenario going to a flat rate tax eliminates the majority of congressional corruption and reduces the tax code to 400-500 pages ... and theoretically improves the country's productivity by the lost 6% of GDP (currently going to dealing with tax code complexity).

the economist round table segment ended on a semi-humorous note: one of the organizations lobbying against a US flat rate tax was Ireland ... because some number of the companies relocating to Ireland gave the complexity of the US tax code as a major motivation.

--
virtualization experience starting Jan1968, online at home since Mar1970

A slight modification of my comments on PKI

Refed: **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 29 July, 2010
Subject: A slight modification of my comments on PKI
MailingList: Cryptography
On 07/28/2010 10:34 PM, dan@geer.org wrote:
The design goal for any security system is that the number of failures is small but non-zero, i.e., N>0. If the number of failures is zero, there is no way to disambiguate good luck from spending too much. Calibration requires differing outcomes. Regulatory compliance, on the other hand, stipulates N==0 failures and is thus neither calibratable nor cost effective. Whether the cure is worse than the disease is an exercise for the reader.

re:
https://www.garlic.com/~lynn/2010l.html#57 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#58 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#59 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#62 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#63 A mighty fortress is our PKI, Part II
https://www.garlic.com/~lynn/2010l.html#64 A mighty fortress is our PKI, Part II
https://www.garlic.com/~lynn/2010l.html#67 A mighty fortress is our PKI, Part II

another design goal for any security system might be security proportional to risk. the major use of SSL in the world today is hiding financial transaction information ... currently mostly credit card transactions. One of the issues is that the value of the transaction information to the merchants (paying for majority of the infrastructure) is the transaction profit ... which can be a dollar or two. The value of the transaction information to the attackers is the associated account limit/balance, which can be several hundred to several thousand dollars. This results in a situation where the attackers can afford to outspend the defenders by 100 times or more.

somewhat because of the work on the current payment transaction infrastructure (involving SSL, by the small client/server startup that had invented SSL), in the mid-90s, we were invited to participate in the x9a10 financial standard working group (which had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments). the result was the x9.59 financial transaction standard. Part of the x9.59 financial transaction standard was slightly tweaking the paradigm and eliminating the value of the transaction information to the attackers ... which also eliminates the major use of SSL in the world today (hiding transaction information). It also eliminates the motivation behind the majority of the skimming and data breaches in the world (attempting to obtain financial transaction information for use in performing fraudulent financial transactions). note the x9.59 didn't do anything to prevent attacks on SSL, skimming attacks, data breaches, etc ... it just eliminated the major criminal financial motivation for such attacks.

--
virtualization experience starting Jan1968, online at home since Mar1970

A slight modification of my comments on PKI

Refed: **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 29 July, 2010
Subject: A slight modification of my comments on PKI.
MailingList: Cryptography
re:
https://www.garlic.com/~lynn/2010l.html#57 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#58 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#59 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#62 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#63 A mighty fortress is our PKI, Part II
https://www.garlic.com/~lynn/2010l.html#64 A mighty fortress is our PKI, Part II
https://www.garlic.com/~lynn/2010l.html#67 A mighty fortress is our PKI, Part II
https://www.garlic.com/~lynn/2010l.html#70 A slight modification of my comments on PKI.

for the fun of it ... from today ...

Twenty-Four More Reasons Not To Trust Your Browser's "Padlock"
http://blogs.forbes.com/firewall/2010/07/29/twenty-four-more-reasons-not-to-trust-your-browsers-padlock/?boxes=Homepagechannels

from above:
On stage at the Black Hat security conference Wednesday, Hansen and Sokol revealed 24 new security issues with SSL and TLS, the digital handshakes that browsers use to assure users they're at a trusted site and that their communication is encrypted against snoops.

... snip ...

adding further fuel to the long-ago motivation that prompted me to coin the term merchant comfort (ssl digital) certificates. misc. past comments
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

... as an aside, we were tangentially involved in the cal. data breach notification legislation. we had been brought in to help wordsmith the cal. electronic signature act ... and some of the participants were heavily involved in privacy issues. They had done in-depth consumer privacy studies and the number one issue that came up was "identity theft", namely the "account fraud" form where criminals use account &/or transaction information (from data breaches) to perform fraudulent financial transactions. It appeared that little or nothing was being done about such data breaches ... and they appeared to believe that the publicity from the data breach notifications would motivate corrective action (and as mentioned in a previous post ... we took a slightly different approach to the problem in the x9.59 financial transaction standard ... eliminating the ability of crooks to use such information for fraudulent transactions).

--
virtualization experience starting Jan1968, online at home since Mar1970

A slight modification of my comments on PKI

Refed: **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 30 July, 2010
Subject: A slight modification of my comments on PKI.
MailingList: Cryptography
On 07/30/2010 03:28 AM, Stephan Neuhaus wrote:
This is exactly what we are trying to do in an EU project in which I'm involved. The project, called MASTER, is more concerned with regulatory compliance than security, even though security of course plays a large role.

re:
https://www.garlic.com/~lynn/2010l.html#70 A slight modification of my comments on PKI
https://www.garlic.com/~lynn/2010l.html#71 A slight modification of my comments on PKI

one of the combination scenarios with x9.59 and a possible transition to person-centric (besides security proportional to risk) ... financial transaction security was end-to-end ... with the security decisions and responsibility resting, in large part, with the two end-points; aka the individual and the individual's financial institution ... which turn out to also have the biggest interest in the security & integrity of the operations ... while security dependencies on the intermediary parties were drastically reduced.
https://www.garlic.com/~lynn/x959.html#x959

x9.59 had to have a certificate-less mode of operation (aka no PKI) ... in order to have both the integrity & strength for the highest valued transactions while at the same time being lightweight enough to operate end-to-end (eliminating the enormous 100 times payload size & processing bloat that PKI/certificate processing added to a standard payment transaction). The combination not only provided the security necessary for the highest valued transactions but was also lightweight enough to work within the power & elapsed time constraints of a transit turnstile.
https://www.garlic.com/~lynn/subpubkey.html#certless

The possible downside was a lot of vested interest & current infrastructure are essentially providing incremental features to compensate for deficiencies in the current paradigm. Eliminating those deficiencies then also obsoletes many existing features (and could have significant downside to some existing vested interests).

--
virtualization experience starting Jan1968, online at home since Mar1970

A mighty fortress is our PKI, Part II

Refed: **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 30 July, 2010
Subject: A mighty fortress is our PKI, Part II
MailingList: Cryptography
On 07/28/2010 11:52 PM, Pat Farrell wrote:
A lot of the smart card development in the mid-90s and beyond was based on the idea that the smart card, in itself, was the sole authorization token/algorithm/implementation.

re:
https://www.garlic.com/~lynn/2010l.html#63 A mighty fortress is our PKI, Part II
https://www.garlic.com/~lynn/2010l.html#64 A mighty fortress is our PKI, Part II
https://www.garlic.com/~lynn/2010l.html#67 A mighty fortress is our PKI, Part II

some ssl, payment, smartcard trivia ...

those smartcards were used for offline authorization (not just authentication) ... which, in at least one major product, led to the YES CARD ... relatively trivial to skim & replicate a static digital certificate for a counterfeit card ... the counterfeit card was then programmed to answer YES to 1) was the correct PIN entered, 2) should the transaction be performed offline, and 3) was the transaction approved. Once the static digital certificate was skimmed, it was no longer even necessary to know the PIN, since the counterfeit card accepted every possible PIN as valid. misc. past posts mentioning YES CARD
https://www.garlic.com/~lynn/subintegrity.html#yescard
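
a toy python model of the offline flow just described (my construction, heavily simplified): the terminal validates the (skimmed, static) certificate and then trusts the card's own answers:

  class YesCard:
      certificate = "<skimmed static certificate>"         # replays as valid
      def correct_pin_entered(self, pin): return True      # any PIN "matches"
      def transaction_offline(self): return True           # never go online
      def transaction_approved(self, amount): return True  # always approve

  def terminal(card, pin, amount, validate_cert):
      if not validate_cert(card.certificate):
          return "declined"
      if not card.correct_pin_entered(pin):
          return "declined"
      if card.transaction_offline() and card.transaction_approved(amount):
          return "approved offline"    # no issuer contact, no countermeasure
      return "go online"

  # the static certificate verifies (it is a copy of a real one), so:
  print(terminal(YesCard(), pin="0000", amount=500, validate_cert=lambda c: True))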

In 2003, at an ATM Integrity task force meeting ... there was a presentation by a LEO explaining the YES CARD ... and how there was little or no countermeasure once a YES CARD was in existence ... somebody in the audience loudly observed that billions were spent proving smartcards are less secure than magstripe. In the YES CARD timeframe there was even a rather large pilot of the cards in the US ... but it seemed to disappear after the YES CARD scenario was publicized (it was actually explained to the people doing the pilot, before the pilot started ... but apparently they didn't appreciate the significance).

much earlier, we had been working on our ha/cmp product and cluster scale-up. we had a cluster scale-up meeting during jan92 sanfran usenix (in ellison's conference room) ... past posts mentioning the jan92 meeting
https://www.garlic.com/~lynn/95.html#13

this was just a few weeks before cluster scale-up was transferred (and announced as a supercomputer for numerical intensive only) and we were told we couldn't work on anything with more than four processors. some old email from the period on cluster scale-up
https://www.garlic.com/~lynn/lhwemail.html#medusa

we then leave a couple months later. two of the other people mentioned in the jan92 meeting also leave and show up at a small client/server startup responsible for something called "commerce server". we get brought in to consult because they want to do payment transactions on the server ... the small client/server startup has also invented some technology called "SSL" they want to use. The result is now frequently called "electronic commerce".

Then apparently because of the work on electronic commerce ... we also get invited to participate in the x9a10 financial standard working group ... which had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments.

About the same time there is a pilot program for magstripe-based online stored-value cards (uses existing POS magstripe terminals but the payment network routes the transactions to a different backend processor; the original program of its kind in the US). At the time, the US didn't have the telco connectivity availability and cost issues that many places in the rest of the world were dealing with ... and therefore didn't have that requirement to move to an offline smartcard payment paradigm. However, it turns out their backend, high-availability, no-single-point-of-failure platform developed a glitch ... and even tho it was from a different vendor (than our ha/cmp product), we were asked to investigate the various failure modes.

Somewhat as a result of all of the above, when one of the major offline, smartcard, european, stored-value payment operators was looking at making an entry into the US in the 90s ... we were asked to design, size, and cost their backend dataprocessing infrastructure. Along the way, we took an indepth look at the business process and cost structure of such payment products. Turns out that the major financial motivation for that generation of smartcard stored-value payment products ... was that the operators got to keep the float on the value resident in the stored-value cards. Not long after ... several of the major european central banks announced that the smartcard, stored-value operators would have to start paying interest on value in the smartcards (eliminating the float financial incentive to those operators). It wasn't too long after that that most of the programs disappeared.

The major difference between that generation of smartcard payment products and the AADS chip strawman ... was that rather than attempting to be a complex, loadable, multi-function issuer card ... the objective was changed to being a person-centric, highest-possible integrity, lowest-possible cost, hard-to-counterfeit authentication token ... which could be registered (publickey) for an arbitrary number of different environments (something you have authentication, registered in a manner analogous to how a something you are biometric might be registered).
https://www.garlic.com/~lynn/x959.html#aads
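
a minimal sketch of the person-centric registration idea (my illustration; verify_sig is a placeholder standing in for real digital signature verification, e.g. ecdsa):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAX_REG 16

struct registration { char relying_party[32]; char pubkey[64]; };

static struct registration registry[MAX_REG];
static int nreg;

/* the same chip's public key may be registered with any number of
 * different environments (analogous to registering a biometric) */
static void register_pubkey(const char *rp, const char *pubkey)
{
    if (nreg < MAX_REG) {
        snprintf(registry[nreg].relying_party,
                 sizeof registry[nreg].relying_party, "%s", rp);
        snprintf(registry[nreg].pubkey,
                 sizeof registry[nreg].pubkey, "%s", pubkey);
        nreg++;
    }
}

/* placeholder only: stands in for a real digital signature check */
static bool verify_sig(const char *pubkey, const char *msg, const char *sig)
{
    return pubkey && msg && sig;
}

/* each relying party authenticates against its own on-file public key */
static bool authenticate(const char *rp, const char *msg, const char *sig)
{
    for (int i = 0; i < nreg; i++)
        if (strcmp(registry[i].relying_party, rp) == 0)
            return verify_sig(registry[i].pubkey, msg, sig);
    return false;    /* not registered: no authentication */
}

int main(void)
{
    register_pubkey("bank", "CHIP-PUBKEY");      /* one chip ...          */
    register_pubkey("employer", "CHIP-PUBKEY");  /* ... many environments */
    printf("bank auth: %d\n", authenticate("bank", "txn", "sig"));
    return 0;
}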

--
virtualization experience starting Jan1968, online at home since Mar1970

CSC History

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 30 July, 2010
Subject: CSC History
Blog: Cambridge Scientific Center Alumni
July 31, 1992 was the last good "buy out" to leave (paid, with leave-of-absence "bridge" to 30 yrs ... as needed). There were others that took it also.

One of the early network applications was joint development with endicott for providing 370 virtual machines in CP67. Multi-level CMS source updates were part of this. The "H" level updates were to the cambridge production CP67 system, providing 370 virtual machines (in addition to 360/67 virtual machines). Then the "I" level updates produced a cp67 that ran on 370 architecture (instead of 360/67 architecture). CP67-I was running regularly in a CP67-H virtual machine ... for a year before the first 370 engineering machine with hardware virtual memory existed (a 370/145 in endicott). In part because the Cambridge system was providing online access to non-employees (mostly students and others from educational institutions in the Boston area), there was a requirement to not let information about unannounced 370 virtual memory leak out. As a result the production cambridge CP67 (w/o the H-updates) ran on the real hardware, then CP67-H ran in a 360/67 virtual machine, with CP67-I running under CP67-H in a 370 virtual machine (with CMS running under that, slow because of the nested virtual machines).

After 370/145 machines with virtual memory support became available internally, two people from San Jose came out to Cambridge and added 2305, 3330, & RPS support to CP67-I ... this was sometimes referred to as CP/SJ (for San Jose). In '92 I had a wing of the old Los Gatos lab ... and had let one of the two people responsible for CP/SJ have one of the offices. On the same date (July 31, 1992) that my wife and I took the early out from IBM ... he also took the early out.

Because of being able to offend lots of people (dating back to days at the Cambridge Science Center), I was repeatedly told I had no career at the company and couldn't expect promotions. For some strange reason, when I got home on the 31st, there was a letter waiting (at home) saying that I was promoted effective the following day (the first day of the leave-of-absence; one of the conditions of the leave-of-absence was also signing something saying I wouldn't come back). another reason for taking the leave-of-absence buyout ... was that I had accumulated a year of vacation time ... besides the "leave" buyout ... I also got paid for the accumulated vacation time (if I had stayed, a new policy would have started making all that accumulated vacation time disappear).

I mentioned before, one of the things we were funding in our HA/CMP product was CLaM (dating back to when it was three people). After the Science Center closed, CLaM moved into its space at 101 Main st. We continued to have numerous meetings at 101 Main st (after CLaM took over the space) ... even after we left in Jul92 ... CLaM hired us on as consultants.

misc. past posts mentioning CSC (at 545tech sq)
https://www.garlic.com/~lynn/subtopic.html#545tech

--
virtualization experience starting Jan1968, online at home since Mar1970

Location of first programmable computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Location of first programmable computer
Newsgroups: alt.folklore.computers
Date: Fri, 30 Jul 2010 15:06:05 -0400
Walter Bushell <proto@panix.com> writes:
But self modifying code is generally considered harmful. The DEC-20 had a concept of pure code segments which could not be modified and hence didn't need to be written out when being swapped out.

Self modifying code is not only too hard to debug, but is also hard to optimize, for example, causing lookahead to fail. Perhaps it has a place on *very* small machines.


original virtual memory architecture for 370 included a segment R/O protection option ... with the hardware preventing stores (code or data). virtual memory wasn't available at initial 370 shipments, and the retrofit of virtual memory hardware to the 370/165 ran into scheduling problems ... and several features were dropped (including segment R/O protection).

vm370/cms had already been re-organized (from cp67/cms) to take advantage of 370 hardware segment protection for segment sharing. when segment protect was dropped, a series of kludges was invented to prevent one process from corrupting segments that were also being used/shared by other processes.

risc with harvard architecture has separate (inconsistent) I & D caches. Some kinds of program loading can require fiddling the program image ... the changes appear in the D-cache ... but not the I-cache. Program loading then needs a feature which forces cache lines to storage (in the case of a store-in cache with changed cache lines) and invalidates any corresponding cache lines in the I-cache (so subsequent I-fetches miss, forcing an I-cache line fetch from storage).
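
on current machines, the same requirement shows up as explicit cache synchronization in program loaders and JITs ... a sketch assuming gcc/clang builtins and posix mmap/mprotect (the image bytes themselves would be architecture-specific machine code, so this is a fragment rather than a complete demo):

#include <stdint.h>
#include <string.h>
#include <sys/mman.h>

typedef int (*fn_t)(void);

int load_and_run(const uint8_t *image, size_t len)
{
    /* writable pages for the loader to fiddle the program image */
    uint8_t *code = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (code == MAP_FAILED)
        return -1;

    memcpy(code, image, len);    /* loader stores go through the D-cache */

    /* force changed lines to storage and invalidate the I-cache range,
     * so subsequent instruction fetches miss and refetch the new bytes */
    __builtin___clear_cache((char *)code, (char *)code + len);

    if (mprotect(code, len, PROT_READ | PROT_EXEC) != 0)
        return -1;

    return ((fn_t)(uintptr_t)code)();   /* I-fetch now sees the new code */
}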

For a long time, allowing the possibility that the current instruction might modify the immediately following instruction was considered to slow down 360/370 by possibly a factor of two (constantly checking that a store wasn't to a location already in the i-fetch/execute process).

There were some corresponding speedup demonstrations of 370 code where self-modifying code wasn't supported (modulo the risc/harvard scenario).

lots of cms-used applications, compilers and conventions were brought over from os/360. I had done a lot of enhancements for a page mapped filesystem and shared segments. One of the features of os/360 conventions was a lot of program image fiddling when it was first loaded. This resulted in all sorts of problems attempting to simply map a process's shared segment to the image in the filesystem ... I had to invent various kinds of kludge work-arounds ... lots of past posts mentioning kludge work-arounds for the problems
https://www.garlic.com/~lynn/submain.html#adcon
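
a simplified illustration of the issue (the structures are assumed purely for illustration ... not actual os/360 or tss/360 formats): os/360-style conventions store absolute address constants (adcons) in the program image, which the loader swizzles for the load address ... dirtying the very pages one would like to map read-only and share; keeping only offsets in the image and combining them with a per-process base at runtime leaves the image untouched:

#include <stdint.h>

struct image { uint32_t entry_adcon; uint32_t data_adcon; };

/* os/360 style: the loader writes absolute addresses into the image
 * itself, so every process ends up with its own modified copy of the
 * affected pages (defeating read-only shared segments) */
void relocate(struct image *img, uint32_t load_addr)
{
    img->entry_adcon += load_addr;    /* store into the program image */
    img->data_adcon  += load_addr;
}

/* tss/360 style: the image holds offsets only; base + offset is
 * computed in registers at runtime, so the image maps unchanged */
static inline uint32_t resolve(uint32_t base, uint32_t offset)
{
    return base + offset;
}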

other posts mentioning page-mapped filesystem work for cms
https://www.garlic.com/~lynn/submain.html#mmap

note that tss/360 ... the official corporate virtual memory operating system for the 360/67 ... did have an executable program image convention that allowed straight-forward mapping to the executable image in the filesystem (w/o the fiddling done in the os/360 paradigm).

however, the tss/360 infrastructure was otherwise extremely heavy-weight and extremely slow. cp67/cms (on 360/67) running 35 users doing fortran program edit, compile and execute (using os/360 fortran g) had better performance, thruput and response than tss/360 on identical hardware with only four users (doing the equivalent operations).

--
virtualization experience starting Jan1968, online at home since Mar1970

History of Hard-coded Offsets

Refed: **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: History of Hard-coded Offsets
Newsgroups: bit.listserv.ibm-main
Date: 30 Jul 2010 16:06:34 -0700
eamacneil@YAHOO.CA (Ted MacNEIL) writes:
I think that was a good thing. I was one of the ones, in Canada, complaining about the constant changes in geometry. 3330->3350->3380->3390 (and don't forget 'compatability' mode.

This impacted productivity, migration, space (at a time it mattered), and storage management in general.


one of the advantages of having supported FBA & 3370 ... where all of that geometry is parameterised ... it also would have eliminated having to do the first CKD emulation on a real FBA device ... aka coming out with the 3375 on 3370 FBA ... in order to provide MVS with a mid-range market disk ... part of attempting to open up some part of the rapidly expanding mid-range market to MVS.
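
roughly the kind of parameterization involved ... a sketch with illustrative constants (not the real 3375/3370 numbers): with CKD, applications and storage management carry per-device geometry like the constants below, and every new device generation (3330->3350->3380->3390) changes them; with FBA, callers only ever see a linear block number:

#include <stdint.h>

#define HEADS_PER_CYL        12    /* illustrative CKD geometry        */
#define FBA_BLOCKS_PER_TRACK 70    /* fixed blocks emulating one track */

/* linear fixed-block address for the start of a CKD track ... the
 * sort of mapping a CKD-on-FBA emulation has to maintain internally */
uint32_t ckd_track_to_fba(uint32_t cyl, uint32_t head)
{
    return (cyl * HEADS_PER_CYL + head) * FBA_BLOCKS_PER_TRACK;
}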

--
virtualization experience starting Jan1968, online at home since Mar1970

Five Theses on Security Protocols

Refed: **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 31 July, 2010
Subject: Five Theses on Security Protocols
MailingList: Cryptography
re:
https://www.garlic.com/~lynn/2010l.html#70 A slight modification of my comments on PKI
https://www.garlic.com/~lynn/2010l.html#72 A slight modification of my comments on PKI

a corollary to security proportional to risk is parametrized risk management ... where a variety of technologies, with varying integrity levels, can co-exist within the same infrastructure/framework. transactions exceeding a particular technology's risk/integrity threshold may still be approved, given various compensating processes are invoked (allowing for multi-decade infrastructure operation w/o traumatic dislocation moving from technology to technology, as well as multi-technology co-existence).
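
a minimal sketch of the idea (my reading of the above; the technologies and thresholds are illustrative assumptions):

#include <stdbool.h>

enum tech { MAGSTRIPE, STATIC_CHIP, DYNAMIC_SIG_CHIP };

/* illustrative per-technology value thresholds, in cents */
static const long threshold[] = {
    [MAGSTRIPE]        =   5000,
    [STATIC_CHIP]      =  50000,
    [DYNAMIC_SIG_CHIP] = 500000,
};

enum decision { APPROVE, APPROVE_WITH_COMPENSATING, DECLINE };

enum decision assess(enum tech t, long amount_cents, bool compensating_ok)
{
    if (amount_cents <= threshold[t])
        return APPROVE;    /* within the technology's integrity level */

    /* over threshold: invoke compensating processes (online checks,
     * additional authentication, manual review) rather than a flat
     * decline ... new technologies co-exist by adding entries */
    return compensating_ok ? APPROVE_WITH_COMPENSATING : DECLINE;
}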

in the past I had brought this up to the people defining (x.509) v3 extensions ... early in their process ... and they offered to let me do the work defining a v3 integrity level field. My response was why bother with stale, static information when real-value operations would use a much more capable dynamic, realtime, online process.

--
virtualization experience starting Jan1968, online at home since Mar1970

Five Theses on Security Protocols

Refed: **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 31 July, 2010
Subject: Five Theses on Security Protocols
MailingList: Cryptography
On 07/31/2010 01:30 PM, Guus Sliepen wrote:
But, if you query an online database, how do you authenticate its answer? If you use a key for that or SSL certificate, I see a chicken-and-egg problem.

re:
https://www.garlic.com/~lynn/2010l.html#58 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#59 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#62 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#63 A mighty fortress is our PKI, Part II
https://www.garlic.com/~lynn/2010l.html#67 A mighty fortress is our PKI, Part II
https://www.garlic.com/~lynn/2010l.html#72 A slight modification of my comments on PKI
https://www.garlic.com/~lynn/2010l.html#77 Five Theses on Security Protocols

Part of what is now referred to as "electronic commerce" is a payment gateway that sits between the internet and the payment networks. this small client/server startup that wanted to do payment transactions, and had invented this technology called SSL, wanted to also use SSL for internet communication between the merchant servers and the payment gateway (as well as between browsers and merchant servers). One of the things that I mandated for the merchant servers & payment gateway was mutual authentication (it wasn't part of the implementation up until then). By the time all the required registration and configuration operations were done for both the merchant servers and the payment gateway ... it was apparent that SSL digital certificates were redundant and superfluous ... purely an artificial side-effect of the software library being used. misc. past posts mentioning the electronic commerce gateway
https://www.garlic.com/~lynn/subnetwork.html#gateway

The existing SSL digital certificates have a chicken-and-egg problem as to the public key trusted repository for the authorized Certification Authorities ... aka it requires a trusted repository of Certification Authority public keys in order to validate acceptable SSL digital certificates (as mentioned elsewhere, the infrastructure is vulnerable since all entries in the trusted repository are treated as equivalent; i.e. it is only as strong as its weakest Certification Authority ... aka the weakest link in the security chain scenario).

If the relying party has its own public key trusted repository and/or has trusted communication to a public key trusted repository, then it can use public keys from the trusted repository. In fact, the whole PKI infrastructure collapses w/o relying parties having a public key trusted repository (for at least the public keys of trusted Certification Authorities).
https://www.garlic.com/~lynn/subpubkey.html#certless

In that sense, PKI is just a restricted, special case of a relying party public key trusted repository ... where the (special case Certification Authority) trusted public keys, in addition to providing "direct" trust, are then used to establish indirect trust for public keys belonging to complete strangers in first-time (no-value) communication.
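
a minimal sketch of that point (illustrative entries; cert_signed_by is a placeholder for real certificate validation) ... the same local repository either yields the counterparty's key directly, or yields a CA key used to establish indirect trust:

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

struct trusted_entry { const char *name; const char *pubkey; bool is_ca; };

/* the relying party's local trusted repository */
static const struct trusted_entry repo[] = {
    { "payment-gateway.example", "KEY-A", false },  /* direct trust   */
    { "SomeCertAuthority",       "KEY-B", true  },  /* indirect trust */
};

/* placeholder only: stands in for real certificate validation */
static bool cert_signed_by(const char *cert, const char *ca_pubkey)
{
    return cert && ca_pubkey;
}

/* is this counterparty trusted -- directly, or via a CA entry? */
bool is_trusted(const char *who, const char *pubkey, const char *cert)
{
    for (size_t i = 0; i < sizeof repo / sizeof repo[0]; i++) {
        if (!repo[i].is_ca && strcmp(repo[i].name, who) == 0)
            return strcmp(repo[i].pubkey, pubkey) == 0;   /* on file */
        if (repo[i].is_ca && cert && cert_signed_by(cert, repo[i].pubkey))
            return true;   /* note: ANY CA entry can vouch for ANY name
                            * ... the weakest-link scenario above */
    }
    return false;
}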

For at least the first decade or so, the major world-wide use of SSL for electronic commerce ... was quite skewed ... with the top 100 or so merchant servers accounting for the majority of all electronic commerce transactions. Collecting and distributing those (few) public keys (in a manner similar to the way that Certification Authority public keys are collected and distributed) would satisfy the majority of all trusted electronic commerce. Then volume starts to drop off quite quickly ... so there are possibly a million or more websites with electronic commerce activity that could possibly justify spending $10 for the highest possible integrity SSL digital certificate.

The SSL Certification Authority operations started out having a severe catch-22. A major objective for SSL was countermeasures to various vulnerabilities in the domain name infrastructure and things like ip-address take-over (MITM-attacks, etc; is the webserver that I think I'm talking to, really the webserver that I'm talking to). Certification Authorities typically require a lot of information from an applicant and then do an error-prone, time-consuming, and expensive identification process attempting to match the supplied information against the on-file information at the domain name infrastructure, as to the true owner of the domain. There have been "domain name take-over" attacks against the domain name infrastructure ... the attacker then could use a front company to apply for an SSL certificate (certification authority shopping ... analogous to the regulator shopping in the news associated with the financial mess). Any issued certificate will be taken as equivalent to the highest quality and most expensive certificate from any other Certification Authority.

So part of some Certification Authority backed integrity improvements to the domain name infrastructure ... is to have domain name owners register a public key with the domain name infrastructure ... and then all future communication is digitally signed (and validated with the certificate-less, onfile public key) ... as a countermeasure to various things like domain name hijacking (also eliminating some of the exploits where the wrong people can get valid SSL certificates).

It turns out, then, that the Certification Authority business could require that SSL digital certificate applications also be digitally signed. The Certification Authority then could do a real-time retrieval of the onfile public key to validate the digital signature (replacing the time-consuming, error-prone, and expensive identification matching process with an efficient, reliable, inexpensive authentication process). The (catch-22) issue for the SSL Certification Authority industry is that if it starts basing its whole SSL digital certificate infrastructure on real-time certificate-less public keys ... the rest of the world might think that was good enough, and start doing the same thing.
https://www.garlic.com/~lynn/subpubkey.html#catch22
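
a sketch contrasting the two application flows (every function here is a hypothetical placeholder, just to make the identification-vs-authentication distinction concrete):

#include <stdbool.h>
#include <string.h>

struct application { const char *domain; const char *info; const char *sig; };

/* placeholders: on-file key retrieval and signature verification */
static const char *registry_onfile_pubkey(const char *domain)
{
    return domain ? "ONFILE-KEY" : NULL;
}
static bool verify_sig(const char *pubkey, const struct application *a)
{
    return pubkey && a && a->sig;
}

/* old way: error-prone, time-consuming, expensive identification ...
 * matching supplied info against domain name infrastructure records */
bool approve_by_identification(const struct application *a)
{
    return a->info && strstr(a->info, "matches on-file records");
}

/* proposed way: efficient, reliable, inexpensive authentication ...
 * real-time retrieval of the on-file public key, then validate the
 * digital signature on the (signed) application */
bool approve_by_authentication(const struct application *a)
{
    const char *key = registry_onfile_pubkey(a->domain);
    return key && verify_sig(key, a);
}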

--
virtualization experience starting Jan1968, online at home since Mar1970

Five Theses on Security Protocols

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 1 Aug, 2010
Subject: Five Theses on Security Protocols
MailingList: Cryptography
On 07/31/2010 08:37 PM, Jeffrey I. Schiller wrote:
In general I agree with you, in particular when the task at hand is authenticating individuals (or more to the point, Joe Sixpack). However the use case of certificates for websites has worked out pretty well (from a purely practical standpoint). The site owner has to protect their key, because as you say, revocation is pretty much non-existent.

re:
https://www.garlic.com/~lynn/2010l.html#77 Five Theses on Security Protocols
https://www.garlic.com/~lynn/2010l.html#78 Five Theses on Security Protocols

The publicity campaign for SSL digital certificates and why consumers should feel good about them was a major reason that, long ago and far away, I coined the term merchant comfort certificates.
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

Part of what was recognized by the x9a10 financial standard working group (and the resulting x9.59 financial standard) was that relying on the merchant (and/or the transaction processor) to provide major integrity protection for financial transactions ... is placing the responsibility on the entities with the least financial interest ... the security proportional to risk scenario (and where the largest percentage of exploits occur in the current infrastructure ... including data breaches)
https://www.garlic.com/~lynn/subintegrity.html#secrets

In the current payment paradigm, the merchant's financial interest in the transaction information is the profit on the transaction ... which can be a couple dollars (and the transaction processor's profit can be a couple cents on the transaction). By comparison (in the current paradigm), the crooks' financial motivation in the transaction information is the account credit limit (or account balance), which can be several hundred to several thousand dollars ... as a result, the crooks attacking the system can frequently afford to outspend the defenders by two orders of magnitude (or more).

The majority of fraud (in the current infrastructure) also contributed to retailers having significant "fraud" surcharges as part of their interchange fees. Past crypto mailing list threads have discussed that financial infrastructures make a significant percent of their profit/bottom-line from these "fraud surcharges" (large US issuing financial institutions having made 40-60% of their bottom line from these fees) ... with the interchange fee "fraud surcharges" for the highest risk transactions being an order-of-magnitude or more larger than for the lowest risk transactions.

The work on the x9.59 financial standard recognized this dichotomy and slightly tweaked the paradigm ... eliminating knowledge of the account number and/or information from previous transactions as a risk. This would significantly decrease the fraud for all x9.59 transactions in the world (i.e. the x9a10 financial standard working group had been given the requirement to preserve the integrity of the financial infrastructure for ALL retail payments; point-of-sale, face-to-face, unattended, internet, debit, credit, stored-value, high-value, low-value, transit turnstile, cardholder-not-present; aka ALL). As a result, it also eliminates the major use of SSL in the world today ... hiding financial transaction information. It also eliminates other kinds of risks from things like data breaches (it didn't eliminate data breaches, but eliminated the motivation behind the majority of breaches in the world today, being able to use the information for fraudulent financial transactions).
https://www.garlic.com/~lynn/subintegrity.html#harvest

The downside is, with the elimination of all that fraud ... it eliminates the majority of the "fraud surcharge" from interchange fees ... and potentially cuts the "interchange fee" bottom line for large issuing institutions from 40-60% to possibly 4-6%. It sort of could be viewed as commoditizing payment transactions.

A decade ago, there were a number of "secure" payment transaction products floated for the internet ... with significant upfront merchant interest ... assuming that the associated transactions would have significantly lower interchange fees (because of the elimination of the "fraud" surcharge). Then things went thru a period of cognitive dissonance when financial institutions tried to explain why these transactions should have a higher interchange fee ... than the highest "fraud surcharge" interchange fees. The severity of the cognitive dissonance between the merchants and the financial institutions over whether "secure" payment transaction products should result in higher fees or lower fees contributed significantly to the products not being deployed.

--
virtualization experience starting Jan1968, online at home since Mar1970

Idiotic programming style edicts

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Idiotic programming style edicts
Newsgroups: alt.folklore.computers
Date: Sun, 01 Aug 2010 10:37:53 -0400
greymausg writes:
There was the story about the World Trade Center computers that were backed up to another computer in the building.

Reminds me of a story in, I think, the Arabian Nights, where some wizard had a store of vital data hidden in a chest, which was hidden in a cave in a remote island..


there was a major disaster/recovery backup datacenter service ... i think on the 5th flr ... that had previously been taken out by the explosion that occurred in the garage in the 90s.

a major east coast datacenter (located across the river) for ATM cash machines (aka lots of institutions have such operations outsourced) then had its roof collapse in a severe snow storm ... and they hadn't gotten around to providing for an alternate backup center. it took a few days before they had an alternate center created and operational.

when we were out marketing our ha/cmp product
https://www.garlic.com/~lynn/subtopic.html#hacmp

I coined the terms "disaster survivability" and "geographic survivability" to differentiate from straight-forward disaster/recovery. During this period we had meetings with some number of institutions in &/or around manhattan ... including some that were in the towers.
https://www.garlic.com/~lynn/submain.html#available

--
virtualization experience starting Jan1968, online at home since Mar1970

A mighty fortress is our PKI

Refed: **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 01 Aug, 2010
Subject: A mighty fortress is our PKI
MailingList: Cryptography
On 07/28/2010 08:55 AM, Anne & Lynn Wheeler wrote:
disclaimer: the inventor of domain name infrastructure did a stint at the science center a decade earlier ... working on various and sundry projects.

re:
https://www.garlic.com/~lynn/2010l.html#62 A mighty fortress is our PKI

other public key & science center trivia; the former RSA CEO was also at the science center ... following is a recent entry from his blog:
http://smartphonestechnologyandbusinessapps.blogspot.com/2010/05/bob-creasy-invented-virtual-machines-on.html

lots of past posts mentioning science center, 4th flr, 545 tech sq
https://www.garlic.com/~lynn/subtopic.html#545tech

a couple old emails from 1981 ... discussing a certificate-less, PGP-like implementation for the internal network
https://www.garlic.com/~lynn/2007d.html#email810506
https://www.garlic.com/~lynn/2006w.html#email810515

... aka the internal network was larger than the arpanet/internet from just about the beginning until sometime late '85 or early '86. one big difference from the arpanet/internet was that the corporation required all links to be encrypted ... and in the mid-80s there was the claim that the internal network had over half of all hardware link encryptors in the world ... the only practical solution at the time. I was running multiple T1 links in the period ... and DES-encryption processing for sustained full-duplex traffic from a single T1 link was more than enough to consume multiple mainframe processors. old email on the subject (regarding doing some benchmarking of DES software encrypt/decrypt)
https://www.garlic.com/~lynn/2006n.html#email841115
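
back-of-envelope arithmetic for that claim (the per-byte software DES cost and per-processor MIPS numbers are assumptions for illustration, not measurements from the referenced benchmark):

#include <stdio.h>

int main(void)
{
    double t1_bits_per_sec = 1.544e6 * 2;   /* T1, full duplex         */
    double bytes_per_sec   = t1_bits_per_sec / 8.0;
    double instr_per_byte  = 100.0;  /* assumed software DES cost      */
    double proc_mips       = 5.0;    /* assumed mid-80s processor MIPS */

    double mips_needed = bytes_per_sec * instr_per_byte / 1e6;
    printf("sustained full-duplex T1 DES: ~%.0f MIPS (~%.1f processors)\n",
           mips_needed, mips_needed / proc_mips);
    return 0;
}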

past posts mentioning internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
virtualization experience starting Jan1968, online at home since Mar1970

Five Theses on Security Protocols

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 01 Aug, 2010
Subject: Re: Five Theses on Security Protocols
MailingList: Cryptography
On 08/01/2010 01:51 PM, Jeffrey I. Schiller wrote:
I remember them well. Indeed these protocols, presumably you are talking about Secure Electronic Transactions (SET), were a major improvement over SSL, but adoption was killed by not only failing the give the merchants a break on the fraud surcharge, but also requiring the merchants to pick up the up-front cost of upgrading all of their systems to use these new protocols. And there was the risk that it would turn off consumers because it required the consumers setup credentials ahead of time. So if a customer arrived at my SET protected store-front, they might not be able to make a purchase if they had not already setup their credentials. Many would just go to a competitor that doesn't require SET rather then establish the credentials.

re:
https://www.garlic.com/~lynn/2010l.html#77 Five Theses on Security Protocols
https://www.garlic.com/~lynn/2010l.html#78 Five Theses on Security Protocols
https://www.garlic.com/~lynn/2010l.html#79 Five Theses on Security Protocols

The SET specification predated these (it was also internet specific, from the mid-90s; it went on concurrently with the x9a10 financial standards work ... which had the requirement to preserve the integrity for ALL retail payments) ... the decade-ago efforts, which came later, were much simpler and more practical ... and tended to be various kinds of something you have authentication. I'm unaware of any publicity and/or knowledge about these payment products (from a decade ago) outside the payment industry and select high volume merchants.

The mid-90s, PKI/certificate-based specifications tended to hide behind a large amount of complexity ... and provided no effective additional benefit over & above SSL (aka with all the additional complexity ... they did little more than hide the transaction during transit on the internet). They also would strip all the PKI gorp off at the Internet boundary (because of the 100 times payload size and processing bloat that the certificate processing represented) and send the transaction thru the payment network with just a flag indicating that certificate processing had occurred (end-to-end security was not feasible). Various past posts mentioning the 100 times payload size and processing bloat that certificates added to typical payment transactions
https://www.garlic.com/~lynn/subpubkey.html#bloat

In the time-frame of some of the pilots, there were then presentations by payment network business people at ISO standards meetings that they were seeing transactions come thru the network with the "certificate processed" flag on ... but they could prove that no certificate processing actually occurred (there was financial motivation to lie, since turning the flag on lowered the interchange fee).

The certificate processing overhead also further increased the merchant processing overhead ... in large part responsible for the low uptake ... even with some benefit of a lowered interchange fee. The associations looked at providing additional incentive (somewhat similar to more recent point-of-sale, hardware token incentives in europe), effectively changing the burden of proof in disputes (rather than the merchant having to prove the consumer was at fault, the consumer would have to prove they weren't at fault; of course this would have met with some difficulty in the US with regard to regulation-E).

Old past thread interchange with members of that specification team, regarding how the specification was (effectively) never intended to do more than hide the transaction during transmission:
https://www.garlic.com/~lynn/aepay7.htm#nonrep5 non-repudiation, was re: crypto flaw in secure mail standards

aka high-overhead and convoluted, complex processing of the specification provided little practical added benefit over and above what was already being provided by SSL.

oblique reference to that specification in a recent post in this mailing list regarding having done both a PKI-operation benchmark profile (using the BSAFE library) as well as a business benefit profile of the specification (when it was initially published ... before any operational pilots):
https://www.garlic.com/~lynn/2010l.html#59 A mighty fortress is our PKI

with regard specifically to the BSAFE processing bloat referenced in the above ... there is folklore that one of the people working on the specification admitted to adding a huge number of additional PKI-operations (and message interchanges) to the specification ... effectively for no other reason than the added complexity and use of PKI-operations.

--
virtualization experience starting Jan1968, online at home since Mar1970

Five Theses on Security Protocols

Refed: **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 01 Aug, 2010
Subject: Re: Five Theses on Security Protocols
MailingList: Cryptography
On 08/01/2010 04:08 PM, Anne & Lynn Wheeler wrote:
Old past thread interchange with members of that specification team regarding the specification was (effectively) never intended to do more than hide the transaction during transmission:
https://www.garlic.com/~lynn/aepay7.htm#norep5 non-repudiation, was re: crypto flaw in secure mail standards


re:
https://www.garlic.com/~lynn/2010l.html#82 Five Theses on Security Protocols

oops, finger-slip ... that should be:
https://www.garlic.com/~lynn/aepay7.htm#nonrep5 non-repudiation, was re: crypto flaw in secure mail standards

my archived post (14July2001) references an earlier thread in the commerce.net hosted, ansi-standard electronic payments list ... the archive has gone 404 ... but lives on at the wayback machine; aka from 1999, regarding what SET intended to address
https://web.archive.org/web/20010725154624/http://lists.commerce.net/archives/ansi-epay/199905/msg00009.html

--
virtualization experience starting Jan1968, online at home since Mar1970

CSC History

Refed: **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 02 Aug, 2010
Subject: CSC History
Blog: Cambridge Scientific Center Alumni
some VNET related trivia ...

I had gotten blamed for computer conferencing on the internal network in the late 70s and early 80s. The folklore was that five of the six members of the executive committee (chairman, ceo, president, etc) wanted me fired. That pretty much (further) ruled out promotions. However, w/o the title (and/or pay), there was some corporate subterfuge that provided me with project funds to almost operate as if I had the top corporate technical position. misc. past posts mentioning the internal network (larger than the arpanet/internet from just about the beginning until possibly late '85 or early '86)
https://www.garlic.com/~lynn/subnetwork.html#internalnet

One of my hobbies in the 80s was a high-speed data transport project ... some past posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

this included working with NSF on what would become the NSFNET backbone. I've commented before that possibly a major reason the NSFNET backbone RFP called for T1 was that I already had T1 (and faster) links internally. For various internal politics, we weren't allowed to bid on the NSFNET RFP ... even after the director of NSF wrote a letter to the company (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) ... which actually just made the internal politics worse (little things like references to what we already had running being at least five years ahead of all NSFNET backbone bid responses to build something new). Note that the winning NSFNET bid didn't actually install T1 links ... they installed 440kbit/sec links ... and then, possibly to look like they were meeting the letter of the RFP, installed T1 trunks and telco multiplexors to run the 440kbit links over (tcp/ip is the technology basis for the modern internet, the NSFNET backbone was the operational basis for the modern internet, and CIX was the business basis for the modern internet). We made some snide remarks that those T1 trunks were in turn multiplexed over T3 trunks (or even T5 trunks) somewhere in the telco network ... so why weren't they "claiming" an (NSFNET) "T5 backbone". misc. old NSFNET related email from the period
https://www.garlic.com/~lynn/lhwemail.html#nsfnet

By the time I was doing HSDT, the person responsible for VNET had already left the company ... but I got him hired onto the HSDT project as a contractor.

Now some of the people that show up as responsible for blocking our being able to bid on the NSFNET backbone RFP ... also show up later in the transferring of cluster scale-up. The issue of being promoted after I had left was possibly part of some machinations with regard to cluster scale-up; since they had boxed themselves in with regard to who was responsible for cluster scale-up ... there was no basis for a non-compete clause (especially in California) ... and w/o the non-compete clause, they were worried that I could go off and do it again for some other vendor (like HP or Sun).

--
virtualization experience starting Jan1968, online at home since Mar1970
