List of Archived Posts

2011 Newsgroup Postings (01/01 - 01/21)

I actually miss working at IBM
Is email dead? What do you think?
No command, and control
Microsoft Wants "Sick" PCs Banned From the Internet
Is email dead? What do you think?
Is email dead? What do you think?
IBM 360 display and Stanford Big Iron
Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows
zLinux OR Linux on zEnterprise Blade Extension???
Typewriter vs. Computer
zLinux OR Linux on zEnterprise Blade Extension???
Typewriter vs. Computer
zLinux OR Linux on zEnterprise Blade Extension???
Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows
IBM Future System
545 Tech Square
Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows, doesn't matter)
Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows, doesn't matter)
IBM Future System
zLinux OR Linux on zEnterprise Blade Extension???
IBM Future System
zLinux OR Linux on zEnterprise Blade Extension???
System R
zLinux OR Linux on zEnterprise Blade Extension???
Julian Assange - Hero or Villain
Julian Assange - Hero or Villain
Julian Assange - Hero or Villain
Searching for John Boyd
Personal histories and IBM computing
Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows
Data Breaches: Stabilize in 2010, But There's an Asterisk
Julian Assange - Hero or Villain
CMS Sort Descending?
IBM Historic Computing
Preliminary test #3
CKD DASD
CKD DASD
CKD DASD
Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows
The FreeWill instruction
Julian Assange - Hero or Villain
Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows
Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows
CKD DASD
CKD DASD
CKD DASD
What do you think about fraud prevention in the governments?
CKD DASD
What do you think about fraud prevention in the governments?
What do you think about fraud prevention in the governments?
What do you think about fraud prevention in the governments?
speculation: z/OS "enhancements"
speculation: z/OS "enhancements"
What do you think about fraud prevention in the governments?
speculation: z/OS "enhancements"
America's Defense Meltdown
Speed of Old Hard Disks
Speed of Old Hard Disks
Speed of Old Hard Disks
Speed of Old Hard Disks
Speed of Old Hard Disks
Speed of Old Hard Disks
SIE - CompArch
Speed of Old Hard Disks
Two terrific writers .. are going to write a book
Speed of Old Hard Disks
Speed of Old Hard Disks
Speed of Old Hard Disks
Speed of Old Hard Disks
Speed of Old Hard Disks
Speed of Old Hard Disks
Speed of Old Hard Disks
Speed of Old Hard Disks
Speed of Old Hard Disks - adcons
shared code, was Speed of Old Hard Disks - adcons
America's Defense Meltdown
Speed of Old Hard Disks - adcons
Today, the IETF Turns 25
subscripting
Speed of Old Hard Disks - adcons
Chinese and Indian Entrepreneurs Are Eating America's Lunch
shared code, was Speed of Old Hard Disks - adcons
Utility of find single set bit instruction?
Today, the IETF Turns 25
The Imaginot Line
Two terrific writers .. are going to write a book
Utility of find single set bit instruction?
Date representations: Y2k revisited
digitize old hardcopy manuals
Make the mainframe work environment fun and intuitive
Two terrific writers .. are going to write a book
Mainframe upgrade done with wire cutters?
HELP: I need a Printer Terminal!
America's Defense Meltdown
The Curly Factor -- Prologue
Mainframe upgrade done with wire cutters?
History of copy on write
History of copy on write
History of copy on write

I actually miss working at IBM

From: lynn@garlic.com (Lynn Wheeler)
Date: 01 Jan, 2011
Subject: I actually miss working at IBM
Blog: Greater IBM
re:
https://www.garlic.com/~lynn/2010o.html#79 I actually miss working at IBM
https://www.garlic.com/~lynn/2010q.html#50 I actually miss working at IBM
https://www.garlic.com/~lynn/2010q.html#52 I actually miss working at IBM
https://www.garlic.com/~lynn/2010q.html#60 I actually miss working at IBM

This is a recent post regarding organization structure in the linkedin Boyd group ... mentioning doing some research at the national archives regarding my wife's father's unit in ww2
https://www.garlic.com/~lynn/2010p.html#10
this references a set of books he was awarded at west point for some distinction
https://www.garlic.com/~lynn/2009m.html#53

he had been given command of the unit in mid-44 and I found the unit's status reports ... the above has a small extract from one. After Europe, he was made an adviser in the east
https://en.wikipedia.org/wiki/Chiang_Kai-shek

and he was allowed to take his family with him to nank(j)ing. My wife has a story about being evacuated on 3hrs notice in an army cargo plane when the city was ringed (arriving at tsingtao airfield after dark, they used truck & auto lights to illuminate the field) ... and then living on the USS Repose in tsingtao harbor for several months (where her sister was born). One post (with some images of envelope postmarks from the Repose)
https://www.garlic.com/~lynn/2006b.html#33
post mentioning the Soong Sisters
https://www.garlic.com/~lynn/2008n.html#32

He also did a stint at MIT ... I even ran into some IBMers who claimed he was their favorite instructor at MIT; he was mentioned in some old mit pubs ...
https://www.garlic.com/~lynn/2010l.html#46

Two sons took paths thru different services ... although one passed thru pendleton ... both went thru devens.

In one of the Boyd group postings on adapting organization infrastructure, I mention visiting a son-in-law, back only 2 weeks after having been a year in fallujah. It sounded like a fire fight every day (lots of surface dings and other stuff, but fortunately nothing really serious) ... he was in a program to help people adjust to a less hostile environment ... it took more than 2 weeks.

The conventional wisdom in silicon valley startups during the 80s and 90s ... was not to hire IBMers because they had large corporation indoctrination ... and it was a significant culture shock to work in a startup. Somewhat unrelated, there is a claim that a common characteristic shared by successful startups during the period was that they had completely changed their business plan at least once in the first two years ... aka adaptable/agile (Boyd) OODA-loop. Recent reference in Boyd discussion
https://www.garlic.com/~lynn/2010o.html#70

I've previously mentioned that the communication group was heavily into preserving the terminal emulation install base. One example was a senior disk engineer getting a talk scheduled at the annual, worldwide, internal communication group conference and opening the talk with the statement that the communication group was going to be responsible for the demise of the disk division (because the communication group could veto/block all the "distributed computing" friendly products the disk division attempted to bring out). Another example was the workstation division being forced to use standard microchannel products, including the microchannel 16mbit T/R product (when the per card thruput of the 16mbit T/R card was less than that of the workstation AT-bus 4mbit T/R card; aka a design point of 300+ stations sharing the same bandwidth for terminal emulation). misc. past posts mentioning terminal emulation
https://www.garlic.com/~lynn/subnetwork.html#emulation
recent post in the "Is email dead?" discussion
https://www.garlic.com/~lynn/2010q.html#48

I've also characterized SAA as attempting to put the client/server genie back into the bottle (as part of preserving the terminal emulation install base). At the time we had come up with a 3tier (and middle layer) networking architecture and were out pitching it to customer executives ... and taking lots of barbs from both the SAA and T/R forces. misc. past posts
https://www.garlic.com/~lynn/subnetwork.html#3tier

part of the 3-tier customer executive pitch (from old post)
https://www.garlic.com/~lynn/99.html#202 Middleware - where did that come from
and
https://www.garlic.com/~lynn/2002q.html#40 ibm time machine

not long after the above pitch from the late '80s, high performance enet cards started appearing for $69 (down from $300, reducing the $/mbit in the above to $481 from $625). 3-tier was also heavily used in an '80s response to a large gov TLA RFI ... and was the only response brought in for followup.

Part of the issue was that twisted pair and CAT4 support was shipping for Ethernet. An example was the new Almaden lab, which had been wired for CAT4 assuming T/R. However, they found that Ethernet would have both better aggregate thruput as well as lower latency (than 16mbit T/R; orthogonal to the per card thruput issue). The T/R forces were releasing "analysis" comparing T/R vis-a-vis Ethernet ... but it appeared that the comparison was using the old 3mbit Ethernet specification (before "listen before transmit"). At about the same time there was a live Ethernet performance measurement report that appeared in the annual ACM SIGCOMM proceedings ... past reference to the article
https://www.garlic.com/~lynn/2000f.html#38 Ethernet efficiency

There were two somewhat separate issues: the technology efficiency of enet vis-a-vis T/R ... and the higher level SNA lacking a network layer.

disclaimer ... my wife is co-inventor on an early (IBM) token-passing patent.

for a little humor ... old email mentioning the MIT lisp machine project requesting 801 processors from the company ... and being offered the 8100 instead:
https://www.garlic.com/~lynn/2003e.html#email790711
in this post
https://www.garlic.com/~lynn/2003e.html#65 801 (was Re: Reviving Multics)

The above post includes numerous other old emails mentioning 801. The post also mentions Bob Evans asking my wife to do a technical audit of the 8100 (not long before it was killed).

--
virtualization experience starting Jan1968, online at home since Mar1970

Is email dead? What do you think?

From: lynn@garlic.com (Lynn Wheeler)
Date: 01 Jan, 2011
Subject: Is email dead? What do you think?
Blog: Greater IBM
re:
https://www.garlic.com/~lynn/2010q.html#39 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#45 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#46 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#48 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#49 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#51 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#54 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#57 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#62 Is email dead? What do you think?

recent post about JES & VM/RSCS driver for 6670
https://www.garlic.com/~lynn/2010h.html#59

which includes reference to work on sherpa/APA6670
https://www.garlic.com/~lynn/2006p.html#email820304
in this post about laser printers
https://www.garlic.com/~lynn/2006p.html#44

old post referencing a 1Apr84 corporate memo on passwords ... printed on corporate letterhead and put on bldg bulletin boards over the weekend
https://www.garlic.com/~lynn/2001d.html#52

the SJR 6670 driver had been enhanced for dual paper drawers ... with the alternate paper drawer loaded with different colored paper and used to print the "separator" page. On the separator page would go randomly selected "sayings" from various sources (ibm jargon, and some others).
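
a minimal sketch of the separator page mechanism (hypothetical python, not the actual driver code; the filename and format are assumptions):

# pick a random "saying" for the separator page (illustrative only)
import random

def separator_page(user, sayings_file="sayings.txt"):
    # one saying/definition per line (ibm jargon, and some others)
    with open(sayings_file) as f:
        sayings = [line.strip() for line in f if line.strip()]
    # the separator gets printed on the colored paper from the alternate drawer
    return f"*** output for {user} ***\n\n{random.choice(sayings)}\n"

print(separator_page("wheeler"))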

At one point we were having some conflict with corporate security auditors that were "visiting" the site (over whether games should be made available). They were doing an off-hrs sweep looking for unsecured classified material. 6670s were out in all the dept areas ... and at one of them they came across some output that had the (random) definition for "auditors". The next day, they complained we were picking on them. reference in this old post with the actual quotation
https://www.garlic.com/~lynn/2005f.html#51
longer winded discussion in this more recent post
https://www.garlic.com/~lynn/2010k.html#49

--
virtualization experience starting Jan1968, online at home since Mar1970

No command, and control

From: lynn@garlic.com (Lynn Wheeler)
Date: 01 Jan, 2011
Subject: No command, and control
Blog: Boyd Strategy
re:
https://www.garlic.com/~lynn/2010p.html#8 No command, and control
https://www.garlic.com/~lynn/2010p.html#66 No command, and control
https://www.garlic.com/~lynn/2010q.html#69 No command, and control

in the (computer) security business there is a periodic distinction made about "snake oil" ... vendors attempting to sell "new" technology ... for the sake of getting your money. such "snake oil" frequently has the characteristic that it doesn't effectively work ... and also tries to lock you into lots of ongoing updates (as a business plan for a recurring revenue stream) ... strongly related to the Success of Failure thread that intertwines thru this.

I've also periodically touched on a similar subject involving financial payment infrastructure ... where financial institutions use the possibility of fraud to increase what is charged to other players on the field (making a substantial profit). technology that would eliminate such fraud ... could have the downside of eliminating nearly half the bottom line for numerous large financial institutions (somewhat the financial industry analogy to the military/industrial gold-plated scenarios).

--
virtualization experience starting Jan1968, online at home since Mar1970

Microsoft Wants "Sick" PCs Banned From the Internet

From: lynn@garlic.com (Lynn Wheeler)
Date: 01 Jan, 2011
Subject: Microsoft Wants "Sick" PCs Banned From the Internet
Blog: Information Security Network
re:
https://www.garlic.com/~lynn/2010q.html#56 Microsoft Wants 'Sick' PCs Banned From The Internet

Sometimes I wonder if such stuff is obfuscation and misdirection. There have been articles that "adult entertainment" is what made both the video business and the internet. In the late 90s, one webhosting company observed that they had ten customers ("adult entertainment" websites) all with higher "hits" per month than the top entry in the "public" list of websites with the highest number of hits (i.e. adult entertainment was doing so much business, they didn't need to participate in such things). They also noted that credit card fraud was almost non-existent for such websites ... while it ran as high as 50% for some of their customer websites selling software & games (a statistically significant difference in the honesty of the respective customers).

--
virtualization experience starting Jan1968, online at home since Mar1970

Is email dead? What do you think?

From: lynn@garlic.com (Lynn Wheeler)
Date: 01 Jan, 2011
Subject: Is email dead? What do you think?
Blog: Greater IBM
VNET/RSCS used the cp spool file system for intermediate storage. The spool file system had a simulated unit record interface that "batched" data into 4k spool/page records for disk read/write. For VNET/RSCS there was also a 4k page read/write diagnose interface. The issue was that the 4k page (disk) I/O operations were "synchronous" ... aka the virtual machine was non-runnable while the read/write I/O was being performed. A heavily loaded spool file system might be doing 30-40 disk i/os across all the virtual machines ... a single virtual machine might get as little as 4-5 4k operations per second (say 20k-30k bytes/sec).

HSDT had multiple full-duplex T1 links (150kbytes/sec in each direction, 300kbytes/sec aggregate for each link). I did both tcp/ip and vnet over HSDT. For VNET/RSCS, I needed a minimum of ten times the thruput of the standard spool file interface ... and in aggregate closer to 100 times the thruput.

So I tweaked the VNET/RSCS code in various and sundry ways to support full-duplex operation, trying to minimize dependence on the spool file interface ... and for some of the satellite T1 & faster links ... added some fancy stuff like "rate-based" pacing to mask the propagation delays.

But for VNET/RSCS HSDT, I still needed 100 times the thruput of the standard cp spool file interface ... so I did a re-implementation of the cp spool function in vs/pascal with an enormous number of enhancements to achieve the 100 times improvement in thruput.
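
back-of-envelope arithmetic for those targets (a sketch using the figures above; the number of links per node is an assumption):

# rough arithmetic behind the 10x/100x thruput requirements
SPOOL_BYTES_SEC = 5 * 4096        # ~4-5 synchronous 4k ops/sec thru the spool interface
T1_FULL_DUPLEX  = 2 * 150_000     # 150kbytes/sec in each direction
LINKS           = 4               # assumption: several T1 links on one VNET/RSCS node

per_link = T1_FULL_DUPLEX / SPOOL_BYTES_SEC
print(f"one full-duplex T1: ~{per_link:.0f}x the spool interface")   # ~15x
print(f"{LINKS} links in aggregate: ~{LINKS * per_link:.0f}x")       # ~59x
# add protocol overhead and traffic bursts and the minimum 10x /
# aggregate ~100x targets fall out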

email from long ago and far away:

Date: 6 March 1987, 10:29:05 PST
To: wheeler
Subject: IPO changes

I would really like to see your spool file enhancements running on all the backbone nodes, but with the way things have been going lately, I'm not sure that it will ever happen. My guess is that the backbone management will resist the changes since they won't want to divert resources from the "strategic" solution of converting the backbones to SNA.

With Pete and Terry on your side, it would be much easier to get your changes installed. But, it looks like they are leaving and Oliver Tavakoli is taking over Pete's job of IPO RSCS development. Maybe you can convince Oliver that your changes are worthwhile and get his help putting them into IPO RSCS. If not, you're probably out of luck.

The last Backbone meeting and the most recent VNET Project Team meeting (was supposed to be this week at Almaden but has been postponed) have had attendance limited to managers only (no technical people). This makes me think that the VNET management don't think that technical problems or solutions are important.


... snip ... top of post, old email index, HSDT email

misc. past posts mentioning HSDT
https://www.garlic.com/~lynn/subnetwork.html#hsdt

long winded old post discussing some of the spool file changes written in vs/pascal:
https://www.garlic.com/~lynn/2005s.html#28

misc. past posts mentioning internal networks
https://www.garlic.com/~lynn/subnetwork.html#internalnet

note ... the internal network NOT being SNA was something of an embarrassment to the communication group. we heard some very interesting feedback about representations made to the executive committee regarding what would happen to the internal network if it wasn't converted to SNA. just one of many (from a couple days before the above email; PROFS is a VTAM application)
https://www.garlic.com/~lynn/2006x.html#email870302

and this post
https://www.garlic.com/~lynn/2006w.html#21
SNA/VTAM for NSFNET with snippets of email (misinformation) pushing sna/vtam for the NSF backbone
https://www.garlic.com/~lynn/2006w.html#email870109

past posts in this thread:
https://www.garlic.com/~lynn/2010q.html#39 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#45 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#46 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#48 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#49 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#51 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#54 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#57 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#62 Is email dead? What do you think?
https://www.garlic.com/~lynn/2011.html#1 Is email dead? What do you think?

--
virtualization experience starting Jan1968, online at home since Mar1970

Is email dead? What do you think?

From: lynn@garlic.com (Lynn Wheeler)
Date: 02 Jan, 2011
Subject: Is email dead? What do you think?
Blog: Greater IBM
we had been called in to help wordsmith the cal. electronic signature law ... a lot of it was around making electronic documents and electronically transmitted (electronic) documents have some legal status. The insurance industry had a lot of interest in being able to have legally binding insurance contracts over the net.

there was also a lot of (special interest) lobbying by various companies to have specific technologies mandated by the electronic signature law ... in some cases the technologies were orthogonal to the legal definition of what constitutes a "human signature" (for contractual purposes).

the internal network had both email and "instant messages". "instant messages" were much more lightweight and much more oriented towards relatively immediate two-way communication (closer to voice calls). email tended to the other end of the spectrum ... towards much higher payload with less immediacy. There was some overlap between the two in the middle. Some of this was analyzed in more detail by the researcher that was paid to sit in the back of my office for nine months ... mentioned upthread ... also here:
https://www.garlic.com/~lynn/2010q.html#49

it was also possible to manipulate "instant messages" under program control ... one such implementation was a multi-user spacewar game. old (long winded) post discussing the programming interface for "instant messages"
https://www.garlic.com/~lynn/2006w.html#16
and
https://www.garlic.com/~lynn/2006k.html#51

old post mentioning multi-user spacewar done in 1980 using "instant messages"
https://www.garlic.com/~lynn/2001h.html#8
and
https://www.garlic.com/~lynn/2010d.html#74

supposedly the multi-user spacewar was to have human players interacting on a 3270 interface. fairly early somebody wrote an automated player program ... which started beating all the human players. the game then had to be adjusted to penalize automated programs that were able to make moves significantly faster than human players.
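
a minimal sketch of that kind of adjustment (hypothetical python; the actual game logic isn't preserved) ... charge extra for moves arriving faster than human reaction time:

# hypothetical sketch: penalize faster-than-human moves
import time

HUMAN_REACTION = 0.25          # assumption: ~250ms minimum human inter-move time

class Player:
    def __init__(self, name):
        self.name = name
        self.last_move = 0.0
        self.energy = 100.0

    def move(self, cost=1.0):
        now = time.time()
        interval = now - self.last_move
        self.last_move = now
        if interval < HUMAN_REACTION:
            # moves faster than a human could make them cost proportionally more
            cost *= HUMAN_REACTION / interval
        self.energy -= cost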

past posts in this thread
https://www.garlic.com/~lynn/2010q.html#39 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#45 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#46 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#48 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#49 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#51 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#54 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#57 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#62 Is email dead? What do you think?
https://www.garlic.com/~lynn/2011.html#1 Is email dead? What do you think?
https://www.garlic.com/~lynn/2011.html#4 Is email dead? What do you think?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360 display and Stanford Big Iron

From: lynn@garlic.com (Lynn Wheeler)
Date: 02 Jan, 2011
Subject: IBM 360 display and Stanford Big Iron
Blog: IBM Historic Computing
IBM 360 display and Stanford Big Iron
http://infolab.stanford.edu/pub/voy/museum/pictures/display/3-1.htm

note some number of univ. got 360/67s for TSS/360 ... and then did something else with them ... Michigan developed the MTS virtual memory system, Stanford developed Orvyl and Wylbur.

Wylbur was originally done as part of Orvyl
http://www.stanford.edu/dept/its/support/wylorv/
and ...
http://www.slac.stanford.edu/spires/explain/manuals/ORVMAN.HTML

SLAC became a vm370 virtual machine installation ... and much later, the SLAC vm system got the first webserver outside CERN
http://www.slac.stanford.edu/history/earlyweb/history.shtml

The simplex 360/67 was more like a 65 ... with virtual memory (associative array/dat box) added. The 360/67 was designed for up to 4-way ... i know of no 4-ways and only a few 3-ways. Most multiprocessors were duplex, which was significantly different from the 360/65 multiprocessor. A 360/65 duplex shared real storage (up to 1mbyte per processor for a max of 2mbytes) ... but channels were dedicated per processor. The 360/67 multiprocessor shared both real storage and channels ... all processors could access all channels. Also, the shared real storage was multi-ported ... a "half-duplex" (a duplex partitioned down to one processor) could have higher thruput than a "simplex" in a heavy I/O environment ... because of reduced memory bus lockout between processor access and I/O access.
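
a toy model of that memory-port effect (my sketch, illustrative numbers only, not measurements):

# toy model: memory bus lockout vs multi-ported storage
def effective_cpu(io_load, ports):
    # io_load: fraction of memory cycles consumed by I/O (0..1)
    if ports == 1:
        return 1.0 - io_load          # single port: CPU loses every cycle I/O takes
    return min(1.0, ports - io_load)  # multi-port: CPU only throttled past saturation

for io in (0.2, 0.4, 0.6):
    print(f"I/O load {io:.0%}: single-ported cpu {effective_cpu(io, 1):.0%}, "
          f"multi-ported cpu {effective_cpu(io, 2):.0%}")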

360/67 functional characteristic manual ... on bitsavers
http://www.bitsavers.org/pdf/ibm/360/functional_characteristics/

has a drawing of the front panel layout on pg. 41 (no obvious difference between the 65 & 67)

Image of the channel controller switches (allowing sharing and/or dedicating memory and channels) ... pg. 32 & 33 (this was a low free-standing box/panel ... slightly resembling the 2914 channel switch). The switches were mapped to control register assignments ... pg. 34 gives the layout. On the standard "duplex" ... the control registers were used just to "sense" the channel controller switch settings. On the 3-way ... it was possible for a processor to change the hardware configuration by changing the values in the control registers.

the stanford web page has a front panel of the faa machine with some rotary switches on the left.

I have vague recollections of seeing a 360/62 product booklet ... showing a 4-way. The original 60, 62 & 70 had "1 microsecond" memory. Somewhere along the way memory was upgraded to 750 nanosecond ... and the models became 65, 67, and 75.

--
virtualization experience starting Jan1968, online at home since Mar1970

Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows
Newsgroups: alt.folklore.computers
Date: Sun, 02 Jan 2011 14:26:25 -0500
"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
:-) At this point I usually switch to the automotive analogies that were making the rounds a while ago. Here's an excerpt:

Assembler: a Formula 1 race car. Very fast, but difficult to drive and expensive to maintain.

Fortran II: a model T Ford. Once it was king of the road.

Fortran IV: a model A Ford.

Fortran 77: a six-cylinder Ford Fairlane with standard transmission and no seat belts.

C: a black Firebird - the all-macho car. Comes with optional fuzzbuster (escape to assembler).


just reading Iconoclast ...
https://www.amazon.com/Iconoclast-Neuroscientist-Reveals-Think-Differently/dp/1422115011

mentions that Ford was working in the electrical industry and the conventional wisdom was that electricity was going to win as the power source. Ford decided to leave electricity and develop a gasoline engine. He got a two cylinder engine developed for the Model A and sold less than 1000 ... enuf to stay in business. The major break came by adopting a new "french" steel that was three times stronger with the addition of vanadium (or only needed 1/3rd the steel and was therefore 3 times lighter):
https://en.wikipedia.org/wiki/Vanadium

Reducing the weight by 2/3rds (with the two cylinder engine) ... made the Model T a significantly more attractive vehicle.
https://en.wikipedia.org/wiki/Model_T

--
virtualization experience starting Jan1968, online at home since Mar1970

zLinux OR Linux on zEnterprise Blade Extension???

From: lynn@garlic.com (Lynn Wheeler)
Date: 03 Jan, 2011
Subject: zLinux OR Linux on zEnterprise Blade Extension???
Blog: MainframeZone
re:
https://www.garlic.com/~lynn/2010q.html#73 zLinux OR Linux on zEnterprise Blade Extension???

sort of contrary ... this really showed up in the 90s ... nearly any level of sophistication can be put into today's processor, and upfront cost goes to zero with large enough volume. The remaining cost is related to the number of pieces, connectors, etc ... desktops tend to have fewer connectors because cost is proportional to pieces. however, server configurations tend to have a much larger number of pieces. recent article from today

Larry Ellison's 10-Point Plan For World Domination
http://www.informationweek.com/news/global-cio/interviews/showArticle.jhtml?articleID=228900228

from above:

1. The Exadata Phenomenon: the power of optimized systems. Noting that in competition for data-warehousing deals "it's not uncommon for our Exadata machine to be 10 times faster than the best of the competition," Ellison stressed that the new competitive battleground in corporate IT will be centered on optimizing the performance of hardware-plus-software systems that have been specifically engineered for superior performance, speed of installation, and minimum of integration:

... snip ...

and a few posts from a little over a year ago:
https://www.garlic.com/~lynn/2009p.html#43 From The Annals of Release No Software Before Its Time
https://www.garlic.com/~lynn/2009p.html#46 From The Annals of Release No Software Before Its Time
https://www.garlic.com/~lynn/2009p.html#49 big iron mainframe vs. x86 servers
https://www.garlic.com/~lynn/2009p.html#54 big iron mainframe vs. x86 servers

and from last spring
https://www.garlic.com/~lynn/2010g.html#32 Intel Nehalem-EX Aims for the Mainframe

and the corporation's quandary regarding supporting the above
https://www.garlic.com/~lynn/2010j.html#9

recent desktop technology

Lab Tested: Intel's Second-Generation Core Processors Unveiled
http://www.pcworld.com/article/215318/lab_tested_intels_secondgeneration_core_processors_unveiled.html

--
virtualization experience starting Jan1968, online at home since Mar1970

Typewriter vs. Computer

From: lynn@garlic.com (Lynn Wheeler)
Date: 03 Jan, 2011
Subject: Typewriter vs. Computer
Blog: Greater IBM
slightly related historical article regarding the ibm selectric typewriter (also used for the 2741 and other computer terminals)
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/KEYTOPS.HTM

I had gotten a portable 2741 (two 40lb suitcases) at home in Mar1970 ... which was fairly shortly replaced with a "real" 2741. The 2741 could be used offline as a normal selectric typewriter (a typewriter computer terminal blurring the lines between what is a typewriter and what is a computer).

This was back when online computer use was metered and "charged for" (even if it was only corporate funny money). At one point, my manager came to me and complained that I was using half of all the computer time and could I do something to cut back. I suggested I could work less ... the subject was never mentioned again.

also an article by the same person (mentioning the selectric typewriter) regarding ASCII and EBCDIC being one of the biggest computer goofs ever:
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM

--
virtualization experience starting Jan1968, online at home since Mar1970

zLinux OR Linux on zEnterprise Blade Extension???

From: lynn@garlic.com (Lynn Wheeler)
Date: 03 Jan, 2011
Subject: zLinux OR Linux on zEnterprise Blade Extension???
Blog: MainframeZone
re:
https://www.garlic.com/~lynn/2011.html#8 zLinux OR Linux on zEnterprise Blade Extension???

the "Nehalem-EX Aims for the Mainframe" reference and the followup post mentioning that the corporation had to play
https://www.garlic.com/~lynn/2010j.html#9

in the above ...

Windows, Linux propel Q1 server sales, Unix boxes, mainframes stalled
http://www.theregister.co.uk/2010/05/27/idc_q1_2010_server_nums/

as well as this article

IBM goes elephant with Nehalem-EX iron; Massive memory for racks and blades
http://www.theregister.co.uk/2010/04/01/ibm_xeon_7500_servers/

from above:

With so much of its money and profits coming from big Power and mainframe servers, you can bet that IBM is not exactly enthusiastic about the advent of the eight-core "Nehalem-EX" Xeon 7500 processors from Intel and their ability to link up to eight sockets together in a single system image. But IBM can't let other server makers own this space either, so it had to make some tough choices.

... snip ...

The issue was how many connectors were needed for the task the customer needed done. Part of the exadata story is the ease with which it is able to tailor and optimize the environment for a specific task. The analogy on the mainframe ... what features are selected, as well as specialty engines and LPARs. LPARs started out as PR/SM ... a subset of virtual machine function pushed down into the hardware ... allowing the real machine to be partitioned into multiple different logical entities ... in large part because of the complexities of trying to handle everything as a single environment (partitioning, tailoring & subsetting for nearly all existing mainframe environments has become standard operating procedure).

from today ... IBM doing a new generation XEON-based supercomputer instead of Power ... as opposed to being configured for commercial and dbms work ... somewhat "From the Annals of Release No Software Before Its Time"

Germans pick Xeon over Power for three-petaflops super
http://www.theregister.co.uk/2011/01/03/prace_lrz_supermuc_super/

--
virtualization experience starting Jan1968, online at home since Mar1970

Typewriter vs. Computer

From: lynn@garlic.com (Lynn Wheeler)
Date: 03 Jan, 2011
Subject: Typewriter vs. Computer
Blog: Greater IBM
re:
https://www.garlic.com/~lynn/2011.html#9 Typewriter vs. Computer

I was trying to track down the 3270 chord keyboard ... this was somewhat like a flattened half-sphere (or a large mouse) with depressions/indentations for the fingers and rocker switches at the fingertips. Human factors found (the same) people could achieve higher typing rates than with standard two-handed QWERTY

In today's environment ... the unit could also implement mouse function.

Since the wrist and arm don't move for key pressing ... it would have much less problem with various kinds of repetitive motion injuries (and the QWERTY layout is a left-over from slowing down typists because of issues with internal typewriter mechanical motion)

from long ago and far away ...

Date: 03/20/80 14:53:09
To: wheeler

nat rochesters chord keyboard is written up in IEEE COMPUTER, dec. 1978 pg 57. (co-authors Bequaert, Frank C, and Sharp, Elmer M. will send you a copy if you want (probalbe easier to get copy from your library.


... snip ... top of post, old email index

Date: 07/03/80 13:24:45
To: wheeler
...
• 25 cord keyboards were manufactured at $100,000. Plug comaptible replacement for 3277 keyboard. There is one in San Jose area with IBM human factors group here. Looks like I can borrow for a couple of weeks.


... snip ... top of post, old email index

Nat's IEEE article
http://www.computer.org/portal/web/csdl/doi/10.1109/C-M.1978.218024

current article (in typing injury faq)
http://www.tifaq.org/accessories/chording-keyboards.html

the above mentions (conventional) chord keyboards are slower ... because of much fewer "keys". Rochester's keys under the fingertips had multiple positions ... so it could achieve much faster rates.
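
the arithmetic behind that (a sketch; the key and position counts for Rochester's device are assumptions):

# chord count arithmetic (illustrative; key/position counts assumed)
def chords(keys, positions_per_key):
    # each key is independently idle or in one of its positions;
    # subtract 1 for the all-idle (no-chord) state
    return (positions_per_key + 1) ** keys - 1

print(chords(5, 1))   # 5 simple press/no-press keys: 31 possible chords
print(chords(5, 3))   # 5 rocker keys with 3 positions each: 1023 chords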

wiki
https://en.wikipedia.org/wiki/Chorded_keyboard

--
virtualization experience starting Jan1968, online at home since Mar1970

zLinux OR Linux on zEnterprise Blade Extension???

From: lynn@garlic.com (Lynn Wheeler)
Date: 03 Jan, 2011
Subject: zLinux OR Linux on zEnterprise Blade Extension???
Blog: MainframeZone
re:
https://www.garlic.com/~lynn/2011.html#8 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2011.html#10 zLinux OR Linux on zEnterprise Blade Extension???

There was a possible issue raised whether blades would be configured for doing commercial i/o environments. there tends to be a lot more press given to enormous computational capability and massive i/o for supercomputer interconnect ... but there seems to be much more actually sold for commercial environments (as opposed to numerical intensive only). The DB2 and exadata references are using basic technology that nearly 20 yrs earlier was being pushed by the corporation as "scientific and technical only" (aka numerical intensive) ... quote from 2/17/92
https://www.garlic.com/~lynn/2001n.html#6000clusters1

for a long time, the corporation attempted to depreciate the commercial applicability of some of the same platforms that they were using for high-end supercomputer (which represented significant aggregate processing and I/O capacity).

There were threads last year in the ibm-main (mainframe) mailing list (originated on bitnet in the 80s) regarding the mainframe having a huge number of i/o channels as a desirable feature (for a commercial environment) ... the counter argument was that the huge number of i/o channels was required because of a serious limitation in the mainframe channel i/o architecture (attempting to turn a serious shortcoming into a desirable attribute) ... a couple posts in those threads:
https://www.garlic.com/~lynn/2010f.html#18 What was the historical price of a P/390?
https://www.garlic.com/~lynn/2010h.html#62 25 reasons why hardware is still hot at IBM

also from today, more information on the new generation xeon ...

Intel Officially Unleashes Its Second Gen Core i-Series CPUs on the World
http://www.dailytech.com/Intel+Officially+Unleashes+Its+Second+Gen+Core+iSeries+CPUs+on+the+World/article20538.htm

--
virtualization experience starting Jan1968, online at home since Mar1970

Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows
Newsgroups: alt.folklore.computers
Date: Mon, 03 Jan 2011 23:13:31 -0500
hancock4 writes:
What was COBOL--a Buick Roadmaster? Your father's Oldsmobile?

re:
https://www.garlic.com/~lynn/2011.html#7 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows

a step up from rpg?? which was a step up from the plug-board

pierce arrow?
https://en.wikipedia.org/wiki/Pierce-Arrow
pierce arrow did some number of trucks
http://www.pierce-arrow.org/history/hist7.php

white motor truck?
https://en.wikipedia.org/wiki/White_Motor_Company

mack truck?
http://www.macktrucks.com/default.aspx?pageid=40

international harvester?
https://en.wikipedia.org/wiki/International_Harvester
did some trucks
https://en.wikipedia.org/wiki/List_of_International_Harvester_vehicles
100 yrs of IH trucks
http://ihtrucks100.ning.com/

i learned to drive @eight on a '38 flatbed chevy ... old picture

38chevy?
https://www.garlic.com/~lynn/38yellow.jpg

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Future System

From: lynn@garlic.com (Lynn Wheeler)
Date: 04 Jan, 2011
Subject: IBM Future System
Blog: IBM Historic Computing
re:
https://www.garlic.com/~lynn/2010q.html#32 IBM Future System
https://www.garlic.com/~lynn/2010q.html#64 IBM Future System

As an aside ... one of the features of FS was "one-level store" ... aka a purely (virtual) memory model for accessing all storage. This is similar to what was done in tss/360 and Multics. Now, part of what I did in the early 70s (while FS was going on) was a paged-mapped filesystem for CP67/CMS ... drawing on a lot of things that I saw "done wrong" in tss/360.

At the univ., the 360/67 had been brought in for tss/360 and there was periodic testing of tss/360 (mostly on weekends, when otherwise the machine room was supposed to be all mine). There was a very capable TSS/360 SE on the account that did a large amount of work during his weekend time ... working on TSS/360 ... including identifying (and submitting APARS on) a large number of tss/360 failures. Part of the issue was the TSS/360 memory mapping was extremely simple ... which resulted in enormous thruput issues. The work I did on the paged-mapped filesystem was to not fall victim to similar short-comings. The result was significantly shorter pathlength (compared to the regular CMS pathlength) while significantly improving the thruput of filesystem operations.

Much later, vm370/cms thruput tests on 3081/3380 ... for a moderate I/O activity workload ... my paged-mapped CMS filesystem showed approx. three times the thruput (compared to the standard CMS filesystem) with lower processor pathlength/overhead. Misc. past posts mentioning doing the CMS paged-mapped filesystem implementation
https://www.garlic.com/~lynn/submain.html#mmap

Various of the shortcomings didn't really show up with the s/38 implementation of "one level store" ... since it had much lower thruput expectations. However, a major characteristic was significant backup/recovery elapsed time. Since S/38 did scatter allocation across all available disks ... a recoverable system image required backing up and restoring all disks as a single integral operation (in some cases claimed to take days). Because of scatter allocation (across all disks) ... any single disk failure took the whole system down.
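
the reliability arithmetic that makes scatter allocation so catastrophic (a sketch; the per-drive failure rate is an assumption):

# with data scattered across every disk, losing ANY one disk loses
# the single integral system image
def p_system_loss(p_disk, n_disks):
    return 1 - (1 - p_disk) ** n_disks

p = 0.05   # assumption: 5%/year failure rate per drive
for n in (1, 4, 8, 16):
    print(f"{n:2d} disks: {p_system_loss(p, n):.1%}/year chance of total loss")
# adding disks strictly increases exposure ... hence S/38 as a very
# early adopter of RAID to mask single disk failures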

One of the things they let me do after transferring to SJR in the 70s ... was getting to play disk engineer in bldgs 14&15 ... one of the engineers there is the (only) name on the original "RAID" disk patent. S/38 was possibly the earliest adopter of RAID technology for masking single disk failure (because even a single disk failure was so catastrophic). misc. past posts getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk

For other topic drift ... part of the paged-mapped filesystem work played a significant role when I later rewrote the cp spooling function with the objective of increasing thruput by a factor of 100 ... as part of HSDT activity ... part of a recent long-winded discussion in Greater IBM in the "Is email dead?" thread
https://www.garlic.com/~lynn/2011.html#4 Is email dead? What do you think?

the upthread Future System reference
https://www.garlic.com/~lynn/2010p.html#78

mentions ridiculing the FS effort ... including drawing parallels between the effort and a cult film playing in central sq. I would also claim that I had already implemented some number of features that were better than what they were blue-skying about in the FS activity.

Many of the FS stories make reference to competitive efforts all getting killed off during the FS period ... so when FS was killed ... the 370 hardware and software product pipelines were "dry" ... and there was a mad rush to get stuff back into the 370 product pipelines (this is claimed to also have enabled the clone processors to gain a market foothold). Some of the stuff that I had been doing all along on 360/370 ... was picked up for vm370 release 3 (including a very small subset of the paged-map stuff having to do with memory sharing ... reworked for the non-paged-mapped environment and released as something called DCSS) ... and then other stuff went out as part of my resource manager.

All-in-all it wasn't very career enhancing ... either ridiculing FS and/or having stuff that really worked ... contributing to being frequently told that I didn't have a career in the company.

--
virtualization experience starting Jan1968, online at home since Mar1970

545 Tech Square

From: lynn@garlic.com (Lynn Wheeler)
Date: 04 Jan, 2011
Subject: 545 Tech Square
Blog: Cambridge Science Center Alumni
remember the development group split off from the science center and moved to the 3rd floor (shared with a gov TLA), taking over the space & people of the (IBM) Boston Programming Center. The telephone closet on the 3rd flr was on the ibm side ... and the panels were labeled with their respective companies (including the gov three letter acronym). In the early 70s, some organization stationed people all over the boston and cambridge area and then called the boston federal gov. offices with a bomb threat naming the particular gov three letter agency ... waiting to see which bldg. was evacuated.

This is a tech sq, cp67 and multics story
http://www.multicians.org/thvv/360-67.html

mentioning CP67 failing 27 times in one day ... at a time when Multics was crashing a lot ... but taking hrs to restart (the claim is that it motivated the new file system for multics). the actual cp67 crash was in the TTY terminal support I had added as an undergraduate. I had played some games with a one byte length value (since TTY lines were limited to 80 bytes). The crash involved somebody increasing the max line length to 1200 bytes (for some ascii plotter? device at harvard) ... which resulted in a buffer overrun.
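
a minimal sketch of that class of bug (hypothetical python, not the actual CP67 source) ... a line length kept in a one-byte field silently wraps once the maximum is raised past 255:

# hypothetical sketch of the one-byte length bug
def store_length(n):
    return n & 0xFF            # one-byte field: values > 255 silently wrap

MAX_LINE = 1200                # raised from 80 for an ascii plotter device
stored = store_length(MAX_LINE)
print(stored)                  # -> 176, not 1200

buf = bytearray(stored)        # buffer sized from the wrapped value
line = b"x" * MAX_LINE
# copying the real 1200-byte line into the 176-byte buffer then
# overruns it, clobbering adjacent storage -> system crash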

In the very early 70s, I used to have regular weekend and evening (dedicated, stand-alone) time on the science center's 360/67. The science center was on the 4th flr of 545 tech sq ... but the machine room was on the 2nd flr. The machine room had exterior glass along two walls ... and a wall of offices along another wall. One of the offices was effectively a small tape library ... which I had to periodically access for "backup" tapes ... to restore the system to the previous production version (after I had built some new flavor for testing).

The tape library door would sometimes be locked ... so I would have to go up and over the wall through the ceiling tiles. One weekend, it was late at night and I found the door was locked. I was tired and not feeling like going over the top ... so I kicked the door once right next to the door knob. Now these were solid heavy wood doors ... but the wood split from the top to the bottom along the edge ... and opened. It turns out it was no longer the tape library ... which had been moved to another location in the machine room ... it now held the center's personnel records.

Monday, the door was removed and "tape" was placed across the opening. The "old" door was taken to the 4th floor and used to create a "memorial" table (laid across two, two drawer file cabinets) ... in hallway at one-end of the science center area (stayed there for years, I guess as a reminder to me to not kick a door).

somewhat science center ... includes a tale about a science center visit from some people (out of Melinda's history document)
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

I was doing lots of cp67 mods as an undergraduate and sometimes would get requests from IBM for specific kinds of modifications. I didn't learn about the referenced customers until much later ... but in retrospect, some of the requests could have originated from the referenced customer set.

--
virtualization experience starting Jan1968, online at home since Mar1970

Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows, doesn't matter)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows, doesn't matter)
Newsgroups: alt.folklore.computers
Date: Tue, 04 Jan 2011 14:12:08 -0500
John Levine <johnl@iecc.com> writes:
I never used a /195 but I did use Princeton's /91. It had 2MB of core (no virtual memory, so jobs that used a lot of it only ran late at night) and as I recall two banks of 2314 disks, which would be a total of 320MB of disks online at once. They had tape drives and large libraries of tapes and disks that the operators mounted as needed. The cycle time was 60ns, i.e. 17MHz.

These days a $200 netbook has more RAM than the /91 had total online storage, and runs a lot faster, too. No tapes, though.


the 370/195 at SJR ran MVT batch ... and some stuff could have 3 month turn-around. the 195 had a 64 entry pipeline and peak thruput of 10mips ... but didn't have branch prediction and/or speculative execution ... so a typical branch would drain the pipeline ... and unless carefully crafted for the pipeline ... most codes ran more like 5mips. the 370/195 never did get virtual memory ... it was just the initial 370 announcement (w/o virtual memory) with a few new instructions ... and some better hardware RAS.

palo alto science center (further up silicon valley) did some tweaks for running batch under cms with checkpoint/restart ... and would run the application on their vm370/cms 145 in "background" ... mostly getting its cpu cycles offshift and on weekends. The 145 was about 1/30th of the 195 ... but they found they could get turn-around in a month on the 145 for work that was taking 3 months on sjr's 195.
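
the arithmetic (a sketch; cpu-hrs per run and usable background cycles are assumptions) of how a machine 1/30th the speed could beat the 195 turn-around:

# dedicated slow machine vs heavily queued fast one
JOB_CPU_HRS_195 = 2            # assumption: an hr or two of 195 cpu per run
SPEED_RATIO = 30               # 145 about 1/30th of the 195
job_cpu_hrs_145 = JOB_CPU_HRS_195 * SPEED_RATIO          # ~60 cpu hrs

BACKGROUND_HRS_WK = 15         # assumption: spare offshift/weekend cycles obtained
print(f"145 turn-around: ~{job_cpu_hrs_145 / BACKGROUND_HRS_WK:.0f} weeks")
# -> ~4 weeks (a month); the 195's 3-month turn-around was nearly all
#    queueing behind other MVT batch work, not execution time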

I mentioned that they let me play disk engineer across the street in bldgs 14&15. One of the things was that the labs over there had been running dedicated, stand-alone (around the clock, 7x24) scheduled machine time for disk test & development. They had once tried MVS for supporting multiple concurrent testing ... but found it had a 15min MTBF in that environment. I undertook to rewrite the I/O supervisor so that it would never fail or hang ... so they could do multiple, concurrent, on-demand testing (greatly increasing their productivity). misc. past posts getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk

now, the disk labs tended to get the #3 or #4 engineering processor ... as soon as the processor engineers had one to spare ... for disk testing (i.e. not only did they test new disks with existing processors ... but also did all sorts of disk testing with new processors).

bldg. 15 got engineering 3033 #3(?) ... approx. a 4.5MIP processor. now it turned out that disk testing ... even multiple concurrent testing ... was very low cpu utilization ... possibly only a percent or two ... leaving lots of compute power on the engineering machines for doing other stuff.

now one of the relatively high priority jobs on the SJR 195 was air-bearing simulation ... part of designing the new generation of ("floating") disk heads ... but it was still getting something like a couple weeks turn-around (for an hr or two of cpu per simulation run). so one of the things set up ... was to get the air-bearing simulation running on the 3033 in bldg. 15 ... which made it possible to get several turn-arounds a day.

misc. past posts mentioning air-bearing simulation:
https://www.garlic.com/~lynn/2001n.html#39 195 was: Computer Typesetting Was: Movies with source code
https://www.garlic.com/~lynn/2002j.html#30 Weird
https://www.garlic.com/~lynn/2002n.html#63 Help me find pics of a UNIVAC please
https://www.garlic.com/~lynn/2002o.html#74 They Got Mail: Not-So-Fond Farewells
https://www.garlic.com/~lynn/2003b.html#51 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#52 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003j.html#69 Multics Concepts For the Contemporary Computing World
https://www.garlic.com/~lynn/2003m.html#20 360 Microde Floating Point Fix
https://www.garlic.com/~lynn/2003n.html#45 hung/zombie users ... long boring, wandering story
https://www.garlic.com/~lynn/2004.html#21 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004b.html#15 harddisk in space
https://www.garlic.com/~lynn/2004o.html#15 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004o.html#25 CKD Disks?
https://www.garlic.com/~lynn/2005.html#8 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005f.html#4 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005f.html#5 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005o.html#44 Intel engineer discusses their dual-core design
https://www.garlic.com/~lynn/2006.html#29 IBM microwave application--early data communications
https://www.garlic.com/~lynn/2006c.html#6 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006d.html#0 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006d.html#13 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006d.html#14 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006l.html#6 Google Architecture
https://www.garlic.com/~lynn/2006l.html#18 virtual memory
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006t.html#41 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006u.html#18 Why so little parallelism?
https://www.garlic.com/~lynn/2006x.html#27 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006x.html#31 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2007e.html#43 FBA rant
https://www.garlic.com/~lynn/2007e.html#44 Is computer history taught now?
https://www.garlic.com/~lynn/2007f.html#46 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007i.html#83 Disc Drives
https://www.garlic.com/~lynn/2007j.html#13 Interrupts
https://www.garlic.com/~lynn/2007j.html#64 Disc Drives
https://www.garlic.com/~lynn/2007l.html#52 Drums: Memory or Peripheral?
https://www.garlic.com/~lynn/2008k.html#77 Disk drive improvements
https://www.garlic.com/~lynn/2008l.html#60 recent mentions of 40+ yr old technology
https://www.garlic.com/~lynn/2009c.html#9 Assembler Question
https://www.garlic.com/~lynn/2009k.html#49 A Complete History Of Mainframe Computing
https://www.garlic.com/~lynn/2009k.html#75 Disksize history question
https://www.garlic.com/~lynn/2009r.html#51 "Portable" data centers

--
virtualization experience starting Jan1968, online at home since Mar1970

Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows, doesn't matter)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows, doesn't matter)
Newsgroups: alt.folklore.computers
Date: Tue, 04 Jan 2011 14:40:06 -0500
re:
https://www.garlic.com/~lynn/2011.html#16 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows, doesn't matter)

another 370/195 story: eastern airlines ran its res system (ACP) System/One on a 370/195. one of the "nails" in the Future System "coffin" ... a couple recent posts in the (linkedin) IBM Historic Computing thread:
https://www.garlic.com/~lynn/2010q.html#32 IBM Future System
https://www.garlic.com/~lynn/2010q.html#64 IBM Future System
https://www.garlic.com/~lynn/2011.html#14 IBM Future System

was that if System/One was put on a Future System machine implemented in the fastest existing technology (aka 370/195 technology), it would have the thruput of a 370/145 (aka about a 30 times reduction in thruput; FS had a lot of hardware performance overhead). misc. past posts mentioning Future System
https://www.garlic.com/~lynn/submain.html#futuresys

Much later, System/One was being used as the basis for the (european res system) "Amadeus" ... and my wife served a short stint as chief architect. She sided with the europeans for use of x.25 as access (against SNA) ... which resulted in the SNA forces getting her replaced (it didn't do them much good since Amadeus went with x.25 anyway). misc. past posts mentioning Amadeus
https://www.garlic.com/~lynn/2001g.html#49 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2003n.html#47 What makes a mainframe a mainframe?
https://www.garlic.com/~lynn/2004b.html#6 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004b.html#7 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
https://www.garlic.com/~lynn/2007e.html#52 US Air computers delay psgrs
https://www.garlic.com/~lynn/2007h.html#12 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2008c.html#53 Migration from Mainframe to othre platforms - the othe bell?
https://www.garlic.com/~lynn/2008i.html#19 American Airlines
https://www.garlic.com/~lynn/2008i.html#34 American Airlines
https://www.garlic.com/~lynn/2008p.html#41 Automation is still not accepted to streamline the business processes... why organizations are not accepting newer technologies?
https://www.garlic.com/~lynn/2009j.html#33 IBM touts encryption innovation
https://www.garlic.com/~lynn/2009l.html#55 IBM halves mainframe Linux engine prices
https://www.garlic.com/~lynn/2009r.html#59 "Portable" data centers

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Future System

From: lynn@garlic.com (Lynn Wheeler)
Date: 05 Jan, 2011
Subject: IBM Future System
Blog: IBM Historic Computing
re:
https://www.garlic.com/~lynn/2010q.html#32 IBM Future System
https://www.garlic.com/~lynn/2010q.html#64 IBM Future System
https://www.garlic.com/~lynn/2011.html#14 IBM Future System

The cp67/cms group split off from the science center and took over the space & people of the Boston Programming Center on the 3rd flr (the 3rd flr was shared with what was listed as a legal firm but turned out to be a front for a certain gov. TLA). They started to grow substantially as part of morphing into the vm370 development group ... and moved out to the vacant SBC bldg (SBC having been "transferred" to CDC as part of a legal settlement) out at Burlington Mall

Some number of the people in the vm370 development group were told that big promotions would come if they transferred to the FS group in Kingston. Then with the death of FS, there was a mad rush to get products back into the pipeline ... the 3033 mad rush (the 3033 started out as 168 logic mapped to faster chips) in parallel with starting on "811" (370/xa)

In 1976, the favorite son operating system in POK managed to convince the corporation that the VM370 product had to be killed and all the people in burlington mall transferred to POK; otherwise they wouldn't be able to meet the mvs/xa ship schedule. Endicott was eventually able to save the VM370 product mission, but it had to recreate a development group from scratch.

There were quite a few people from burlington mall that wouldn't move ... many remaining in the area and going to work for DEC (there is a joke about the head of POK being a major contributor to VMS, for all the people he drove there).

oh, and the move to POK was leaked to the burlington mall people several months before the official announcement. There appeared to have been planning on giving the people only a very short period between announcement and the move (to minimize the chance they would find some alternative). There was then a major witch hunt attempting to find out who was responsible for the leak.

--
virtualization experience starting Jan1968, online at home since Mar1970

zLinux OR Linux on zEnterprise Blade Extension???

From: lynn@garlic.com (Lynn Wheeler)
Date: 03 Jan, 2011
Subject: zLinux OR Linux on zEnterprise Blade Extension???
Blog: MainframeZone
re:
https://www.garlic.com/~lynn/2011.html#8 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2011.html#10 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2011.html#12 zLinux OR Linux on zEnterprise Blade Extension???

The leading wave in distributed computing was with 43xx machines in the late 70s and early 80s ... a lot of mainframe "big" datacenters were being max'ed out in floor space and environmental services. Many big corporations started ordering (vm370) 43xx machines (in some cases, hundreds at a time) and putting them out into every nook & cranny. Internally, it resulted in difficulty scheduling conference rooms ... since so many of the rooms were being co-opted for 43xx (every dept. getting their own 43xx).

In 1985, I produced a number of documents (RMN.DD.00x) proposing rack drawers with arbitrary population of 370 "cards" and 801/risc "cards" (early form of blades w/o the physical enclosure for every card) ... achieving extremely high-density rack based computing (major problem was getting heat out of the rack).

As interconnect technology improved ... it allowed a lot of the processing that had been squirreled away all over the corporation to be pulled back into the datacenter. In fact, in the late 80s, a senior disk engineer got a talk scheduled at the annual, internal, world-wide communication group conference and opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division (the communication group provided such poor interconnect crossing the datacenter walls that it was accelerating data fleeing the datacenter to more interconnect-friendly platforms).

One of the big datacenter applications has been the overnight (mainframe cobol) batch window ... frequently involving lots of transactions that had a real-time kick-off during the day ... but waited until overnight batch for completion/settlement. In the 90s, billions were spent by financial firms on (parallel) straight-through processing (designed to leverage large numbers of "killer micros"). Part of the issue was that the overnight batch window was being stressed by increased workload, with globalization also contributing to reducing the window size. Unfortunately they hadn't done speeds&feeds on the parallelizing technology ... which turned out to introduce a factor of one hundred times overhead (compared to overnight cobol batch), totally swamping any anticipated thruput increases (or even the ability to handle existing workload). The resulting failures have continued to cast a long shadow over the industry (and contribute to large amounts of datacenter mainframe use ... some large computer sizing driven by the 3rd-shift batch window).

A couple yrs ago, I pitched a "new" parallelization technology to the financial industry (with possibly only 3-5 times the overhead of cobol batch, rather than a hundred or more times) for straight-through processing ... basically a methodology that mapped business rules to fine-grain SQL operations ... and then leveraged the enormous amount of parallelization work done for RDBMS products. For the most part, people in the industry were extremely apprehensive, risk averse, not wanting to make any changes and still feeling the effects of the failures from the 90s.

Not only was CICS not multiprocessor ... ACP/TPF wasn't either. When the 3081 came out as multiprocessor only ... it resulted in lots of stress regarding clone competition. Eventually the single-processor 3083 was brought out (effectively a 3081 with one of the processors removed ... the easiest change was to remove processor 1 in the middle of the box, but leaving processor 0 at the top made the box dangerously top heavy). To help compete with clone processors in the ACP/TPF market, the 3083 also had a bunch of customized microcode for that environment.

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Future System

From: lynn@garlic.com (Lynn Wheeler)
Date: 05 Jan, 2011
Subject: IBM Future System
Blog: IBM Historic Computing
re:
https://www.garlic.com/~lynn/2010q.html#32 IBM Future System
https://www.garlic.com/~lynn/2010q.html#64 IBM Future System
https://www.garlic.com/~lynn/2011.html#14 IBM Future System
https://www.garlic.com/~lynn/2011.html#18 IBM Future System

I made the original proposal for rehosting at the spring '82 advanced technology conference that I put on ... referred to in this old post:
https://www.garlic.com/~lynn/96.html#4a

it then acquired some corporate interest ... including possibly using TSS/370 as the prototype base ... old post mentioning an attempt to compare the complexity of the then-existing vm370 kernel with the tss/370 kernel
https://www.garlic.com/~lynn/2001m.html#53

the corporate interest somewhat ballooned and things got totally out of hand (somebody claimed at one point there were something like 300 people just writing documentation ... it might be considered a mini-FS). At one point there was a large collection of people in a Kingston cafeteria banquet room ... and the cafeteria people had a sign calling it the "ZM" (rather than VM) meeting ... a name that stuck for awhile.

At one point in the 60s ... there were approx. 1200 people working on TSS/360 compared to approx. 12 people working on cp67/cms. With the TSS/360 decommit, the group imploded, and by the 80s the relative development sizes had almost inverted (reflected in the tss/370 lines of code).

--
virtualization experience starting Jan1968, online at home since Mar1970

zLinux OR Linux on zEnterprise Blade Extension???

From: lynn@garlic.com (Lynn Wheeler)
Date: 05 Jan, 2011
Subject: zLinux OR Linux on zEnterprise Blade Extension???
Blog: MainframeZone
re:
https://www.garlic.com/~lynn/2011.html#8 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2011.html#10 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2011.html#12 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2011.html#19 zLinux OR Linux on zEnterprise Blade Extension???

... a reference to Hartmann here
https://www.garlic.com/~lynn/96.html#4a

in real-time, just now in a post to the "IBM Future System" discussion in the (linkedin) "IBM Historic Computing" group ... also archived here
https://www.garlic.com/~lynn/2011.html#20

for other drift, I had pretty much eliminated any requirement for DMKSNT with my page-mapped stuff, which I had done for cp67/cms ... and then converted to vm370 ... mentioned in this recent post in the "IBM Future System" thread in the (linkedin) "IBM Historic Computing" group
https://www.garlic.com/~lynn/2011.html#14

but when they were picking up bits & pieces of the stuff I had been doing for 370 (all during the Future System period when 370 stuff was being killed off), they only selected a small subset of the CMS stuff and remapped it into the DMKSNT infrastructure. old email referring to the cp67 to vm370 conversion
https://www.garlic.com/~lynn/2006v.html#email731212 731212,
https://www.garlic.com/~lynn/2006w.html#email750102 750102,
https://www.garlic.com/~lynn/2006w.html#email750430 750430

misc. posts referring to various parts of page-mapped stuff
https://www.garlic.com/~lynn/submain.html#mmap

Besides the 43xx being the leading wave of distributed computing ... it was also directly threatening some of the big iron in the datacenter ... a 4341 cluster had smaller footprint, less cost, and higher thruput than a 3033. There is folklore about the head of POK cutting the East Fishkill allocation for a critical 4341 manufacturing component in half (slightly analogous to the foreign auto maker import quotas of the period).

At one point, my wife had been con'ed into going to POK to be in charge of loosely-coupled (aka cluster) architecture where she developed Peer-Coupled Shared Data architecture ... some past posts
https://www.garlic.com/~lynn/submain.html#shareddata

... which saw very little uptake, except for IMS hot-standby, until sysplex. Also contributing to her not remaining long were periodic battles with the communication group over using SNA for loosely-coupled coordination (the truces, where she could use anything she wanted within the datacenter, were only temporary). There was an internal, high-thruput vm/4341 cluster implementation ... which got significantly slower when they were forced into SNA before shipping to customers (for example, some cluster-wide coordination operations that had taken a small fraction of a second became multiple tens of seconds).

Just before going to POK to take up responsibility for loosely-coupled architecture ... my wife had been in the JES group and one of the "catchers" for ASP to JES3 ... also co-author of "JESUS" .... the design for a "JES 2/3 Unified System" (which included all the features from both that neither group could live w/o).

with respect to concurrent processing ... and some recent articles about whether or not SQL, ACID, RDBMS, etc are required .... I posted this in Oct last year in (linkedin) Payment Systems Network
https://www.garlic.com/~lynn/2010n.html#61

which references an article about Chase online banking taking some amount of time to recover
http://www.computerworld.com/s/article/9187678/Oracle_database_design_slowed_Chase_online_banking_fix?taxonomyId=9

where I refer to Jim as the father of modern financial dataprocessing ... his formalization of transaction semantics and the ACID definition were instrumental in raising the confidence that financial auditors have in computer records. It also mentions Jim palming off a bunch of stuff on me when he left for Tandem (including DBMS consulting with the IMS group). Some of the current discussion is whether or not ACID properties are actually required for everything.

--
virtualization experience starting Jan1968, online at home since Mar1970

System R

From: lynn@garlic.com (Lynn Wheeler)
Date: 05 Jan, 2011
Subject: System R
Blog: IBM Historic Computing
URLs to other System/R pages
http://www.mcjones.org/System_R/citations.html

including misc. of my postings mentioning System/R, SQL/DS &/or misc RDBMS
https://www.garlic.com/~lynn/submain.html#systemr

somewhat related to the System/R period ... was Jim Gray ... these recent posts mention him as the father of modern financial dataprocessing .... formalizing the semantics of transactions resulted in giving financial auditors higher confidence in computer records.
https://www.garlic.com/~lynn/2010o.html#46
and
https://www.garlic.com/~lynn/2010n.html#61

above posts reference this older post
https://www.garlic.com/~lynn/2008p.html#27

the above also references Jim palming off a bunch of stuff on me when he left for Tandem.

--
virtualization experience starting Jan1968, online at home since Mar1970

zLinux OR Linux on zEnterprise Blade Extension???

From: lynn@garlic.com (Lynn Wheeler)
Date: 05 Jan, 2011
Subject: zLinux OR Linux on zEnterprise Blade Extension???
Blog: MainframeZone
recent post in ibm 2321 (data cell) thread in IBM Historic Computing group
https://www.garlic.com/~lynn/2010q.html#67

mentions that the univ. (when I was an undergraduate) was selected to be a beta test site for the original CICS product ... and I was tasked to support/debug CICS. The above also references the Yelavich webpages ... since gone 404 ... but they live on at the wayback machine.

slightly earlier post from last year
https://www.garlic.com/~lynn/2010c.html#47

referencing being in a datacenter that proclaimed having 120 CICS regions ... the above also references the Yelavich wayback pages giving dates for both CICS multi-region support (1980) and multiprocessor exploitation (2004).

Part of efficient multiprogramming & thread safety was introduced with compare&swap. A recent post discusses working to get the compare&swap instruction included in the mainframe ... the POK favorite operating system people had vetoed including it (seeing no useful purpose). The "owners" of the architecture book (a superset of the principles of operation) said that in order to get compare&swap included, uses other than kernel locking had to be invented. Thus were born a lot of the multithreaded examples that are still included in the current principles of operation.
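
as an aside, that style of non-locking use survives in ordinary languages today ... a minimal sketch in C11 atomics rather than 370 assembler (my own illustration, not one of the principles-of-operation examples):

#include <stdatomic.h>
#include <stdio.h>

/* illustration only: atomically add to a shared counter without holding
   a lock, in the spirit of the multithreaded (non-locking) examples
   mentioned above */
static _Atomic unsigned long counter;

void add_to_counter(unsigned long delta)
{
    unsigned long old = atomic_load(&counter);
    /* compare&swap stores old+delta only if counter still equals old;
       on failure, old is refreshed with the current value and we retry */
    while (!atomic_compare_exchange_weak(&counter, &old, old + delta))
        ;
}

int main(void)
{
    add_to_counter(5);
    printf("%lu\n", (unsigned long)atomic_load(&counter));  /* prints 5 */
    return 0;
}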

and more recent
https://www.garlic.com/~lynn/2010q.html#12

the above post was in a recent "mvs" thread regarding POST in a cross-memory environment (from the ibm mainframe mailing list). Note that lots of other platforms started supporting compare&swap semantics in the '80s ... especially for large, highly threaded DBMS.

misc. past posts mentioning CICS
https://www.garlic.com/~lynn/submain.html#cics

misc. past posts mentioning SMP, multiprocessing, multithreaded and/or compare&swap instruction
https://www.garlic.com/~lynn/subtopic.html#smp

note: compare&swap was invented by Charlie at the science center while working on fine-grain locking for cp67 multiprocessing. "CAS" was chosen for the instruction name since those are Charlie's initials.

for the fun of it ... a previous item upthread refers to this old post regarding an advanced technology conference I ran in the early 80s
https://www.garlic.com/~lynn/96.html#4a

it also references that the SUN people had originally come to IBM first with a proposal to do SUN workstations ... and were eventually turned down ... in large part because numerous internal groups viewed it as competition to what they were hoping to do (most of which never happened).

with regard to RAS ... this is an old ('84) presentation from Jim
https://www.garlic.com/~lynn/grayft84.pdf

things fundamentally haven't changed since then ... it ranks outages as: 1) people - mostly, 2) software - often, and 3) hardware - rarely. The "people" issue not only applies to mistakes resulting in outages but also to things like security exploits. Common countermeasures to "people" mistakes have been "KISS" ... as well as partitioning to minimize/limit the scope of any one mistake.

this is past NASA dependable computing conference where Jim and I were keynote speakers:
https://web.archive.org/web/20011004023230/http://www.hdcc.cs.cmu.edu/may01/index.html

Jim and I got into something of a running debate at '91 ACM SIGOPS over whether I could provide high availability with commodity parts ... this is somewhat the existing mainframe "DASD" story ... all current mainframe "DASD" is nearly identical to the standard commodity stuff ... but with an availability/redundancy layer on top as well as a significantly complex CKD simulation layer. I've pontificated in numerous posts about offering the favorite son operating system FBA support and being told I had to show an incremental $26M business case ... to cover education and training ... and was not permitted to use total life-cycle cost savings as part of the justification. lots of past posts about getting to play disk engineer in bldgs 14&15 (also makes reference to MVS having a 15min MTBF in that environment):
https://www.garlic.com/~lynn/subtopic.html#disk

the upthread reference to doing an adtech conference in '82 also mentions it was the 1st such corporate event since '76 ... when we presented a 16-way 370 project. We were going great guns ... and had co-opt'ed the spare time of some of the 3033 processor engineers to help. Then somebody mentioned to the head of POK that it might be decades before the POK favorite son operating system could support 16-way ... and the whole thing fell apart. Some of us were invited to never show up in POK again and the 3033 processor engineers were told to keep their noses to the (3033) grindstone.

Partitioning and replication tend to mean an outage requires multiple concurrent failures ... whose probability tends to the product of the individual failure probabilities (minus any complexity mismanagement). A single monolithic operation tends to be the sum of the individual failure probabilities (aka any individual failure results in an outage ... so outage probability becomes the sum of the failure probabilities rather than the product).
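
as a back-of-envelope (my own notation, assuming independent failures): in a monolithic configuration any one of n component failures takes out the service, while k replicated partitions must all fail concurrently:

\[
P_{outage} \approx \sum_{i=1}^{n} p_i \quad \text{(monolithic)} \qquad
P_{outage} \approx \prod_{j=1}^{k} p_j \quad \text{(replicated)}
\]

e.g. two replicas each with p = .01 give approx .0001 ... versus approx .02 for a single system built from the same two components.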

Distributed state maintenance for many things (like DBMS) was assisted by Jim's work on formalizing the semantics of transactions and ACID properties ... his work on forming TPC also helped with apples-to-apples (performance, cost, price/performance, etc) comparisons
http://www.tpc.org/information/about/history.asp

old post about doing highly parallel distributed lock manager for DBMS (& other) operation ... mentioning Jan92 meeting in Ellison's conference room
https://www.garlic.com/~lynn/95.html#13

I had also worked out the semantics of doing recovery when direct cache-to-cache transfers were involved (instead of first writing to disk) ... which made lots of people apprehensive, and it took another decade or more before they were actually comfortable with it. I took some amount of heat since the mainframe DBMS groups complained it would be a minimum of another five years before they could even think about such support.

past posts in this thread:
https://www.garlic.com/~lynn/2011.html#8 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2011.html#10 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2011.html#12 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2011.html#19 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2011.html#21 zLinux OR Linux on zEnterprise Blade Extension???

--
virtualization experience starting Jan1968, online at home since Mar1970

Julian Assange - Hero or Villain

From: lynn@garlic.com (Lynn Wheeler)
Date: 06 Jan, 2011
Subject: Julian Assange - Hero or Villain
Blog: Information Security
the pentagon papers analogies ... much of the focus on Assange serves as obfuscation and misdirection. In the pentagon papers case, there was (at least) the leaking of the information, the national papers publishing the leaked information, as well as the student protesters and the national guard. it is possibly easier to blur the issues and players when it is electronic.

Ralph Nader: Wikileaks and the First Amendment
http://www.counterpunch.org/nader12212010.html
A Clear Danger to Free Speech
http://www.nytimes.com/2011/01/04/opinion/04stone.html?_r=1

--
virtualization experience starting Jan1968, online at home since Mar1970

Julian Assange - Hero or Villain

From: lynn@garlic.com (Lynn Wheeler)
Date: 06 Jan, 2011
Subject: Julian Assange - Hero or Villain
Blog: Information Security
re:
https://www.garlic.com/~lynn/2011.html#24 Julian Assange - Hero or Villain

I wasn't referring to your mention of the pentagon papers ... but to several articles drawing an analogy to the pentagon papers ... where raising the issue of the role of assange & wikileaks could be considered obfuscation and misdirection (in the pentagon papers analogy, it would be somewhat like totally focusing on the owners of the NYT ... and ignoring all the other issues).

IBM had something similar to the pentagon papers in the very early 70s ... when a copy of a document detailing the unannounced 370 virtual memory was leaked to the press. part of the reaction was to retrofit all the corporate copying machines with a serial number that appears on every page copied. A flavor can somewhat be seen at the very bottom of each page in this old file (the serial number copy convention would remain part of standard corporate culture for decades):
https://www.garlic.com/~lynn/grayft84.pdf

The next major product effort ("Future System" ... a massive product effort that was eventually canceled w/o ever being announced) in the early 70s ... would implement soft-copy, online-only documents ... a specially secured online system where the documents could only be read from directly connected 3270 terminals (dumb display devices with no copying or hardcopy facilities). All of this was a countermeasure to the previous incident involving the leaked document copy.

I've periodically confessed over the years that when they taunted me that even I couldn't break their security ... it was one of the few times when I demonstrated breaking a system.

misc. past posts mentioning the specially secured system for "Future System" documents:
https://www.garlic.com/~lynn/2008h.html#83 Java; a POX
https://www.garlic.com/~lynn/2009r.html#41 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2010e.html#6 Need tool to zap core
https://www.garlic.com/~lynn/2010f.html#1 More calumny: "Secret Service Uses 1980s Mainframe"
https://www.garlic.com/~lynn/2010j.html#32 Personal use z/OS machines was Re: Multiprise 3k for personal Use?
https://www.garlic.com/~lynn/2010n.html#73 Mainframe hacking?
https://www.garlic.com/~lynn/2010q.html#4 Plug Your Data Leaks from the inside
https://www.garlic.com/~lynn/2010q.html#8 Plug Your Data Leaks from the inside

--
virtualization experience starting Jan1968, online at home since Mar1970

Julian Assange - Hero or Villain

From: lynn@garlic.com (Lynn Wheeler)
Date: 07 Jan, 2011
Subject: Julian Assange - Hero or Villain
Blog: Information Security
re:
https://www.garlic.com/~lynn/2011.html#24 Julian Assange - Hero or Villain
https://www.garlic.com/~lynn/2011.html#25 Julian Assange - Hero or Villain

another aspect of "trying to keep secrets" involves industrial espionage and trade secrets. in a court case from 30 yrs ago (in this case over plans for new disk technology) ... suing for billions in damages ... there was a ruling that effectively required showing security proportional to risk ... people can't be held accountable for walking off with extremely valuable stuff that is left just lying around (the analogy is swimming pool owners being held responsible for minors drowning if there aren't fences and other countermeasures).

a few recent (archived) posts on the subject in discussions from other groups:
https://www.garlic.com/~lynn/2010q.html#4
https://www.garlic.com/~lynn/2010q.html#8
https://www.garlic.com/~lynn/2010q.html#18
https://www.garlic.com/~lynn/2010q.html#25
https://www.garlic.com/~lynn/2010q.html#36
https://www.garlic.com/~lynn/2010q.html#53

--
virtualization experience starting Jan1968, online at home since Mar1970

Searching for John Boyd

From: lynn@garlic.com (Lynn Wheeler)
Date: 07 Jan, 2011
Subject: Searching for John Boyd
Blog: Boyd's Disciples
for a little topic drift ... there have been some comments that the highest gov. classification is "downright embarrassing"

A Clear Danger to Free Speech
http://www.nytimes.com/2011/01/04/opinion/04stone.html?_r=1

after the Spinney Time magazine article, Boyd would include in briefings some detail of the preparations leading up to the article ... and would claim that afterwards the pentagon had introduced a new security classification, "NO-SPIN" ... aka unclassified but not to be given to Spinney. He also said that the SECDEF knew he (Boyd) was behind it, that the SECDEF had Boyd banned from the pentagon, and that the SECDEF tried to have him re-assigned to some place in Alaska (a ban that was quickly overturned).

part of the magazine article background was that everything was taken from a congressional hearing the previous friday ... which was held in the same room where the pentagon papers hearings had been held. there was lots of political positioning between the sides regarding the hearings ... which included getting the hearings moved to late friday ... when presumably they would draw the least public/press attention. on sat., there supposedly was a damage assessment meeting in the pentagon which concluded that there had been little attention paid to the hearing. then the magazine hit the stands the following week. part of the preparation was a file of written authorizations for every piece of information covered in the hearings.

total aside ... I also had a chance to reference the Pentagon Papers ... in my long-winded posts in "The Great Cyberheist" discussion
https://www.garlic.com/~lynn/2010p.html#40 The Great Cyberheist
https://www.garlic.com/~lynn/2010p.html#47 The Great Cyberheist
https://www.garlic.com/~lynn/2010p.html#49 The Great Cyberheist
https://www.garlic.com/~lynn/2010q.html#3a The Great Cyberheist

--
virtualization experience starting Jan1968, online at home since Mar1970

Personal histories and IBM computing

From: lynn@garlic.com (Lynn Wheeler)
Date: 07 Jan, 2011
Subject: Personal histories and IBM computing
Blog: IBM Historic Computing
x-over from the Future System discussion ... the reference (to doing an adtech conference in '82) also mentions it was the 1st such corporate event since the mid-70s ... at which we had presented a 16-way 370 SMP project. We were going great guns ... and had co-opt'ed the spare time of some of the 3033 processor engineers to help. Then somebody mentioned to the head of POK that it might be decades before the POK favorite son operating system could support 16-way ... and the whole thing fell apart. Some of us were invited to never show up in POK again and the 3033 processor engineers were told to keep their noses to the (3033) grindstone.

unbundling was the result of various legal actions and resulted in starting to charge for software, SE services, and various other things (kernel software was still free). past posts mentioning the 23jun69 unbundling announce
https://www.garlic.com/~lynn/submain.html#unbundle

HONE was originally established with several US cp67 (aka virtual machine) datacenters to provide SEs with a substitute for traditional SE learning at the customer account ("hands-on" guest operating systems) ... sort of an apprenticeship as part of an SE "team"; with charging for SE services, nobody could figure out how not to charge for "apprentice/learning" SEs at the customer account. past posts mentioning HONE
https://www.garlic.com/~lynn/subtopic.html#hone

A lot of stuff I had done for cp67 was dropped in the simplification morph to VM370. These old emails mention converting a bunch of stuff from cp67 to VM370 ... for CSC/VM ... one of my hobbies was providing & supporting highly enhanced production operating systems for internal datacenters:
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

Part of the stuff converted from cp67 to vm370 was paged mapped filesystem for cms ... some past posts
https://www.garlic.com/~lynn/submain.html#mmap

and something I call virtual memory services (that were layered on top of paged mapped filesystem). some past posts
https://www.garlic.com/~lynn/submain.html#adcon

a small subset of the cms changes (w/o the page-mapped filesystem) was picked up and released as DCSS in vm370 release 3 (this could be considered somewhat the leading edge of the mad rush to get stuff back into the 370 product pipeline after the demise of FS) ... it was mapped into the existing vm370 "DMKSNT" function ... mentioned in this recent mainframezone post
https://www.garlic.com/~lynn/2011.html#21

HONE had fairly early transitioned from providing SEs with "hands-on" experience to providing online sales & marketing support applications ... mostly in cms/apl. A performance "enhancement" for HONE was creating a "shared" executable image of APL (to reduce real storage requirements) ... which eventually came to include a lot of "APL" code that was common for the branch office environment ("SEQUOIA"). Prior to my installing the paged-mapped & virtual memory services (csc/vm based on vm370 release 2) at HONE, this was accomplished by defining a "DMKSNT" ipl'able system that included both the CMS kernel and the APL executable image. Branch office accounts were then set up to automatically IPL this special image ... dropping them directly into the APL "SEQUOIA" environment.

The problem was that there were a large number of sales/marketing support "configurators" that were heavily compute intensive (features needed for specific hardware orders ... especially when thruput/performance related). Being mostly APL ... HONE was already pegging all its processor meters. To help reduce processor use, some number of configurators were recoded in FORTRAN ... achieving significant processor use reduction (better than 10-20 times). The issue was there was no trivial/transparent way to drop out of the SEQUOIA/APL IPL'ed environment into standard CMS for execution of the FORTRAN applications and then "re-IPL" the SEQUOIA/APL environment. HONE was an early adopter of my paged-mapped & virtual memory changes ... since it was possible to do all the virtual memory shared page function transparently (thru the CMS program execution image loading, from a paged-mapped filesystem). CMS could even do its own kernel image "IPL" via the paged-mapped shared page function (eliminating all requirement for the "DMKSNT" function). In any case, it was now trivial for HONE to automatically start (shared) APL/SEQUOIA, have it transparently drop into standard CMS execution of the fortran applications, and then resume (shared) APL/SEQUOIA.

a lot of Future System is already mentioned in another discussion, misc. past posts mentioning Future System
https://www.garlic.com/~lynn/submain.html#futuresys

As an aside, in the early to mid 70s I got some amount of overseas trips, being asked to handle some number of HONE system clonings at datacenters around the world ... including the move of EMEA hdqtrs from NY to Paris (especially in the early 70s, it took some fiddling to electronically read my email back in the states).

As mentioned several times, a small subset of the CMS changes were picked up, remapped into the DMKSNT function and released as DCSS in vm370 release 3.

Now, a standard part of 360 architecture is base+displacement addressing ... which provides quite a bit of location independence for executable code. However, OS/360 included execution conventions that resolved absolute address constants contained in executable images (binding an image to a specific address). While TSS/360 had some number of page-mapped deficiencies ... it did support a paradigm where executable images were kept location independent (no embedded absolute address constants; it was the paradigm supported by the other virtual-memory-oriented systems, including Multics). I did quite a bit of work on supporting address-independent executable images ... but since CMS had adopted so much of the os/360 infrastructure ... it was a constant battle (lots of detail in many of the "submain.html#adcon" list of posts). In my original changes for sharing additional CMS function, the additional shared pages went at the top of each virtual memory ... and since the code was location independent, there was the same single copy regardless of CMS virtual memory size (and its location in a virtual address space). In the DMKSNT paradigm, there was a different copy for every virtual memory size supported at an installation.
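
a toy illustration of the difference (my own invention, not the actual os/360 RLD format): once a loader runs a fixup loop like the one below, the image is bound to its load address and can no longer be a single shared copy:

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* os/360-style convention: the image carries a relocation dictionary,
   a list of offsets of 4-byte address constants ("adcons") that must
   have the load address added in before the program can run */
void relocate(uint8_t *image, uint32_t load_addr,
              const uint32_t *rld_offsets, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        uint32_t adcon;
        memcpy(&adcon, image + rld_offsets[i], sizeof adcon);
        adcon += load_addr;                /* binds this copy to load_addr */
        memcpy(image + rld_offsets[i], &adcon, sizeof adcon);
    }
}

/* base+displacement (or purely self-relative) code needs no such fixups,
   so one copy can appear at any virtual address in any address space ...
   the tss/360-style location-independent paradigm contrasted above */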

So, one of the other things, started in spring '75 ... Endicott had come by and asked me to help them with the work on the (ECPS) virgil/tully (aka 138/148) vm microcode assist ... some of the details are given in this post:
https://www.garlic.com/~lynn/94.html#21

Also, in Jan. '75 another group had come and con'ed me into helping them with the design for a 370 5-way SMP processor. These were not the high-end machines ... so there wasn't a cache issue ... but they had lots of microcode capability available. As a result, I designed somewhat a superset of some of the ECPS function ... a little bit like SIE ... however, I also did some number of other microcode functions ... a queued I/O interface (somewhat a superset of the later 370/xa i/o) and something that masked some amount of the multiprocessor details (a little bit like what was done later in the intel 432). Some number of past posts mention this 5-way SMP effort (which was eventually killed before ever being announced)
https://www.garlic.com/~lynn/submain.html#bounce

The previously mentioned 16-way effort came along after the 5-way effort was killed ... in between, I was working on putting out standard 370 2-way smp support in the standard vm370 product.

Now, another part of putting out pieces that I had been working on was my "resource manager" ... which it was decided to ship as a separate item (in the mid-release-3 time frame) and also make the guinea pig for starting to charge for kernel software (somewhat a consequence of the Future System distraction allowing clone 370 makers to gain a market foothold). Part of the policy was that only kernel software that didn't directly support hardware could be charged for. past posts mentioning my resource manager and/or scheduling
https://www.garlic.com/~lynn/subtopic.html#fairshare

the "quick" implementation for standard SMP support to ship in vm370 release 4 ... was adapt parts of my design for 5-way ... w/o all the microcode enhancements. the problem was SMP was direct hardware support and therefor free ... however, the 5-way design had several critical dependencies on code I had already shipped in my "charged-for" resource manager.

the "free" kernel software policy precluded free software having a charged for software pre-requisite. the eventual solution was to move something like 90% of the lines-of-code from my "release 3" resource manager into the "free" release 4 vm370 (while continuing to charge the same for the remaining "release 4" resource manager). past posts mentioning SMP support (and/or compare&swap instruction)
https://www.garlic.com/~lynn/subtopic.html#smp

eventually the company transitioned to charging for all (including kernel) software ... and also to the "object-code-only" policy.

--
virtualization experience starting Jan1968, online at home since Mar1970

Looking for a real Fortran-66 compatible PC compiler (CP/M or DOSor Windows

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Looking for a real Fortran-66 compatible PC compiler (CP/M or DOSor Windows
Newsgroups: alt.folklore.computers
Date: Fri, 07 Jan 2011 14:51:47 -0500
... my favorite is still the large national bank that outsourced y2k remediation of critical core processing to the lowest bidder ... and only much later discovered it was a front for a criminal organization. the folklore is that they had also eliminated their CSO with the claim that it was much more cost effective to handle fraud thru the public relations dept.

... any funding associated with the UN, IMF, and/or world bank may be suspect; frequently analogous to the non-profit fund-raising companies where maybe less than ten percent actually gets to the charity (news item today that the state of oregon just settled with one such).

--
virtualization experience starting Jan1968, online at home since Mar1970

Data Breaches: Stabilize in 2010, But There's an Asterisk

From: lynn@garlic.com (Lynn Wheeler)
Date: 08 Jan, 2011
Subject: Data Breaches: Stabilize in 2010, But There's an Asterisk
Blog: Financial Crime Risk, Fraud and Security
Data Breaches: Stabilize in 2010, But There's an Asterisk
http://www.digitaltransactions.net/news/story/2853

from above:

At first glance, a review of the data-breach scene in 2010 shows signs of improvement, or at least stabilization, according to figures from the Identity Theft Resource Center (ITRC)

... snip ...

different slant on same report:

Little Transparency When It Comes To Data Breaches
http://www.consumeraffairs.com/news04/2011/01/little-transparency-when-it-comes-to-data-breaches.htm

and ...

ITRC Calls for Universal Data Breach Reporting
http://www.esecurityplanet.com/features/article.php/3919846/ITRC-Calls-for-Universal-Data-Breach-Reporting.htm

from above:

A total of 662 significant data breaches were reported in the U.S. in 2010, up 33 percent from 2009, but that's probably only the tip of the iceberg, according to a new report from the Identity Theft Resource Center (ITRC).

... snip ...

During much of the last decade ... there was a series of Federal data breach bills introduced that tended to fall into two categories: 1) bills that were similar to the original cal. state data breach notification legislation and 2) data breach notification bills that would eliminate most notification requirements (and preempt existing state legislation).

we were tangentially involved in the original cal. state data breach notification legislation, having been brought in to help wordsmith the electronic signature legislation. several of the participants were heavily involved in privacy issues and had done detailed citizen privacy surveys ... the number one issue was the form of "identity theft" that is "account fraud" ... where there are fraudulent financial transactions (against existing accounts), a large part being the result of data breaches. At the time, there seemed to be nothing being done about such data breaches, and there appeared to be some hope that the publicity from the notifications might motivate corrective action.

... as I've periodically mentioned ... nominally, corporations are motivated to have security countermeasures for threats to the corporation itself. a big issue with the data breaches ... is that the resulting (fraudulent financial transaction) threat was to the public ... not to the respective corporations ... as a result (without the notification publicity), there was little motivation to take countermeasures against the data breaches.

note how the same report can be spun in different ways ... from "stabilize" to "up 33%" (for "significant" data breaches)

--
virtualization experience starting Jan1968, online at home since Mar1970

Julian Assange - Hero or Villain

From: lynn@garlic.com (Lynn Wheeler)
Date: 08 Jan, 2011
Subject: Julian Assange - Hero or Villain
Blog: Financial Crime Risk, Fraud and Security
re:
https://www.garlic.com/~lynn/2011.html#24 Julian Assange - Hero or Villain
https://www.garlic.com/~lynn/2011.html#25 Julian Assange - Hero or Villain
https://www.garlic.com/~lynn/2011.html#26 Julian Assange - Hero or Villain

some of the excess secrecy (jokes about highest classification is for "downright embarrassing") resulted in FOIA ... background here:
https://en.wikipedia.org/wiki/Freedom_of_Information_Act_%28United_States%29

above goes into some detail about different administrations significantly limiting and unlimiting FOIA.

this is recent post
https://www.garlic.com/~lynn/2011.html#27

about a lengthy Time magazine article in the early 80s on gov. spending ... there was joke that afterwards the pentagon created a new classification to prevent releasing unclassified project funding information to financial analysts (who might understand what was going on).

--
virtualization experience starting Jan1968, online at home since Mar1970

CMS Sort Descending?

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: CMS Sort Descending?
Newsgroups: bit.listserv.ibm-main
Date: 8 Jan 2011 10:58:20 -0800
jcewing@ACM.ORG (Joel C. Ewing) writes:
ISPF directory dates are stored in the highly-peculiar IBM Julian Date format variant used for SMF timestamps: a positive-signed PL4 field currently defined as 0cyydddF. Although the formal definition at this point only allows for "0c" being "00" for 19xx years and "01" for 20xx, this format could represent any year from 1900 through 11899 by the obvious extension of the date format to "ccyydddF" where cc is the top two decimal digits of a four-digit representation of "year-1900", so it may be considered roughly equivalent to using a four-digit representation of the year. All code changes we made for Y2K to account for the "01yy" date enhancement assumes this obvious extension for 21xx and beyond. One can make a valid argument that this is a horrid date format to keep for another 98 centuries, but not on the grounds that it can't be made to work. The current definition was an unfortunate consequence of choosing to reduce Y2K remediation work by minimizing date comparison problems at Y2K with pre-2000 dates saved in the older "00yydddF" Julian Date format.
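
a minimal sketch of decoding that "ccyydddF" packed-decimal layout (my own C rendering of the format described above; the function name and error handling are invented):

#include <stdio.h>
#include <stdint.h>

/* 4 bytes = 8 nibbles: cc = top two digits of (year-1900), yy = low two
   digits, ddd = day of year, final nibble F = positive packed-decimal sign */
static int smf_date(const uint8_t d[4], int *year, int *yday)
{
    int n[8], i;
    for (i = 0; i < 4; i++) {            /* split bytes into decimal nibbles */
        n[2*i]   = d[i] >> 4;
        n[2*i+1] = d[i] & 0x0f;
    }
    if (n[7] != 0x0f)                    /* sign nibble must be F */
        return -1;
    for (i = 0; i < 7; i++)
        if (n[i] > 9)                    /* remaining nibbles must be 0-9 */
            return -1;
    *year = 1900 + (n[0]*10 + n[1])*100 + n[2]*10 + n[3];
    *yday = n[4]*100 + n[5]*10 + n[6];
    return 0;
}

int main(void)
{
    uint8_t jan8_2011[4] = { 0x01, 0x11, 0x00, 0x8f };   /* X'0111008F' */
    int y, d;
    if (smf_date(jan8_2011, &y, &d) == 0)
        printf("year %d, day %d\n", y, d);   /* year 2011, day 8 */
    return 0;
}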

misc. past posts mentioning early '80s internal online discussion on the approaching y2k problem and some examples of date issues:
https://www.garlic.com/~lynn/99.html#24 BA Solves Y2K (Was: Re: Chinese Solve Y2K)
https://www.garlic.com/~lynn/99.html#233 Computer of the century
https://www.garlic.com/~lynn/2000.html#0 2000 = millennium?
https://www.garlic.com/~lynn/2000.html#94 Those who do not learn from history...

above includes mention of shuttle mission computer date issues. total aside ... last (discovery) shuttle mission keeps getting postponed
http://www.pcmag.com/article2/0,2817,2375445,00.asp

recent (long-winded) posts mentioning being at discovery's 1st launch:
https://www.garlic.com/~lynn/2010i.html#69 Favourite computer history books?
https://www.garlic.com/~lynn/2010o.html#51 The Credit Card Criminals Are Getting Crafty

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Historic Computing

From: lynn@garlic.com (Lynn Wheeler)
Date: 08 Jan, 2011
Subject: IBM Historic Computing
Blog: Order of Knights of VM
I've done a few posts in IBM Historic Computing group ... many VM related ... also archived here:
https://www.garlic.com/~lynn/2010q.html#32 IBM Future System
https://www.garlic.com/~lynn/2010q.html#33 IBM S/360 Green Card high quality scan
https://www.garlic.com/~lynn/2010q.html#34 VMSHARE Archives
https://www.garlic.com/~lynn/2010q.html#35 VMSHARE Archives
https://www.garlic.com/~lynn/2010q.html#41 Old EMAIL Index
https://www.garlic.com/~lynn/2010q.html#47 IBM S/360 Green Card high quality scan here
https://www.garlic.com/~lynn/2010q.html#63 VMSHARE Archives
https://www.garlic.com/~lynn/2010q.html#64 IBM Future System
https://www.garlic.com/~lynn/2010q.html#67 ibm 2321 (data cell)
https://www.garlic.com/~lynn/2010q.html#70 VMSHARE Archives
https://www.garlic.com/~lynn/2011.html#6 IBM 360 display and Stanford Big Iron
https://www.garlic.com/~lynn/2011.html#14 IBM Future System
https://www.garlic.com/~lynn/2011.html#18 IBM Future System
https://www.garlic.com/~lynn/2011.html#20 IBM Future System
https://www.garlic.com/~lynn/2011.html#22 System R

including this really long winded one:
https://www.garlic.com/~lynn/2011.html#28 Personal histories and IBM computing

--
virtualization experience starting Jan1968, online at home since Mar1970

Preliminary test #3

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Preliminary test #3
Newsgroups: alt.folklore.computers
Date: Sun, 09 Jan 2011 10:35:19 -0500
Peter Flass <Peter_Flass@Yahoo.com> writes:
Oh the good old days. It reminds me of my DOS (/360) days. Memory was one of the factors we had to keep in mind in blocking files. We *never* could block full-track. Many programs had to use single-buffering to get everything to fit, and I have a vague recollection of having to re-block a master file for a monthly run so that we could get it all in memory.

folklore is that the person writing op-code lookup for the os/360 assembler was told there were only 256 bytes available for the implementation. as a result, the op-code table was (sequentially) reread from disk for every statement ... resulting in extremely slow assemblies. a later performance enhancement was to use a memory-resident op-code table for the lookup.
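
a minimal sketch of the memory-resident lookup (illustrative only; the mnemonics/opcodes are a handful of real S/360 ones, but the table layout is my own invention, not the assembler's):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct opcode { const char *mnem; unsigned char op; };

static const struct opcode table[] = {   /* must stay sorted by mnemonic */
    { "A",    0x5A }, { "AR",  0x1A }, { "BALR", 0x05 },
    { "L",    0x58 }, { "LR",  0x18 }, { "ST",   0x50 },
};

static int cmp(const void *k, const void *e)
{
    return strcmp((const char *)k, ((const struct opcode *)e)->mnem);
}

int main(void)
{
    /* one in-memory binary search per statement, instead of rereading
       the table sequentially from disk every time */
    const struct opcode *hit =
        bsearch("LR", table, sizeof table / sizeof table[0],
                sizeof table[0], cmp);
    if (hit)
        printf("%s -> 0x%02X\n", hit->mnem, hit->op);   /* LR -> 0x18 */
    return 0;
}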

--
virtualization experience starting Jan1968, online at home since Mar1970

CKD DASD

From: lynn@garlic.com (Lynn Wheeler)
Date: 09 Jan, 2011
Subject: CKD DASD
Blog: IBM Historic Computing
count-key-data was a mid-60s trade-off between I/O resources and memory resources. File structure could be kept on disk (minimizing real storage requirements) and lookups performed using I/O operations.
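
the classic illustration is the CKD "search loop" channel program ... the control unit, not the CPU, does the lookup. a sketch (format-0 CCW layout; the command codes are the usual CKD ones, 0xA9 = search key equal multi-track, 0x08 = TIC, 0x06 = read data ... but treat the details as illustrative rather than authoritative):

#include <stdint.h>

struct ccw {                  /* format-0 CCW: cmd, 24-bit address, flags, count */
    uint8_t  cmd;
    uint8_t  addr[3];         /* 24-bit real data address */
    uint8_t  flags;           /* 0x40 = command chaining */
    uint8_t  unused;
    uint16_t count;
};

#define CC 0x40

uint8_t key[8];               /* key being searched for */
uint8_t record[256];          /* buffer for the located record */

/* the search spins through track after track; the TIC loops back until
   the key matches; only then does read data transfer anything ... tying
   up drive, controller and channel the entire time */
struct ccw search_loop[3] = {
    { 0xA9, {0,0,0}, CC, 0, sizeof key    },   /* search key equal, MT  */
    { 0x08, {0,0,0}, CC, 0, 1             },   /* TIC back to the search */
    { 0x06, {0,0,0}, 0,  0, sizeof record },   /* read data on a match  */
};

int main(void)
{
    /* in real use, the 24-bit addr fields would be filled in with the
       real addresses of key, search_loop[0], and record before starting
       the channel program */
    return 0;
}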

By at least the mid-70s, the trade-off was inverting ... memory resources were becoming significantly more plentiful than I/O resources ... however, it was becoming impossible to wean the POK favorite son operating system off its CKD implementation.

In the late 70s, I was called into the datacenter of a large national retailer ... which had several POK high-end processors in a loosely-coupled, shared-DASD configuration. They were having an enormous throughput bottleneck at peak load times. They had previously had the majority of the corporation's experts in to look at the account, with little result. They took me into a classroom with several tables covered with stacks of daily performance activity print-outs from the various systems. I spent 30-40 minutes leafing through the stacks of paper and began to notice a pattern: a specific disk peaked at a couple of disk I/Os per second during the worst throughput periods. Cross-checking the different reports ... it appeared that the aggregate I/Os per second for that disk (across all systems) would peak at 6-7 I/Os per second.

Additional investigation turned up that this was the shared application program disk for all regions. It was a large PDS with a three-cylinder PDS directory (on a 3330 disk). A characteristic was that the PDS directory lookup would do a multi-track search for each application program (member) load. On avg. this would be a cylinder and a half of multi-track search. The first cylinder's search would be 19 revolutions (19 3330 tracks per cylinder) of a disk spinning at 60 rev/sec ... or approx. 1/3rd sec elapsed time ... during which the specific disk drive, the associated disk controller, and the processor channel were locked out from any other use. Basically, each application program load would involve 3 disk I/Os: two searches and a program member load ... taking approx. 1/2 second elapsed time. Two of these program loads could be performed per second ... across all processors in the complex (resulting in the aggregate 6-7 disk I/Os per second).
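
working the numbers (my arithmetic, reconstructing the figures above):

\[
t_{cyl} = \frac{19\ \text{revs}}{60\ \text{revs/sec}} \approx 0.32\ \text{sec}
\qquad
t_{search} \approx 1.5 \times t_{cyl} \approx 0.48\ \text{sec}
\]
\[
t_{load} \approx t_{search} + t_{member\ read} \approx 0.5\ \text{sec}
\;\Rightarrow\;
\approx 2\ \text{loads/sec} \times 3\ \text{I/Os} \approx 6\text{-}7\ \text{I/Os/sec}
\]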

The recommendations were to split the application library, manage the application library PDS directories to a much smaller size, and replicate the application libraries for each system. misc. past posts mentioning CKD, DASD, multi-track search, and/or fixed-block architecture disks
https://www.garlic.com/~lynn/submain.html#dasd

I had started pontificating about the trade-off transition in the mid-70s. Possibly one of the reasons that it came to my attention ... was that one of the requirements for my "dynamic adaptive resource manager" was to "schedule to the bottleneck" ... which required attempting to identify, dynamically and in real time, system and workload bottlenecks. misc. past posts mentioning my dynamic adaptive resource manager
https://www.garlic.com/~lynn/subtopic.html#fairshare

In the early 80s, I was also starting to use a comparison of an old 360/67 cp67 system with a 3081 vm370 system, making the comment that relative system disk throughput had declined by a factor of ten (aka disks had gotten faster ... but only 1/10th as much faster as other system components). Some higher-up in the disk division took exception to the comments and assigned the division performance group to refute the statement. After a few weeks, they came back and effectively said that I had slightly understated the issue. This analysis was eventually re-spun as a SHARE presentation recommending how to configure disks for improved system throughput.
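
i.e., with purely illustrative numbers (not the ones from the internal analysis):

\[
\frac{\text{disk speedup}}{\text{other component speedup}}
\approx \frac{5\times}{50\times} = \frac{1}{10}
\]

disks get faster in absolute terms while falling a factor of ten behind the rest of the system in relative terms.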

part of the old cp67/vm370 comparison in this post:
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door

old post with small piece of presentation B874 at SHARE 63:
https://www.garlic.com/~lynn/2006o.html#68 DASD Response Time (on antique 3390?)

In the early 80s, I was also told by the favorite son operating system people that even if I provided them with fully integrated and tested "FBA" support, I still needed to come up with a $26M business case to cover publications and education. This had to be profit from new incremental disk sales (say $200M-$300M in incremental sales) ... and could not use total life-cycle savings (in the business justification). However, the claim was that customers would simply switch from CKD to the same amount of FBA (so there wouldn't be any new incremental sales).

for other drift, past posts getting to play disk engineer in bldgs. 14 & 15:
https://www.garlic.com/~lynn/subtopic.html#disk

The 3375 was really "FBA" ... the discussion was that the 3370 was the only "mid-range" disk ... it went along with the 43xx product line. One of the issues for the POK favorite son operating system was the enormous explosion in customer installs of vm/4300 machines. At least one of the obstacles to the favorite son operating system participating in the midrange customer install explosion was no support for the mid-range (FBA) disk offering. This eventually led to offering the "3375" ... basically 3370 hardware with CKD simulation in the controller.

Nowadays ... all CKD DASD product offerings are done with simulation on some sort of underlying fixed-block architecture device (and the total incremental life-cycle cost for this far exceeds the "$26M").

misc. recent posts in mainframe discussion about CKD vs FBA:
https://www.garlic.com/~lynn/2010k.html#10 Documenting the underlying FBA design of 3375, 3380 and 3390?
https://www.garlic.com/~lynn/2010k.html#17 Documenting the underlying FBA design of 3375, 3380 and 3390?
https://www.garlic.com/~lynn/2010l.html#13 Old EMAIL Index
https://www.garlic.com/~lynn/2010l.html#76 History of Hard-coded Offsets
https://www.garlic.com/~lynn/2010m.html#1 History of Hard-coded Offsets
https://www.garlic.com/~lynn/2010m.html#4 History of Hard-coded Offsets
https://www.garlic.com/~lynn/2010m.html#41 IBM 3883 Manuals
https://www.garlic.com/~lynn/2010n.html#10 Mainframe Slang terms
https://www.garlic.com/~lynn/2010n.html#14 Mainframe Slang terms
https://www.garlic.com/~lynn/2010n.html#15 Mainframe Slang terms
https://www.garlic.com/~lynn/2010n.html#65 When will MVS be able to use cheap dasd
https://www.garlic.com/~lynn/2010n.html#84 Hashing for DISTINCT or GROUP BY in SQL
https://www.garlic.com/~lynn/2010o.html#12 When will MVS be able to use cheap dasd

--
virtualization experience starting Jan1968, online at home since Mar1970

CKD DASD

From: lynn@garlic.com (Lynn Wheeler)
Date: 09 Jan, 2011
Subject: CKD DASD
Blog: IBM Historic Computing
re:
https://www.garlic.com/~lynn/2011.html#35 CKD DASD

another incident involving the favorite son OS dependency on CKD was at IBM San Jose research. As mentioned in the a.f.c. fortran discussion ... archived here:
https://www.garlic.com/~lynn/2011.html#16

SJR had a 370/195 running MVT. There was also a 370/145 that was a dedicated vm370 system for System/R development (the original SQL/RDBMS) ... misc. past posts
https://www.garlic.com/~lynn/submain.html#systemr

Then in the fall of 1977, SJR got a 370/158 for online interactive VM370/CMS service. By the very late 70s, the 370/195 was replaced with a 370/168 running MVS. The MVS and VM370 systems had a common pool of 3330 disks ... however, there were strict operational guidelines that there were "dedicated" 3330 strings (and controllers) for MVS, distinct from the VM370 3330 strings/controllers (MVS 3330 packs were *NEVER* to be mounted on a VM370 string).

One day, an operational goof resulted in an MVS 3330 pack being mounted on a VM370 string. Within a few minutes, the datacenter was getting irate calls from users about CMS response time going into the ******. It turned out that the nature of MVS use of CKD channel programs was causing "lockup" of the VM370 controller (severely degrading CMS application throughput and response). There was a demand by the VM370 group for the MVS operator to immediately move the pack. The initial response was that they would wait until 2nd shift to move the pack (it was still mid-morning).

The VM370 group had a very high performance VS1 system that had been extensively optimized for running under VM. They mounted a virtual VS1 3330 on an MVS string and started up an application that (also) did a large number of multi-track searches. This just about brought MVS throughput to its knees ... and MVS operations immediately agreed to swap the positions of the errant MVS 3330 and the virtual VS1 3330.

These and other incidents have added to the jokes about the horrendously bad MVS/TSO interactive response ... that TSO response is so bad, users don't even notice the enormous response degradation contributed by CKD operation.

A similar, but different, incident occurred with the 3033 service in the bldg. 15 disk test lab. The 3033 was there primarily for testing with development 3880 controllers and 3380 drives ... but was seeing increased time-sharing use ... and had two strings of 3330s (16 drives) for that purpose ... the bldg. 15 3033 service is also mentioned in conjunction with moving the new disk head "air bearing" simulation from the bldg. 28 370/195 across the street to the bldg. 15 3033
https://www.garlic.com/~lynn/2011.html#16

One monday morning, I got an irate call from bldg. 15 asking what I had done to their VM370 service over the weekend. I replied that I hadn't touched the system and asked what they had done. They claimed they hadn't touched the system. After some amount of investigation, it turned out that they had swapped the 3830 controller (on the two 3330 strings) for a new development 3880 controller.

The 3830 had a "fast" horizontal microcode engine. The 3880 had a much slower vertical microcode (JIB-prime) engine for (in part for ease of implementing new) "control" function and a separate high-speed hardware "data" path (supporting 3mbyte/sec transfers). Now there was requirement that 3880 be within plus/minus performance of 3830 ... and there was all sorts of situations where the 3880 couldn't meet that requirement. In order to make the 3880 appear like is was close to 3830 throughput, it would present end-of-operation "early" to the channel/processor; depending on slow operating system interrupt processing to mask that the controller was still doing something else. However, the superfast, super reliable VM370 I had in the engineering labs was processing the interrupt and coming back trying to start the next operation ... before the controller had completed the previous operation. Hitting the 3880 controller with next operation redrive, resulted in reflecting SM+BUSY (controller busy) ... which required unscheduling the redrived operation and requeueing. Because the 3880 controller had presented controller busy, when it finally was free, it then had to reflect CUE interrupt. At this point, VM370 could retry the operation a 2nd time. This was resulting in 30percent degradation in overall throughput and significant degradation in response (which is much more sensitive to the added latency and overhead).

Fortunately, this happened while 3880 FCS (first customer ship) was still six months off ... which allowed time for some additional fixes to the 3880.

On the downside, the POK favorite son operating system didn't have a similar level of "experience" with the new 3880 controller & 3380 disks. This old email mentions a standard regression bucket of 57 3380 hardware errors, where MVS failed in 100% of the cases, and in 66% of the cases there was no indication of what forced the re-IPL
https://www.garlic.com/~lynn/2007.html#email801015
in this post
https://www.garlic.com/~lynn/2007.html#2

--
virtualization experience starting Jan1968, online at home since Mar1970

CKD DASD

From: lynn@garlic.com (Lynn Wheeler)
Date: 09 Jan, 2011
Subject: CKD DASD
Blog: IBM Historic Computing
re:
https://www.garlic.com/~lynn/2011.html#35 CKD DASD
https://www.garlic.com/~lynn/2011.html#36 CKD DASD

Another issue with the slow 3880 controller involved 3090 manufacturing cost. Because of the slow jib-prime processing, channel busy was significantly increased (even with 3mbyte/sec transfer). As a result, the 3090 group decided that to achieve sufficient system throughput, the number of channels would have to be increased. To accommodate the extra channels, an extra TCM had to be added (six to seven?), increasing the manufacturing costs. There were then jokes that the cost of the extra TCM per 3090 should be charged off against the disk division bottom line (because it was required because of the slow 3880 controller).

For topic drift regarding manufacturing cost ... this Future System related article talks about the enormous number of circuits per MIP (compared to clone competition)
http://www.jfsowa.com/computer/memo125.htm

Related to the slow 3880 controller and its workaround of presenting end-of-operation to the channel before the controller was actually finished ... there was the problem that the 3880 might subsequently discover certain kinds of errors. Since it had already presented end-of-operation, it tried presenting an independent (asynchronous) "unit check" (not related to any operation). This violated channel architecture. In the escalation/resolution conference calls with the POK channel people, the disk division asked that I be on all the calls. I asked why, and the explanation was that the majority of senior disk engineers (with institutional knowledge of channel architecture) had managed to depart (mostly for startups) by the end of the 70s. In any case, the resolution was to "save" the unit check and present it as CC=1, CSW stored, when the next operation was started.

There was also a significant issue with translating the channel programming paradigm (which requires real storage addresses) to a virtual memory environment. cp67 addressed it for virtual machines by creating a "copy" of the virtual machine's channel programs ... substituting real addresses in the CCW copy for the virtual addresses. In cp67, this was performed by a routine called CCWTRANS.

For the initial morph of the POK favorite son OS (MVT) to VS2, initially a stub of a single virtual address space was created (SVS), laying MVT out into what appeared to be a 16mbyte address space, along with a little bit of code to handle page fault interrupts. The big issue was that the os/360 programming paradigm had application programs (and/or access method libraries running as part of the application program) creating the channel programs and then executing an SVC0 (i.e. EXCP or EXecute Channel Program). In the transition to virtual memory, all these channel programs contained virtual memory addresses. So about the biggest effort in this initial conversion was the complexity of creating the channel program copies. This was done by adapting a copy of cp67 CCWTRANS for VS2 (EXCP processing).
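
a minimal sketch of the idea (nothing like the real CCWTRANS; the page map below is a toy stand-in, and a real translation also has to fix the pages in storage and split any CCW whose data area crosses a page boundary ... contiguous virtual, discontiguous real ... using data chaining):

  /* sketch of the channel-program translation idea: copy the guest's
     CCWs, substituting real for virtual data addresses */
  #include <stdint.h>
  #include <stdio.h>

  #define PAGE 4096u

  typedef struct {          /* 360 CCW fields, unpacked for readability */
      uint8_t  op;          /* command code */
      uint32_t addr;        /* 24-bit data address */
      uint8_t  flags;       /* e.g. 0x40 = command chaining */
      uint16_t count;
  } ccw_t;

  /* toy virtual->real page map (a real one comes from the page tables) */
  static uint32_t v2r(uint32_t vaddr)
  {
      static const uint32_t pagemap[] = { 7, 3, 12, 9 };   /* made up */
      return pagemap[(vaddr / PAGE) % 4] * PAGE + vaddr % PAGE;
  }

  static void ccwtrans(const ccw_t *vprog, ccw_t *rprog, int n)
  {
      for (int i = 0; i < n; i++) {
          rprog[i] = vprog[i];                /* copy the CCW ...      */
          rprog[i].addr = v2r(vprog[i].addr); /* ... with real address */
      }
  }

  int main(void)
  {
      ccw_t vprog[2] = { { 0x02, 0x1000, 0x40, 512 },
                         { 0x02, 0x2000, 0x00, 512 } };
      ccw_t rprog[2];
      ccwtrans(vprog, rprog, 2);
      for (int i = 0; i < 2; i++)
          printf("CCW %d: virt %06X -> real %06X\n", i,
                 (unsigned)vprog[i].addr, (unsigned)rprog[i].addr);
      return 0;
  }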

Also not specific to CKD DASD ... but frequently equated with it ... is the half-duplex, sequential channel programming paradigm.

As processing became faster & faster ... end-to-end latencies became more significant. The end-to-end, sequential, half-duplex operation became a major throughput bottleneck. This recent post highlights that the large number of mainframe "channels" is more along the lines of "featuring a bug" ... i.e. the huge number of channels was required (in part) to compensate for the increasing inefficiency of the end-to-end sequential nature of the channel paradigm.
https://www.garlic.com/~lynn/2010f.html#18 What was the historical price of a P/390?

Other platforms, to mask the increasing relative latency problems, adopted asynchronous I/O conventions (shipping down I/O packages, effectively for remote execution). This significantly reduced the end-to-end synchronous "busy" latency/overhead ... while supporting large numbers of concurrent operations ... w/o requiring a large number of independent "channels".
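
back-of-envelope version of the difference (latency and CCW count invented; the point is only that the synchronous busy cost scales with the number of end-to-end interactions per operation):

  /* per-operation busy time: a synchronous handshake per CCW vs
     shipping the whole request as one package for remote execution */
  #include <stdio.h>

  int main(void)
  {
      double rtt  = 0.05;  /* ms end-to-end handshake latency (made up) */
      int    ccws = 10;    /* synchronous interactions per channel program */
      printf("half-duplex sequential: %.2f ms busy per op\n", ccws * rtt);
      printf("shipped I/O package:    %.2f ms busy per op\n", rtt);
      return 0;
  }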

--
virtualization experience starting Jan1968, online at home since Mar1970

Looking for a real Fortran-66 compatible PC compiler (CP/M or DOSor Windows

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Looking for a real Fortran-66 compatible PC compiler (CP/M or  DOSor Windows
Newsgroups: alt.folklore.computers
Date: Mon, 10 Jan 2011 11:25:55 -0500
Morten Reistad <first@last.name> writes:
The receiving local clearinghouse transfers the money from the ABN account to their account at ABN, and pays out from their account at the local savings bank to Barb's account.

Then the local clearinghouse has to collect from ABN and pay to the local bank, but this stuff is done in batches. It may even involve a courier with a check driving from ABN's office in New York to the clearingshouse's office in the same city.


old story I was told in the 60s about a very small town bank in the rockies ... with substantial commercial accounts from large corporations; apparently it averaged an extra day or two clearing compared to any other bank in the country (required physical transport of checks; presumably the attraction for the corporate payors was the extra float).

past post/reference that fedex's bread & butter started out with overnight check distribution ... planes carrying checks from all over the country converge on a large check sorting facility ... the checks go into the sorters ... come out of the sorters ... and are then reloaded on the planes for their destinations.
https://www.garlic.com/~lynn/2007i.html#0 John W. Backus, 82, Fortran developer, dies

in the late 90s ... there were pilot trials with "electronic checks" (not check21 or check imaging) ... with "digital signatures". A choice for the banks was whether to process the "electronic checks" thru the debit-card network or the ACH network. It turns out that ACH processing tended to add an extra day or two to clearing (compared to debit card) ... with the banks getting the benefit of the money (which has been debited from the payor account but not yet credited to the payee account).

much has been made in other countries about having moved to same-day clearing (sometimes requiring settlement more than once a day). part of this is that US financial institutions take significant profit from the payment transaction infrastructure ... in the past avg. around 40% of bottom line (60% for some large institutions, compared to less than 10% for european institutions) ... and are loath to relinquish the additional revenue.

--
virtualization experience starting Jan1968, online at home since Mar1970

The FreeWill instruction

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The FreeWill instruction.
Newsgroups: comp.arch
Date: Mon, 10 Jan 2011 11:44:11 -0500
an enormous amount has been done in the past decade using MRI to study how the brain operates (identifying what parts of the brain are active for what kinds of operations) ... adding to understanding from studies of brain-damaged individuals and other techniques.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2585420/

there are frequently one or two new (MRI) studies published each week

a frequent theme in the articles is that (especially maturing) brain structure adaptation (with new interconnects) makes common stimulus/response operations more energy efficient (using less oxygen).

I just finished reading (on kindle) "Iconoclast" ... which goes into some detail regarding the mechanics of sensing, interpreting what is sensed, and deciding on a response
https://www.amazon.com/Iconoclast-Neuroscientist-Reveals-Think-Differently/dp/1422115011

this is sometimes interpreted within the context of Boyd's OODA-loops (aka observe-orient-decide-act ... I had sponsored Boyd's briefings at IBM in the 80s).

--
virtualization experience starting Jan1968, online at home since Mar1970

Julian Assange - Hero or Villain

From: lynn@garlic.com (Lynn Wheeler)
Date: 10 Jan, 2011
Subject: Julian Assange - Hero or Villain
Blog: Information Security
something in the obfuscation and misdirection theme

Redirecting the Furor Around WikiLeaks
http://www.technewsworld.com/story/71613.html

past posts in thread:
https://www.garlic.com/~lynn/2011.html#24 Julian Assange - Hero or Villain
https://www.garlic.com/~lynn/2011.html#25 Julian Assange - Hero or Villain
https://www.garlic.com/~lynn/2011.html#26 Julian Assange - Hero or Villain
https://www.garlic.com/~lynn/2011.html#31 Julian Assange - Hero or Villain

--
virtualization experience starting Jan1968, online at home since Mar1970

Looking for a real Fortran-66 compatible PC compiler (CP/M or DOSor Windows

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Looking for a real Fortran-66 compatible PC compiler (CP/M or DOSor Windows
Newsgroups: alt.folklore.computers
Date: Mon, 10 Jan 2011 12:35:56 -0500
blp@cs.stanford.edu (Ben Pfaff) writes:
Why don't the banks just deal with each other directly? Or why don't the local clearinghouses just deal with each other directly?

re:
https://www.garlic.com/~lynn/2011.html#38 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOSor Windows

it grew up from an NxN (directly connected) physical problem (in the US there were 30,000+ financial institutions) ... instead of a gateway/backbone-like infrastructure, which tends to radically reduce the number of direct interconnections
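
the arithmetic behind that (a fully-meshed interconnect vs a backbone-style one, for the 30,000+ institutions figure above):

  /* links needed: full mesh vs one backbone connection per institution */
  #include <stdio.h>

  int main(void)
  {
      long long n = 30000;
      printf("full mesh: %lld bilateral links\n", n * (n - 1) / 2); /* ~450M */
      printf("backbone:  %lld links (one per institution)\n", n);
      return 0;
  }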

credit cards somewhat grew up that way ... the brands providing the backbone: merchants being aggregated at a merchant acquiring financial institution, and the acquiring financial institutions connecting to the various brand "backbones"; similarly, individual card holders to issuing financial institutions, and the issuing financial institutions to the various brand "backbones".

in the days before internet and consolidation, the brand backbones provided the backbone/interconnect for the 30,000+ financial institutions (acquirers aggregating the millions of merchants, and issuers aggregating the hundreds of millions of card holders). the brands built up a significant financial operation on the fees charged for use of their backbone. in the past decade, with a combination of consolidation and outsourcing, there was a bit of friction with the brands ... with 90% of all transactions (both issuing and acquiring) being performed in six datacenters ... that were all directly interconnected. The processors wanted to bypass transmission thru the brands (and the brand backbone network transaction fee).

some of the (physical) check clearinghouses would do a pre-sort, with checks handled within the local physical area cleared locally ... and the rest going onto planes for national clearing (at a different per-check cost).
https://www.garlic.com/~lynn/2011.html#38 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows, doesn't matter)

There are also different rules and costs between settlement through an ACH clearinghouse vis-a-vis clearing through FEDWIRE and SWIFT.

ACH
https://en.wikipedia.org/wiki/Automated_Clearing_House

NACHA
https://en.wikipedia.org/wiki/NACHA-The_Electronic_Payments_Association

FEDWIRE
https://en.wikipedia.org/wiki/Fedwire

SWIFT
https://en.wikipedia.org/wiki/Society_for_Worldwide_Interbank_Financial_Telecommunication

for other drift ... old response to NACHA RFI (done for us by somebody that was NACHA member):
https://www.garlic.com/~lynn/nacharfi.htm

and old NACHA pilot results for the above (gone 404 but lives on at wayback machine):
https://web.archive.org/web/20020122082440/http://internetcouncil.nacha.org/
and
https://web.archive.org/web/20070706004855/http://internetcouncil.nacha.org/News/news.html

some FEDWIRE trivia drift (see "application" section at bottom of page):
https://en.wikipedia.org/wiki/IBM_Information_Management_System

the ACH wiki mentions that credit card transactions go thru a different network. part of renaming ACP (airline control program) to TPF (transaction processing facility) was that some number of non-airline operations were using ACP for their backbone:
https://en.wikipedia.org/wiki/Transaction_Processing_Facility

for even more drift ... above mentions Amadeus
https://en.wikipedia.org/wiki/Amadeus_%28computer_system%29

recent post mentioning my wife's short stint as chief architect for Amadeus (earlier in another part of this thread):
https://www.garlic.com/~lynn/2011.html#17 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows, doesn't matter)

--
virtualization experience starting Jan1968, online at home since Mar1970

Looking for a real Fortran-66 compatible PC compiler (CP/M or DOSor Windows

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Looking for a real Fortran-66 compatible PC compiler (CP/M or DOSor Windows
Newsgroups: alt.folklore.computers
Date: Mon, 10 Jan 2011 13:09:46 -0500
Ahem A Rivet's Shot <steveo@eircom.net> writes:
However looking at it from a customer perspective, a customer of TRAB can very likely put their card into the machine outside a branch of VBB and draw cash which may well be immediately debited from their account in TRAB. They can then walk into VBB and pay this money into a VBB customer's account to which it will be immediately credited. It is reasonable (from an external viewpoint) to wonder why an automated mechanism can't go at least this fast for transfers between an account in TRAB and another in VBB.

re:
https://www.garlic.com/~lynn/2011.html#38 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows, doesn't matter)
https://www.garlic.com/~lynn/2011.html#41 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows, doesn't matter)

especially in the US ... there was significant financial advantage to the financial institutions from the settlement delays ... and it has been difficult for them to give up that revenue as technology advances (disruptive in terms of amount of earnings).

in europe part of the change related to SEPA
https://en.wikipedia.org/wiki/Single_Euro_Payments_Area

the issue has shown up in a large number of different ways. in recent news it shows up in the battles over regulating "interchange fees" for both credit and debit transactions.

early part of this century, there were a number of "safe" payment products developed for the internet, finding high acceptance among all the major internet merchants. merchants have been conditioned for decades that "interchange fees" (what merchants are charged for transaction processing) carry a large portion proportional to fraud (safer products having significantly lower fees than those with much higher fraud rates). the whole thing collapsed when the financial institutions decided that they would put a surcharge (for the "safe" products) on top of the highest (internet/fraud) rate (merchants, with cognitive dissonance, having anticipated a much lower interchange fee). "safe" payments with ubiquitous internet connectivity could result in very disruptive change, "commoditizing" payments ... cutting the associated revenue by 90% (for US institutions that could mean 1/3rd or more of their bottom line).

there is a separate issue involving the large number of backend systems involved in all kinds of settlement. the early transition to electronic involved creating the start of the transaction electronically ... but leaving it for final processing in the existing "overnight batch" settlement. In the 90s, billions were spent by a large number of (mostly financial) institutions on redoing these batch systems. The issue was that with increased volumes ... there was a need to do significantly more work during the overnight batch window ... and in many cases, globalization was also shrinking the elapsed time available for that window.

It turns out that the re-engineering looked at using parallelization technology and large numbers of "killer micros" to implement straight-through processing (transactions go completely through to completion, eliminating the overnight batch processing). The problem was that no speeds&feeds work had been done with the new technology ... and it wasn't until late in the pilots that they found the new technology had a factor of 100 times increase in overhead (compared to overnight batch) ... totally swamping any anticipated ability to do straight-through processing (and/or even handle existing workloads). That debacle has cast a dark shadow over the industry. misc. past posts mentioning the straight-through processing debacle:

separate from this ... "settlement" represents systemic risk in international finance ... where something goes wrong and it cascades. existing settlement infrastructures have tried to put in safeguards to prevent cascading problems from taking down the financial infrastructure.

re:
https://www.garlic.com/~lynn/2007u.html#19 Distributed Computing
https://www.garlic.com/~lynn/2007u.html#44 Distributed Computing
https://www.garlic.com/~lynn/2007u.html#61 folklore indeed
https://www.garlic.com/~lynn/2007v.html#19 Education ranking
https://www.garlic.com/~lynn/2007v.html#64 folklore indeed
https://www.garlic.com/~lynn/2007v.html#81 Tap and faucet and spellcheckers
https://www.garlic.com/~lynn/2008b.html#3 on-demand computing
https://www.garlic.com/~lynn/2008b.html#74 Too much change opens up financial fault lines
https://www.garlic.com/~lynn/2008d.html#30 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008d.html#31 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008d.html#87 Berkeley researcher describes parallel path
https://www.garlic.com/~lynn/2008d.html#89 Berkeley researcher describes parallel path
https://www.garlic.com/~lynn/2008g.html#55 performance of hardware dynamic scheduling
https://www.garlic.com/~lynn/2008h.html#50 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008h.html#56 Long running Batch programs keep IMS databases offline
https://www.garlic.com/~lynn/2008p.html#26 What is the biggest IT myth of all time?
https://www.garlic.com/~lynn/2008p.html#30 Automation is still not accepted to streamline the business processes... why organizations are not accepting newer technolgies?
https://www.garlic.com/~lynn/2008p.html#35 Automation is still not accepted to streamline the business processes... why organizations are not accepting newer technolgies?
https://www.garlic.com/~lynn/2008r.html#7 If you had a massively parallel computing architecture, what unsolved problem would you set out to solve?
https://www.garlic.com/~lynn/2009.html#87 Cleaning Up Spaghetti Code vs. Getting Rid of It
https://www.garlic.com/~lynn/2009c.html#43 Business process re-engineering
https://www.garlic.com/~lynn/2009d.html#14 Legacy clearing threat to OTC derivatives warns State Street
https://www.garlic.com/~lynn/2009f.html#55 Cobol hits 50 and keeps counting
https://www.garlic.com/~lynn/2009h.html#1 z/Journal Does it Again
https://www.garlic.com/~lynn/2009h.html#2 z/Journal Does it Again
https://www.garlic.com/~lynn/2009i.html#21 Why are z/OS people reluctant to use z/OS UNIX?
https://www.garlic.com/~lynn/2009l.html#57 IBM halves mainframe Linux engine prices
https://www.garlic.com/~lynn/2009m.html#22 PCI SSC Seeks standard for End to End Encryption?
https://www.garlic.com/~lynn/2009m.html#81 A Faster Way to the Cloud
https://www.garlic.com/~lynn/2009o.html#81 big iron mainframe vs. x86 servers
https://www.garlic.com/~lynn/2009q.html#67 Now is time for banks to replace core system according to Accenture
https://www.garlic.com/~lynn/2009q.html#68 Now is time for banks to replace core system according to Accenture
https://www.garlic.com/~lynn/2010.html#77 Korean bank Moves back to Mainframes (...no, not back)
https://www.garlic.com/~lynn/2010b.html#16 How long for IBM System/360 architecture and its descendants?
https://www.garlic.com/~lynn/2010c.html#8 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010c.html#78 SLIGHTLY OT - Home Computer of the Future (not IBM)
https://www.garlic.com/~lynn/2010g.html#37 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2010h.html#47 COBOL - no longer being taught - is a problem
https://www.garlic.com/~lynn/2010i.html#41 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010k.html#3 Assembler programs was Re: Delete all members of a PDS that is allocated
https://www.garlic.com/~lynn/2010l.html#14 Age
https://www.garlic.com/~lynn/2010m.html#13 Is the ATM still the banking industry's single greatest innovation?
https://www.garlic.com/~lynn/2010m.html#37 A Bright Future for Big Iron?
https://www.garlic.com/~lynn/2011.html#19 zLinux OR Linux on zEnterprise Blade Extension???

--
virtualization experience starting Jan1968, online at home since Mar1970

CKD DASD

From: lynn@garlic.com (Lynn Wheeler)
Date: 10 Jan, 2011
Subject: CKD DASD
Blog: IBM Historic Computing
re:
https://www.garlic.com/~lynn/2011.html#35 CKD DASD
https://www.garlic.com/~lynn/2011.html#36 CKD DASD
https://www.garlic.com/~lynn/2011.html#37 CKD DASD

POK==Poughkeepsie ... high-end mainframe computers (165, 168, 3033, 3090; MVT, VS2/SVS, VS2/MVS ... later just MVS); at one time some overflow to KGN==Kingston (155, 158)

END==Endicott mid-range 145, 148, 4341, etc

some of this can be seen in this old post listing some of the new nodes added to the internal network during 1983 (reflected in the node names)
https://www.garlic.com/~lynn/2006k.html#3

There were all sorts of internal politics that went on.

In the wake of the Future System demise and the mad rush to get products back in the pipeline ... the 303x activity was overlapped with the "XA" effort (which was going to take possibly 7-8 yrs, eventually referred to as "811"). In any case, the mvs/xa effort managed to convince the corporation to kill off the vm370 product, shut down the vm370 development group and move all the people to POK (or otherwise, it was claimed, mvs/xa wouldn't be able to make its ship schedule). Endicott managed to save the vm370 product mission ... but had to constitute a group from scratch.

"East Fishkill" manufacturing plant (south of POK and reported to POK chain of command) at one point was producing some critical components for 4341 manufacturing. POK was feeling lots of competitive threats from the mid-range 4341 (higher performance and lower cost than 3031, clusters of 4341 had higher aggregate performance and lower cost than 3033, 4341 was also leading edge of distributed computing ... siphoning lots of processing from high-end datacenters). Folklore is that at one point POK management directed East Fishkill to cut in half the allocation of component for 4341 (in theory cutting in half 4341 shipments, somewhat analogous to auto import quotas from the same period).

In the 70s, the dasd engineering lab had tried doing testing under MVS (to be able to have multiple concurrent development tests) ... but found MVS had a 15min MTBF in that environment. I then did a rewrite of the input/output supervisor to never fail/hang ... as part of being able to do multiple, concurrent, on-demand testing (a significant productivity improvement compared to 7x24, scheduled, around-the-clock, stand-alone testing). I wrote up the results in an internal document. I then got a call from the POK MVS group ... which I thought was going to be to talk about all the fixes and enhancements. It turned out what they really appeared to want was to get me fired ... for mentioning the MVS 15min MTBF.

--
virtualization experience starting Jan1968, online at home since Mar1970

CKD DASD

From: lynn@garlic.com (Lynn Wheeler)
Date: 10 Jan, 2011
Subject: CKD DASD
Blog: IBM Historic Computing
re:
https://www.garlic.com/~lynn/2011.html#35 CKD DASD
https://www.garlic.com/~lynn/2011.html#36 CKD DASD
https://www.garlic.com/~lynn/2011.html#37 CKD DASD
https://www.garlic.com/~lynn/2011.html#43 CKD DASD

in melinda's history of vm (virtual machines) ... there is early discussion about the science center trying to justify its work on 360 virtual memory ... the science center had hoped to be the prime focus of the system bid for Project MAC (which eventually went to GE for Multics). TSS/360 was to be the prime focus for doing paged virtual memory (for the 360/67). the comment had something to do with Atlas (in the UK) having done demand paging (virtual memory) and it being "known to not work well".
http://www.leeandmelindavarian.com/Melinda/
http://www.leeandmelindavarian.com/Melinda#VMHist

In any case, the science center did cp40, a virtual machine system with virtual memory on a specially modified 360/40 (with a "blaauw box" providing the virtual memory). The science center had originally wanted a 360/50 for the project but had to settle for a 40 because all the spare 360/50s were going to the FAA (for the air traffic control system). some past refs
https://www.garlic.com/~lynn/2000f.html#59
https://www.garlic.com/~lynn/2001b.html#67
https://www.garlic.com/~lynn/2002c.html#44
https://www.garlic.com/~lynn/2003f.html#10
https://www.garlic.com/~lynn/2004c.html#11
https://www.garlic.com/~lynn/2005k.html#44
https://www.garlic.com/~lynn/2007i.html#14
https://www.garlic.com/~lynn/2007v.html#0
https://www.garlic.com/~lynn/2010e.html#79

when the standard 360/67 with virtual memory (officially for tss/360) became available, cp40 morphed into cp67. It was installed at the univ. (where i was an undergraduate) in jan68 (lots of univs had been talked into ordering 360/67s for tss/360; tss/360 ran into all sorts of trouble and most installations never put it into production). As an undergraduate I significantly rewrote the demand paging algorithms and much of the rest of the system ... which I would claim addressed numerous of those problems.

there was other work in the area going on that shows up in the academic literature of the period ... work that differed from what I was doing ... and the difference shows up a decade later. In the early 80s, somebody was trying to get a stanford phd basically in the area that I had done in the late 60s ... and granting the phd was being vigorously opposed by forces behind the alternative approach. I was asked to step in because I actually had a performance comparison of the two approaches on the same hardware and system ... showing what I had done was significantly better. past post with more detail:
https://www.garlic.com/~lynn/2006w.html#46

IBM initially announced 370 w/o paged virtual memory support ... but eventually virtual memory was announced for all 370 processors ... after some amount of the full virtual memory architecture was dropped in order to address serious implementation schedule problems for the 370/165. The other processors already had the full implementation, and at least vm370 already had an implementation using the full architecture (the other processors had to drop the additional features, and vm370 had a big problem reworking for just the subset). old post with some more detail
https://www.garlic.com/~lynn/2006i.html#23

upthread discusses some of the change from MVT to VS2/SVS (the initial pok favorite son support for virtual memory) on its way to VS2/MVS. a very big problem was that os/360 conventions made extensive use of pointer-passing APIs ... which required kernel services and applications to reside in the same address space (even in the transition to MVS). As a result, application, system services and kernel code all had to (still) occupy the same (virtual) address space ... and storage protection keys were still required to protect the kernel image (in the same address space) from application code (paged virtual memory did little to eliminate storage protect keys for the POK favorite son operating system). recent post discussing the subject in a little more detail
https://www.garlic.com/~lynn/2010p.html#21

... besides the extensive pointer-passing API convention that resulted in enormous problems mapping MVT into a virtual memory environment (everything really wanted to still occupy the same address space) ... there was a little nit in the page replacement algorithm; demand paging typically requires selecting an existing page in memory for replacement. LRU ... least recently used ... basically assumes that the virtual page that has been used least recently is the least likely to be used in the future. Early VS2/SVS simulation work showed that if a non-changed page is selected for replacement (the copy in memory and the copy on disk are the same), there is less work (selecting a "changed" page for replacement requires the replaced page to first be written to disk). I argued with them that such a change would unnaturally pervert the principle behind LRU. It wasn't until well into the MVS release cycles that they realized that they were selecting non-changed, high-use (typically linkpack) executable pages (for replacement) before selecting lower-use, application (private, changed) data pages.
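
a toy simulation of the two policies (reference pattern and sizes invented for illustration): two high-use read-only pages stand in for linkpack executables; cycling low-use changed pages stand in for application data. straight LRU keeps the executables resident and pays one page-out per data fault; "prefer non-changed" replaces the high-use executables (the only non-changed candidates), after which nearly every executable touch faults ... several times the total faults, with no real page-out savings:

  /* toy demand-paging simulation: straight LRU vs the VS2/SVS
     "prefer a non-changed page" replacement hack */
  #include <stdio.h>
  #include <string.h>

  #define FRAMES 4
  #define NPAGES 6   /* pages 0,1 read-only "linkpack"; 2..5 changed data */

  static int resident[NPAGES], changed[NPAGES], lastuse[NPAGES];
  static int clockt, faults, writes;

  static void touch(int page, int modifies, int prefer_nonchanged)
  {
      clockt++;
      if (!resident[page]) {                       /* page fault */
          int lru = -1, lru_nc = -1, used = 0;
          for (int p = 0; p < NPAGES; p++) {
              if (!resident[p]) continue;
              used++;
              if (lru < 0 || lastuse[p] < lastuse[lru]) lru = p;
              if (!changed[p] && (lru_nc < 0 || lastuse[p] < lastuse[lru_nc]))
                  lru_nc = p;
          }
          if (used >= FRAMES) {                    /* must replace */
              int victim = (prefer_nonchanged && lru_nc >= 0) ? lru_nc : lru;
              if (changed[victim]) writes++;       /* page-out first */
              resident[victim] = changed[victim] = 0;
          }
          faults++;
          resident[page] = 1;
      }
      lastuse[page] = clockt;
      if (modifies) changed[page] = 1;
  }

  static void run(int prefer, const char *name)
  {
      memset(resident, 0, sizeof resident);
      memset(changed, 0, sizeof changed);
      clockt = faults = writes = 0;
      for (int cycle = 0; cycle < 1000; cycle++)
          for (int d = 2; d < NPAGES; d++) {       /* 0,1,0,1,d ... */
              touch(0, 0, prefer); touch(1, 0, prefer);
              touch(0, 0, prefer); touch(1, 0, prefer);
              touch(d, 1, prefer);
          }
      printf("%-20s faults %6d   page-writes %6d\n", name, faults, writes);
  }

  int main(void)
  {
      run(0, "straight LRU:");
      run(1, "prefer non-changed:");
      return 0;
  }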

--
virtualization experience starting Jan1968, online at home since Mar1970

CKD DASD

From: lynn@garlic.com (Lynn Wheeler)
Date: 10 Jan, 2011
Subject: CKD DASD
Blog: IBM Historic Computing
re:
https://www.garlic.com/~lynn/2011.html#37 CKD DASD
https://www.garlic.com/~lynn/2011.html#44 CKD DASD

... oh, and with respect to apparently deeply offending the MVS organization by mentioning the MVS 15min MTBF ... I was told by the powers-that-be that the MVS group was prepared to non-concur with ever giving me any corporate award for significantly improving dasd development productivity (with the never-fail I/O supervisor and being able to do "on-demand", multiple concurrent testing).

also brought up in the above references ... was the MVT problem with "subsystems" ... in the transition from SVS to MVS ... while a kernel image continued to occupy every application address space (retaining the requirement for storage protection), "subsystems" moved to their own address spaces. This enormously complicated the application "pointer-passing" subsystem calls ... aka how does a subsystem in a different address space access parameter lists and return data that are in the application address space.

The solution was the CSA kludge ... which, similar to the kernel image, occupied every address space (and also required storage protect) ... and was the place to stuff parameter lists and data passed back and forth between applications and subsystems. MVS started out with an individual 16mbyte address space for every application, but gave 8mbytes to the kernel image and 1mbyte to CSA. However, CSA grows somewhat in proportion to concurrent applications and subsystems ... as systems got bigger, installations were seeing 4&5mbyte CSAs (leaving only 3-4mbytes for applications) ... and in some cases CSA threatened to grow to 6mbytes (leaving only 2mbytes for the application ... besides the issue of still needing hardware storage protect).
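
the squeeze in round numbers (16mbyte address space, 8mbyte kernel image, growing CSA):

  /* kernel image + CSA occupy every address space; what's left
     for the application shrinks as CSA grows */
  #include <stdio.h>

  int main(void)
  {
      int total = 16, kernel = 8;                  /* mbytes */
      for (int csa = 1; csa <= 6; csa++)
          printf("CSA %dmb -> %dmb left for the application\n",
                 csa, total - kernel - csa);
      return 0;
  }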

a few past posts mentioning problems with CSA kludge/bloat
https://www.garlic.com/~lynn/2009d.html#54
https://www.garlic.com/~lynn/2009e.html#39
https://www.garlic.com/~lynn/2009f.html#50
https://www.garlic.com/~lynn/2009g.html#71
https://www.garlic.com/~lynn/2009h.html#33
https://www.garlic.com/~lynn/2009h.html#72
https://www.garlic.com/~lynn/2009n.html#61
https://www.garlic.com/~lynn/2009s.html#1
https://www.garlic.com/~lynn/2010c.html#41
https://www.garlic.com/~lynn/2010f.html#14

--
virtualization experience starting Jan1968, online at home since Mar1970

What do you think about fraud prevention in the governments?

From: lynn@garlic.com (Lynn Wheeler)
Date: 11 Jan, 2011
Subject: What do you think about fraud prevention in the governments?
Blog: Financial Crime Risk, Fraud and Security
Two years ago at the annual economist meeting, one of the news stations broadcast a roundtable ... referring to congress as the most corrupt institution on earth. they concentrated on the enormous money spent lobbying for special tax code provisions (resulting in enormous corruption). The claim was that the current situation has (also) resulted in a 65,000+ page special-provision tax code, which costs something like 6% of GDP.

The proposal was to convert to a "flat rate" tax code ... which would eliminate an enormous amount of lobbying and corruption, result in a 400-500 page tax code, and yield a 6% GDP productivity gain (any downside from losing a specific special provision would be more than offset by the 6% GDP productivity gain).

there have been comments that the only possibly effective part of sarbanes-oxley was the section on informants (much of the rest was considered a full-employment gift to the audit companies)

in the congressional hearings into madoff, the person that had been attempting (unsuccessfully) for a decade to get SEC to do something about Madoff testified that tips turned up 13 times more fraud than audits (and that while there may be a need for new regulation ... it was much more important to change the culture to one of visibility and transparency).

GAO, possibly believing (also) that SEC wasn't doing anything about public company fraudulent financial filings ... started looking at filings and generating reports about the uptick in public company fraudulent financial filings (even after SOX; one might be sarcastic and say SOX even encouraged fraudulent filings).

congressional hearings into the rating agencies ... had testimony that the rating agencies were "selling" triple-A ratings on toxic CDOs (when both the sellers and the rating agencies knew they weren't worth triple-A; this played a pivotal role in the economic bubble/mess). SOX also had a provision for SEC to look into the rating agencies ... but (again) nothing seemed to have been done. Some commentator made the observation that the rating agencies may be able to blackmail the gov. and avoid any punitive action with the threat of downgrading the gov. credit rating.

--
virtualization experience starting Jan1968, online at home since Mar1970

CKD DASD

From: lynn@garlic.com (Lynn Wheeler)
Date: 11 Jan, 2011
Subject: CKD DASD
Blog: IBM Historic Computing
re:
https://www.garlic.com/~lynn/2011.html#37 CKD DASD
https://www.garlic.com/~lynn/2011.html#44 CKD DASD
https://www.garlic.com/~lynn/2011.html#45 CKD DASD

Actually, starting with os/360 release mft11 (as an undergraduate at the univ., given responsibility for system support) ... i started doing "hand builds" of os/360 releases. I would take the stage1 output ... i.e. stage2 ... re-organize lots of its ordering, add lots of additional job cards (so it was no longer a simple single job) and run it in the standard jobstream.

The objective of the careful ordering of stage2 sysgen statements was careful ordering of files and PDS members on disk (optimizing disk arm seek). This represented about a 2/3rds improvement in elapsed time for the typical univ. workload (nearly a three times increase in throughput).

This is old post with part of presentation at fall '68 SHARE meeting in atlantic city
https://www.garlic.com/~lynn/94.html#18

it mentions the improvement I got with careful sysgen for mft14, the improvement I got in cp67 by rewriting lots of the code, and mft14 performance both on "bare metal" and in a cp67 virtual machine. The above also mentions that as a system "aged" and IBM maintenance/APARs were applied ... throughput would degrade (because of the way PDS member replacement occurs). If the transition to the next release (system build) was far enough off, I might have to rebuild the current system to re-establish optimal disk arm seek.

A significant advance came with release 15/16, being able to specify the VTOC cylinder. The VTOC is the highest-use location on disk and had been on cylinder 0 ... so I had placed high-use data starting next to the VTOC (decreasing use at increasing cylinder locations). With release 15/16, I could place the VTOC in the middle of the disk and place high-use data radiating out from both sides of the VTOC.
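
the arm-travel arithmetic (assuming a 404-cylinder 3330-style pack and, as a simplification, uniformly distributed accesses; real reference patterns were skewed toward the VTOC and the high-use files placed next to it, making the real payoff larger): expected travel from a home position at the edge is C/2 cylinders; from the middle, only C/4:

  /* expected arm travel to a uniformly-random cylinder from a fixed
     "home" position (VTOC + high-use data next to it) */
  #include <stdio.h>

  int main(void)
  {
      double C = 404.0;                     /* 3330-style cylinder count */
      double home[] = { 0.0, C / 2.0 };     /* edge vs middle */
      for (int i = 0; i < 2; i++) {
          double p = home[i];
          double e = (p * p + (C - p) * (C - p)) / (2.0 * C);
          printf("home at cyl %3.0f: avg seek %5.1f cylinders\n", p, e);
      }
      return 0;
  }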

--
virtualization experience starting Jan1968, online at home since Mar1970

What do you think about fraud prevention in the governments?

From: lynn@garlic.com (Lynn Wheeler)
Date: 11 Jan, 2011
Subject: What do you think about fraud prevention in the governments?
Blog: Financial Crime Risk, Fraud and Security
re:
https://www.garlic.com/~lynn/2011.html#46 What do you think about fraud prevention in the governments?

there have been comments that the only possibly effective section of sarbanes-oxley was the section on informants (much of the rest was considered a full-employment gift to the audit companies)

in the congressional hearings into madoff, the person that had been attempting (unsuccessfully) for a decade to get SEC to do something about Madoff testified that tips turned up 13 times more fraud than audits (and that while there may be a need for new regulation ... it was much more important to change the culture to one of visibility and transparency).

GAO, possibly believing (also) that SEC wasn't doing anything about public company fraudulent financial filings ... started looking at filings and generating reports about the uptick in public company fraudulent financial filings (even after SOX; one might be sarcastic and say SOX even encouraged fraudulent filings).

congressional hearings into the rating agencies ... had testimony that the rating agencies were "selling" triple-A ratings on toxic CDOs (when both the sellers and the rating agencies knew they weren't worth triple-A; this played a pivotal role in the economic bubble/mess). SOX also had a provision for SEC to look into the rating agencies ... but (again) nothing seemed to have been done. Some commentator made the observation that the rating agencies may be able to blackmail the gov. and avoid any punitive action with the threat of downgrading the gov. credit rating.

the real estate/mortgage industry had (annual) profit taken from (annual) mortgage payments (and loan quality) ... except for the real estate commission skimmed off the top.

Mortgage originators being able to immediately unload loans as packaged toxic CDOs (and pay for triple-A ratings) ... eliminated any motivation to care about loan quality (and/or borrower qualifications). It also allowed skimming additional fees & commissions off the top.

Reports are that the financial industry tripled in size (as a percent of GDP) during the bubble. That approx. corresponds to the 15% of the value of toxic CDO transactions taken each year (during the bubble). Along with the real estate commission, that implies approx. 20% was now being skimmed off the top of real estate transactions ... and along with no attention to loan quality ... that leaves little or no value in actually holding the toxic CDOs (possibly the 20% real-estate hyper-inflation during the bubble helped hide the 20% being skimmed off the top; the industry loved speculators that would churn the transaction every year and contribute to the inflation).

The bubble bursts and the toxic CDO transactions supposedly disappear ... that means the corresponding 15% skim should disappear and the financial industry deflate by 2/3rds to its pre-bubble size (with corresponding deflation in stock value and contribution to market indexes). That hasn't happened; so even w/o direct knowledge of the federal reserve's behind-the-scenes activity ... it has to imply that something is going on to continue to prop up the industry.

Mortgage origination should also return to keeping the loan (and having to pay attention to loan quality and borrower qualifications), with profit coming off the annual mortgage payments (not skimmed off the top of the transaction). Real estate values should also somewhat deflate to pre-bubble levels ... although a large amount of over-building was done during the bubble as a result of the significant speculation going on (an over-supply drag on value and market).

Somewhat corresponding to the 15% being skimmed off the top of real estate transactions (aka toxic CDOs, resulting in the tripling in size of the financial industry) ... the NY state comptroller reported that aggregate wall street bonuses spiked over 400% during the bubble (in theory also fed by the 15% off the top on toxic CDO transactions, and so should also return to pre-bubble levels).

http://www.csmonitor.com/USA/2010/1201/Federal-Reserve-s-astounding-report-We-loaned-banks-trillions

--
virtualization experience starting Jan1968, online at home since Mar1970

What do you think about fraud prevention in the governments?

From: lynn@garlic.com (Lynn Wheeler)
Date: 11 Jan, 2011
Subject: What do you think about fraud prevention in the governments?
Blog: Financial Crime Risk, Fraud and Security
re:
https://www.garlic.com/~lynn/2011.html#46 What do you think about fraud prevention in the governments?
https://www.garlic.com/~lynn/2011.html#48 What do you think about fraud prevention in the governments?

which part?

toxic CDOs? collateralized debt obligations ... basically loans packaged as a form of bonds ("mortgage backed securities"). they had been used in the S&L crisis to obfuscate underlying value ... but w/o triple-A credit ratings had very little market (selling packaged mortgages to 3rd parties who have a hard time determining the value of what is being bought).

Lots of retirement and other kinds of funds have mandates to only buy instruments with triple-A ratings. Being able to pay the rating agencies for triple-A ratings (regardless of actual value) enormously increased the market for toxic CDOs. When mortgage originators (the people that wrote the loans) could immediately sell them off (into an immense market, courtesy of the triple-A credit ratings) w/o regard to actual quality ... it eliminated any motivation for the mortgage originators to pay attention to loan quality (their profit became solely dependent on how fast they could write loans and how large the loans were, taking a percentage of the total value of the mortgage as profit).

The people taking fees & commissions on toxic CDOs (triple-A rated mortgage-backed securities that weren't worth triple-A) became motivated to move as many toxic CDOs as fast as possible. Real-estate speculators found no-documentation, no-down, interest-only-payment, 1% ARMs extremely attractive ... planning on flipping before the rates adjusted. With real-estate inflation running 20-30% in some parts of the country, 1%, no-down, no-documentation mortgages could represent 2000% ROI for real estate speculators.

A $1m property costs the speculator $10,000/yr (interest) on a no-down $1m mortgage, and sells after a year for $1.2m ($200k gain on $10,000, i.e. 2000% ROI). A mortgage originator packages 1000 $1m mortgages as a $1B CDO. Between the mortgage originators and the other participants in the CDO transaction, they take $150M (15%) on the $1B CDO. If all the speculators flip and repeat after a year ... it is another 2000% ROI for the speculators and another $150+M for the players in the CDOs.
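
checking the arithmetic above:

  /* no-down $1m mortgage at 1% interest-only, property flipped
     after a year at 20% appreciation */
  #include <stdio.h>

  int main(void)
  {
      double price = 1000000.0;
      double carry = price * 0.01;   /* $10,000 interest for the year */
      double gain  = price * 0.20;   /* sells for $1.2m */
      printf("outlay $%.0f, gain $%.0f, ROI %.0f%%\n",
             carry, gain, 100.0 * gain / carry);    /* 2000% */
      return 0;
  }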

packaging as triple-A rated toxic CDOs provided a nearly unlimited source of funds for writing such mortgages (fueling the real estate bubble). This is equivalent to the "Brokers' Loans" that fueled the stock market inflation/bubble in the 20s ... resulting in the '29 crash. The bubble, crash and resulting downturn are very similar in the late 20s and the last decade. So far the big difference is the amount of stimulus the gov. (and federal reserve) has pumped in this time.

In early 2009, I was asked to HTML'ize the scan of the Pecora hearings (the 30s congressional investigation into the '29 crash; results included the Glass-Steagall act) that had been done at the Boston Public library the previous fall.

one of my other hobbies is merged glossaries and taxonomies ... a few here:
https://www.garlic.com/~lynn/index.html#glosnote

I've done some generic work on security & fraud ... but not financial-specific ... and some work on generic financial ... but not fraud-specific.

In the security taxonomy/glossary ... i've done some work on categorizing exploit reports in the CVE database.

--
virtualization experience starting Jan1968, online at home since Mar1970

What do you think about fraud prevention in the governments?

From: lynn@garlic.com (Lynn Wheeler)
Date: 11 Jan, 2011
Subject: What do you think about fraud prevention in the governments?
Blog: Financial Crime Risk, Fraud and Security
re:
https://www.garlic.com/~lynn/2011.html#46 What do you think about fraud prevention in the governments?
https://www.garlic.com/~lynn/2011.html#48 What do you think about fraud prevention in the governments?
https://www.garlic.com/~lynn/2011.html#49 What do you think about fraud prevention in the governments?

there is an estimate that a total of $27T in toxic CDOs were done during the bubble (some of it churn, with speculators repeatedly flipping properties) ... at 20%, that amounts to $5.4T being skimmed off into various pockets.
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
https://www.bloomberg.com/news/articles/2008-10-27/evil-wall-street-exports-boomed-with-fools-born-to-buy-debt

A significant part of the $27T was being "bought" by the investment banking arms (courtesy of the repeal of Glass-Steagall) of four too-big-to-fail financial institutions (individuals getting such large compensation that it overrode any concerns about what it might do to their institutions). At the end of 2008, it was estimated that the four too-big-to-fail institutions were carrying $5.2T in toxic CDOs "off-book". There were some early sales involving tens of billions that went for 22 cents on the dollar. If the institutions had to bring the toxic CDOs back onto their books, they would be declared insolvent and have to be liquidated. The legal action to have the federal reserve disclose what it has been doing (which went on for over a year) indicates that the federal reserve has been quietly buying up the toxic CDOs at 98 cents on the dollar (almost face value).

reference to four too-big-to-fail institutions carrying $5.2T "off-book"
Bank's Hidden Junk Menaces $1 Trillion Purge
http://www.bloomberg.com/apps/news?pid=newsarchive&sid=akv_p6LBNIdw&refer=home

Now, with the gov. leaning over backwards to keep the institutions in business, they would hardly shut them down for something like money laundering. References to the DEA following the money trail used to purchase "drug smuggling" planes back to too-big-to-fail institutions (and asking them to "please stop"):

Too Big to Jail - How Big Banks Are Turning Mexico Into Colombia
http://www.taipanpublishinggroup.com/tpg/taipan-daily/taipan-daily-080410.html
Banks Financing Mexico Gangs Admitted in Wells Fargo Deal
http://www.bloomberg.com/news/2010-06-29/banks-financing-mexico-s-drug-cartels-admitted-in-wells-fargo-s-u-s-deal.html
Wall Street Is Laundering Drug Money And Getting Away With It
http://www.huffingtonpost.com/zach-carter/megabanks-are-laundering_b_645885.html?show_comment_id=53702542
Banks Financing Mexico Drug Gangs Admitted in Wells Fargo Deal
http://www.sfgate.com/cgi-bin/article.cgi?f=/g/a/2010/06/28/bloomberg1376-L4QPS90UQVI901-6UNA840IM91QJGPBLBFL79TRP1.DTL
How banks aided drug traffic
http://www.charlotteobserver.com/2010/07/04/1542567/how-banks-aided-drug-traffic.html
The Banksters Laundered Mexican Cartel Drug Money
http://www.economicpopulist.org/content/banksters-laundered-mexican-cartel-drug-money
Money Laundering and the Global Drug Trade are Fueled by the Capitalist Elites
http://www.globalresearch.ca/index.php?context=va&aid=20210
Wall Street Is Laundering Drug Money and Getting Away with It
http://www.alternet.org/economy/147564/wall_street_is_laundering_drug_money_and_getting_away_with_it/
Money Laundering and the Global Drug Trade are Fueled by the Capitalist Elites
http://dandelionsalad.wordpress.com/2010/07/23/money-laundering-and-the-global-drug-trade-are-fueled-by-the-capitalist-elites-by-tom-burghardt/
Global Illicit Drugs Trade and the Financial Elite
http://www.pacificfreepress.com/news/1/6650-global-illicit-drugs-trade-and-the-financial-elite.html
Wall Street Is Laundering Drug Money And Getting Away With It
http://institute.ourfuture.org/blog-entry/2010072814/megabanks-are-laundering-drug-money-and-getting-away-it

--
virtualization experience starting Jan1968, online at home since Mar1970

speculation: z/OS "enhancments"

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: speculation: z/OS "enhancments"
Newsgroups: bit.listserv.ibm-main
Date: 12 Jan 2011 13:52:10 -0800
m42tom-ibmmain@YAHOO.COM (Tom Marchant) writes:
Bold assertion. Do you have any data to back that up?

360 allowed self-modifying instructions ... potentially instructions already in the pipeline. one of the claims for Amdahl "macro-code" was that it was essentially 370 with some tweaks ... including precluding self-modification. even in some number of (non-cache, non-pipelined) 360s that did i-fetch a double word (or more) at a time ... there was extra overhead in constantly checking whether there was a storage alteration to an address that had already been fetched by the instruction unit (within the same double word).

"harvard" architectures with split instruction and data caches w/o cache consistency .... get some processing performance by i-cache ignoring standard storage alterations. in the case of "store-in" data cache ... program loaders that may operate on instruction streams ... making alterations that appear in d-cache. before initiating exection ... the loader then executes special operation that explicitly forces d-cache alterated lines to main storage and invalidates possible corresponding i-cache lines. with the force of altered data (from d-cache) to storage and invalidation of corresponding addresses in i-cache ... subsequent instruction stream references to those addressed would be forced to fetch the contents from storage (explicit programming required to support alteration of potential instructions).

"harvard" architecture with i-cache operation ignoring standard storage alterations ... is much easier to pipeline and easier to scale-up multiprocessor (not only being able to ignore storage alterations on the same processor ... but also able to ignore standard storage alterations from all the other processors).

--
virtualization experience starting Jan1968, online at home since Mar1970

speculation: z/OS "enhancments"

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: speculation: z/OS "enhancments"
Newsgroups: bit.listserv.ibm-main
Date: 12 Jan 2011 17:09:54 -0800
m42tom-ibmmain@YAHOO.COM (Tom Marchant) writes:
Dr. Amdahl was the chief architect of the System/360. He left IBM and started Amdahl corporation because he wanted to extend the series with a higher performance processor and IBM did not think that it would be marketable. And BTW, it was not a "copy". It was a compatible processor, but it was implemented differently.

during the ill-fated Future System effort ... anything that smacked of internal competition was killed ... resulting in the 370 product pipeline going dry ... which is attributed with allowing clone manufacturers to gain a market foothold.

in the wake of the Future System demise there was a mad rush to get products back into the 370 product pipeline ... with the 3033 going on in parallel with the 3081/370xa effort.

The 3033 started out as the 168 wiring diagram mapped to 20% faster chips ... chips which also had 10 times the circuits/chip (with only 10 percent being used) ... however, there was eventually some redesign before the 3033 shipped to better utilize the higher on-chip density ... eventually resulting in it being 50% faster (than the 168).

This talks about the 3081 using some of the FS technology, resulting in circuits/MIP (and therefore manufacturing costs) being significantly higher than the clone processors:
http://www.jfsowa.com/computer/memo125.htm

other details about FS are mentioned here (including the FS failure casting a dark shadow over the corporation for decades)
https://people.computing.clemson.edu/~mark/fs.html

there are claims that Amdahl knew nothing about the FS effort (it occurred after he left). however, he gave a talk in a large MIT auditorium in the early 70s on his new company. During the talk he was asked how he convinced investors to fund his company. He said something about customers having already invested a couple hundred billion in 360-based software, and even if IBM were to completely walk away from 360 (which could be considered a veiled reference to FS), that was a large enough install base to keep him in business through the end of the century.

--
virtualization experience starting Jan1968, online at home since Mar1970

What do you think about fraud prevention in the governments?

From: lynn@garlic.com (Lynn Wheeler)
Date: 11 Jan, 2011
Subject: What do you think about fraud prevention in the governments?
Blog: Financial Crime Risk, Fraud and Security
re:
https://www.garlic.com/~lynn/2011.html#46 What do you think about fraud prevention in the governments?
https://www.garlic.com/~lynn/2011.html#48 What do you think about fraud prevention in the governments?
https://www.garlic.com/~lynn/2011.html#49 What do you think about fraud prevention in the governments?
https://www.garlic.com/~lynn/2011.html#50 What do you think about fraud prevention in the governments?

note that the illegal drug stuff is skimming hundreds of billions ... while financial skimmed trillions (an order of magnitude difference) ... illegal drugs also have relatively higher operating costs compared to financial (where a major expense is "lobbying" congress).

To go with Freakonomics, for military see recent book "America's Defense Meltdown"
https://www.amazon.com/Americas-Defense-Meltdown-President-ebook/dp/B001TKD4SA

... for financial see "Griftopia; Bubble Machines, Vampire Squids, and the Long Con That is Breaking America"
https://www.amazon.com/Griftopia-Machines-Vampire-Breaking-ebook/dp/B003F3FJS2

and "13 Bankers: The Wallstreet Takeover and the Next Financial Meltdown"
https://www.amazon.com/13-Bankers-Takeover-Financial-ebook/dp/B0036S4EIW

I've read all of the above on Kindle (download via wifi)

and some recents posts about drug crime vis-a-vis cyber crime
https://www.garlic.com/~lynn/2010o.html#14
https://www.garlic.com/~lynn/2010o.html#20
https://www.garlic.com/~lynn/2010p.html#31
https://www.garlic.com/~lynn/2010p.html#40

and with regard to:

James K. Galbraith: Why the 'Experts' Failed to See How Financial Fraud Collapsed the Economy
http://www.alternet.org/economy/146883/james_k._galbraith:_why_the_%27experts%27_failed_to_see_how_financial_fraud_collapsed_the_economy?page=entire

another take ... is that business people told the risk people to keep fiddling the inputs until they got the output they wanted (GIGO)

How Wall Street Lied to Its Computers
http://bits.blogs.nytimes.com/2008/09/18/how-wall-streets-quants-lied-to-their-computer

and

Subprime = Triple-A ratings? or 'How to Lie with Statistics' (gone 404 but lives on at the wayback machine)
https://web.archive.org/web/20071111031315/http://www.bloggingstocks.com/2007/07/25/subprime-triple-a-ratings-or-how-to-lie-with-statistics/

how many would turn down $100M ... even if they knew that it would be part of taking down the country's economy ... especially if they knew that they could later plead total incompetence and be able to walk away with all the money.

the part in the middle was having a large supply of mortgages that were the fuel for the toxic CDO transactions (which could be skimmed ... somewhat like stock portfolio churning). part of enabling the toxic CDO transactions was being able to pay for the triple-A ratings (enormously increasing the market for the toxic CDOs).

one of the downsides of mortgage originators being able to immediately unload every loan they wrote ... was that they no longer had to be concerned about loan quality (paying for a triple-A rating allowed immediately dumping every mortgage onto somebody else). article about somebody getting punished by the market for raising the issue in 2003:
http://www.forbes.com/forbes/2008/1117/114.html

on the front side of the toxic CDO transactions were real estate speculators helping really boost the flow of mortgages (fueling the toxic CDO transactions) ... basically turning the real-estate market into the equivalent of the '29 stock market. with the real-estate market playing such a large role in the economy ... the real-estate market speculation bubble and collapse resulted in effects spreading throughout the economy and all sorts of collateral damage.

yet another view of "Galbraith's" article is that the regulations damped down the individual pockets of greed and corruption ... with relaxation/repeal of regulations (including repeal of Glass-Steagall) ... the individual pockets of greed and corruption were able to combine into a fire-storm.

One of the scenarios in "Griftopia" was that the commodity markets had a requirement that participants have a significant position in the commodity, because pure speculators resulted in wild, irrational price swings. There were then 19 "secret" letters granting exemptions to specific entities (resulting in wild, irrational price swings).

Corresponding to the griftopia account of commodity markets knowing that speculators result in wild irrational price swings:

The Crash Of 2008: A Mathematician's View
http://www.sciencedaily.com/releases/2008/12/081208203915.htm
The crash of 2008: A mathematician's view
http://www.eurekalert.org/pub_releases/2008-12/w-tco120808.php

--
virtualization experience starting Jan1968, online at home since Mar1970

speculation: z/OS "enhancments"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: speculation: z/OS "enhancments"
Newsgroups: bit.listserv.ibm-main
Date: Thu, 13 Jan 2011 09:11:12 -0500
ibmsysprog@GEEK-SITES.COM (Avram Friedman) writes:
In case I wasn't clear the first time around, user enhancement requests play almost no role in the design and build of new products and/or versions. New products and/or versions are historically driven by GENERAL requirements for data processing, especially database. In today's "IBM 360" world interoperability is also very important as a way to slow the shrinking market.

in 360 days ... lots of new products were picked up from datacenters where they had been locally developed ... and were then transferred to development/product groups for maintenance (subsequent new releases were typically small incremental changes, +/- 5% difference) ... i.e. cics, ims, hasp, etc.

original rdbms/sql was research's system/r ... which eventually was transferred to endicott for sql/ds (early 80s). one of the people mentioned in this jan92 meeting in ellison's conference room claimed credit for the transfer of sql/ds back to STL for DB2
https://www.garlic.com/~lynn/95.html#13

sort of the long way around, since bldgs 28 & 90 were less than 10 miles apart ... i used to ride my bike back and forth

past posts mentioning system/r
https://www.garlic.com/~lynn/submain.html#systemr

--
virtualization experience starting Jan1968, online at home since Mar1970

America's Defense Meltdown

From: lynn@garlic.com (Lynn Wheeler)
Date: 13 Jan, 2011
Subject: America's Defense Meltdown
Blog: Boyd Strategy
America's Defense Meltdown
https://www.amazon.com/Americas-Defense-Meltdown-President-ebook/dp/B001TKD4SA

I've finally got around to reading "America's Defense Meltdown" (on kindle) ... which makes periodic references to Boyd and OODA-loops. One of the things it starts out with early is the enormous influence that the (pentagon) military/industrial complex has on congress. There was an old comment on d-n-i.net from two years ago (the dna of corruption):

While the scale of venality of Wall Street dwarfs that of the Pentagon's, I submit that many of the central qualities shaping America's Defense Meltdown (an important new book with this title, also written by insiders) can be found in Simon Johnson's exegesis of America's even more profound Financial Meltdown.

... snip ...

i.e. "13 Bankers: The Wallstreet Takeover and the Next Financial Meltdown"
https://www.amazon.com/13-Bankers-Takeover-Financial-ebook/dp/B0036S4EIW

and more recent book on the same subject:

Griftopia; Bubble Machines, Vampire Squids, and the Long Con That is Breaking America
https://www.amazon.com/Griftopia-Machines-Vampire-Breaking-ebook/dp/B003F3FJS2

aka ... there are various references to the enormous spending on lobbying that the financial industry has been doing ... contributing to several comments about congress being the most corrupt institution on earth.

old post referencing the "dna of corruption"
https://www.garlic.com/~lynn/2009e.html#53 Are the "brightest minds in finance" finally onto something?

... and a comment from yesterday (on Scott Shipman's reference to Breaking the Command-and-Control Reflex)

Just reading America's Defense Meltdown (on kindle) ... section on how the business school theory of rotating (fast track) commanders/executives quickly thru lots of different positions destroyed unit cohesiveness. In business, I've recently told a number of stories of business units nearly being destroyed by having their upper management position identified as "fast track" (i.e. new replacement potentially every six to twelve months)

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

Speed of Old Hard Disks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Speed of Old Hard Disks
Newsgroups: alt.folklore.computers
Date: Fri, 14 Jan 2011 10:12:34 -0500
Charles Richmond <frizzle@tx.rr.com> writes:
Why do the modern hard disks spin faster??? What about the technology allows that???

smaller (somewhat enabled by higher recording density, also lighter),

smaller diameter results in smaller circumference, circumference (pi*radius**2) is the distance that outer edge travels in single revolution. 60revs per second (3600rpms/min) ... outer edge velocity is 60*(pi*radius**2)/sec.

14in disk to 2.5in disk ... cuts radius from 7 (7**2=49) to 1.25 (1.25**2 = less than 1.6) or outer circumference/velocity is cut by factor of little over 30 times (kids playground thing that rotates around ... force on the kids is greatest on the outer perimeter).

or outer edge of 14in disk at 3600rpm has little over 30times the velocity of the outer edge of 2.5in disk at same RPM (brain check with circumference and area ... noted below)

--
virtualization experience starting Jan1968, online at home since Mar1970

Speed of Old Hard Disks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Speed of Old Hard Disks
Newsgroups: alt.folklore.computers
Date: Fri, 14 Jan 2011 10:24:47 -0500
Ahem A Rivet's Shot <steveo@eircom.net> writes:
Almost certainly true, the head assembly is a lot lighter than that of a 1960s mainframe drive.

re:
https://www.garlic.com/~lynn/2011.html#56 Speed of Old Hard Disks

recent posts about air bearing simulation work for design of thin-film heads ... transition to significantly lighter heads ... easier to move track to track.
https://www.garlic.com/~lynn/2011.html#16 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows, doesn't matter)
https://www.garlic.com/~lynn/2011.html#36 CKD DASD

old email about original 3380 inter-track gap being reduced from 20track widths to 10track widths (double the number of tracks per platter) ... better servo control
https://www.garlic.com/~lynn/2006s.html#email871122

above also mentions "vertical recording" ... and old email about some experimental work on "wide" head that read/wrote tracks in parallel
https://www.garlic.com/~lynn/2006s.html#email871230

in this old post
https://www.garlic.com/~lynn/2006s.html#30 Why magnetic drums was/are worse than disks ?

--
virtualization experience starting Jan1968, online at home since Mar1970

Speed of Old Hard Disks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Speed of Old Hard Disks
Newsgroups: alt.folklore.computers
Date: Fri, 14 Jan 2011 10:38:14 -0500
Anne & Lynn Wheeler <lynn@garlic.com> writes:
smaller (somewhat enabled by higher recording density, also lighter),

smaller diameter results in smaller circumference, circumference (pi*radius**2) is the distance that outer edge travels in single revolution. 60revs per second (3600rpms/min) ... outer edge velocity is 60*(pi*radius**2)/sec.

14in disk to 2.5in disk ... cuts radius from 7 (7**2=49) to 1.25 (1.25**2 = less than 1.6) or outer circumference/velocity is cut by factor of little over 30 times (kids playground thing that rotates around ... force on the kids is greatest on the outer perimeter).

or outer edge of 14in disk at 3600rpm has little over 30times the velocity of the outer edge of 2.5in disk at same RPM.


re:
https://www.garlic.com/~lynn/2011.html#56 Speed of Old Hard Disks

oops, brain slip with circumference and area ... hadn't finished coffee (after late night on computers) ... aka 14in to 2.5in ... is just a factor of 5.6 in circumference velocity (still making it much easier to double from 3600 rpm to 7200 rpm)

--
virtualization experience starting Jan1968, online at home since Mar1970

Speed of Old Hard Disks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Speed of Old Hard Disks
Newsgroups: alt.folklore.computers
Date: Fri, 14 Jan 2011 11:04:21 -0500
Tim Shoppa <shoppa@trailing-edge.com> writes:
In the 70's and early 80's, a typical data rate from a drive was a few megabits/sec. By the mid 80's the high performance drives (often using SMD-type interfaces still) were up to the low teens of megabits/sec. The RPM had not substantially increased (well OK, a Fuji Eagle was spinning at nearly 4000 RPM which was a little higher than a decade before) but the bits per inch on the media was higher. And today the read channel is working at circa a gigabit/sec.

re:
https://www.garlic.com/~lynn/2011.html#56 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#57 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#58 Speed of Old Hard Disks

early 80s, 3380s went to 3mbytes/sec. part of the change was the introduction of "data streaming" for mainframe channel cables. Previously there was an end-to-end handshake for every byte transferred. This limited max cable distance to 200ft (as well as impacting transfer rate). Data streaming introduced being able to transfer multiple bytes per end-to-end handshake ... doubling both max cable distance to 400ft and data rate to 3mbytes/sec ... from around 300kbytes/sec (a factor of ten at the same RPM) ... except for some special disks that would do 1.5mbytes/sec ... but had much reduced channel cable length restrictions.

the problem in the late 70s was that many large datacenters were having trouble connecting their large disk farms with the 200ft limitation (the processor placed in the middle of the datacenter with all disks having to be within the 200ft radius). doubling the max channel distance to 400ft ... quadrupled the area in which disks could be placed (some datacenters had been starting to transition to multi-floor operation ... turning the limitation from the area of a circle to the volume of a sphere).

mainframes eventually introduced the ESCON channel ... fiber/serial channel at 200mbits/sec (17mbytes/sec). mainframe disks went to 4.5mbytes/sec ... there was some increase in the area that could be covered by an ESCON channel ... but it still simulated the earlier half-duplex end-to-end handshake (limiting both max distance and max transfer rate, and only one direction at a time).

late 80s/early 90s, another was HiPPI ... basically standardization of the 100mbyte/sec cray channel, serial-HiPPI (running over fiber optics), FCS (1gbit/sec fiber), and SCI. FCS & SCI tended to do asynchronous protocols ... rather than simulating half-duplex parallel ... so they could support concurrent peak transfer in both directions. These would have various kinds of RAID involving simultaneous transfer from multiple disks concurrently.

started to see various flavors of serial ... over both fiber and copper ... with asynchronous transfer in both directions ... in some cases using "packetized" flavors of parallel/half-duplex disk commands.
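
as a rough illustration of why per-byte handshaking coupled cable length to transfer rate ... a minimal sketch; the propagation speed and per-handshake overhead numbers below are invented for illustration, not actual bus&tag electrical parameters:

PROP_FT_PER_US = 600     # assumed signal propagation (~0.6c) in ft/microsecond
OVERHEAD_US = 0.2        # assumed fixed channel/controller turnaround per handshake

def mbytes_per_sec(cable_ft, bytes_per_handshake):
    # each handshake costs a round trip down the cable plus fixed overhead
    round_trip_us = 2.0 * cable_ft / PROP_FT_PER_US + OVERHEAD_US
    return bytes_per_handshake / round_trip_us   # bytes/microsecond == mbytes/sec

print(mbytes_per_sec(200, 1))   # per-byte handshake at 200ft ... ~1.2 (illustrative only)
print(mbytes_per_sec(400, 8))   # "streaming" several bytes/handshake ... ~5.2
                                # ... longer cable AND higher rate at the same time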

--
virtualization experience starting Jan1968, online at home since Mar1970

Speed of Old Hard Disks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Speed of Old Hard Disks
Newsgroups: alt.folklore.computers
Date: Fri, 14 Jan 2011 12:02:01 -0500
Anne & Lynn Wheeler <lynn@garlic.com> writes:
recent posts about air bearing simulation work for design of thin-film heads ... transition to significantly lighter heads ... easier to move track to track.
https://www.garlic.com/~lynn/2011.html#16 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows, doesn't matter)
https://www.garlic.com/~lynn/2011.html#36 CKD DASD

old email about original 3380 inter-track gap being reduced from 20track widths to 10track widths (double the number of tracks per platter) ... better servo control
https://www.garlic.com/~lynn/2006s.html#email871122

above also mentions "vertical recording" ... and old email about some experimental work on "wide" head that read/wrote tracks in parallel
https://www.garlic.com/~lynn/2006s.html#email871230

in this old post
https://www.garlic.com/~lynn/2006s.html#30 Why magnetic drums was/are worse than disks ?


re:
https://www.garlic.com/~lynn/2011.html#56 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#57 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#58 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#58 Speed of Old Hard Disks

mentions 1980, 3380 with "thin film" (and air bearing) ... has head flying much closer to surface ... enabling the higher data rate (3mbytes/sec):
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_chrono20.html
more details:
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3380.html

hard disk wiki
https://en.wikipedia.org/wiki/Hard_disk_drive

and to help offset upthread brain check

circumference mph = ((diameter*pi)*3600*60)/(12*5280)

diameter    circumference mph (@3600rpm)
14in        150mph
8in         86mph
5.25in      56mph
3.5in       37mph
2.5in       27mph
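
the same arithmetic as a quick python sanity check (nothing assumed beyond the formula above):

from math import pi

def rim_mph(diameter_in, rpm=3600):
    # outer-edge speed: circumference (inches) * revs/hour / inches-per-mile
    return (diameter_in * pi) * rpm * 60 / (12 * 5280)

for d in (14, 8, 5.25, 3.5, 2.5):
    print(f"{d}in  {rim_mph(d):.0f}mph")   # 150, 86, 56, 37, 27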

--
virtualization experience starting Jan1968, online at home since Mar1970

Speed of Old Hard Disks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Speed of Old Hard Disks
Newsgroups: alt.folklore.computers
Date: Fri, 14 Jan 2011 16:07:35 -0500
andrew@cucumber.demon.co.uk (Andrew Gabriel) writes:
Out of interest, here's a table of relative performance changes I use in a presentation I give on filesystem performance from time to time [fixed font]...

                   25 years ago   Today          Improvement
Rotational speed   3,600          15,000         4x
I/O's per sec      30             300            10x
Transfer rates     1 MB/s         100 MB/s       100x
Capacity           150 MB         1.5 TB         10,000x

CPU performance    4 MIPS         400,000 MIPS   100,000x


In the mid-70s, I was starting to observe that relative system disk thruput performance was getting much worse. In the early 80s, I used this comparison (between late 60s and early 80s; from old posts):
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/94.html#43 Bloat, elegance, simplicity and other irrelevant concepts

with the comment that the relative system disk thruput had declined by an order of magnitude in the period (disks got faster ... but other parts of the system got significantly faster). Disk executives took exception and assigned the division performance group to refute my statements ... after a few weeks they came back and essentially said that I had slightly understated the problem.

They respun the analysis and turned it into a SHARE presentation on how to organize disks for system thruput. old post with small piece of presentation B874 at SHARE 63:
https://www.garlic.com/~lynn/2006o.html#68 DASD Response Time (on antique 3390?)

misc. collected posts mentioning getting to play disk engineer in bldgs. 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

past posts in this thread:
https://www.garlic.com/~lynn/2011.html#56 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#57 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#58 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#59 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#60 Speed of Old Hard Disks

--
virtualization experience starting Jan1968, online at home since Mar1970

SIE - CompArch

From: lynn@garlic.com (Lynn Wheeler)
Date: 14 Jan, 2011
Subject: SIE - CompArch
Blog: IBM Historic Computing
SIE - CompArch semipublic.comp-arch.net
http://semipublic.comp-arch.net/wiki/SIE

SIE was introduced for virtual machine operation with the 3081 and 370/xa. This is old email that mentions shortcomings of the 3081 implementation and enhancements done for the 3090
https://www.garlic.com/~lynn/2006j.html#email810630
passing SIE reference in this discussion of cache design
https://www.garlic.com/~lynn/2003j.html#email831118

DIL provided base+bound address relocation (a base added to all addresses, with the maximum address checked against a bound).
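
a minimal sketch of the base+bound idea (names and numbers invented for illustration):

class BaseBound:
    # one contiguous region of real storage per address space
    def __init__(self, base, bound):
        self.base, self.bound = base, bound

    def translate(self, addr):
        if addr >= self.bound:          # maximum address checked against bound
            raise MemoryError("addressing exception")
        return self.base + addr         # base added to all addresses

region = BaseBound(base=0x40000, bound=0x20000)   # 128k region at real 0x40000
print(hex(region.translate(0x1f00)))              # -> 0x41f00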

before virtual memory was announced on 370 ... an IBM SE on the boeing account took a version of CP67 and replaced the "paging" & "virtual memory" support available on the 360/67 with "swapping" (whole contiguous address space) & DIL base+bound (address relocation). CP67 still got all the privileged instruction interrupts for simulation.

The initial morph of cp67 to vm370 (supporting paging and virtual memory on 370) still got privileged instruction interrupts for simulation.

Initially on the 158 ... a virtual machine "microcode" assist was added ... load a special value into real control register six ... and some number of 370 privileged instructions would be executed directly (in microcode) using virtual machine rules (different than if a "real" supervisor/privileged instruction was executing on bare hardware)

The SIE instruction came out for the 3081/XA ... which switched into virtual machine mode with nearly all supervisor instructions now having execution support for virtual machine mode.

In response to Amdahl's "HYPERVISOR" ... 3090 PR/SM was developed ... which partitioned the machine using a subset of virtual machine function. It started out with something akin to DIL ... a contiguous (dedicated) area of real storage for each "partition" (w/o any paging or swapping). Then a flavor of "SIE" (virtual machine mode) capability for supervisor instructions was used, allowing operating systems to run in each partition. PR/SM was generalized to a larger number of partitions as LPARs.

PR/SM
http://publib.boulder.ibm.com/infocenter/eserver/v1r2/topic/eicaz/eicazzlpar.htm

LPAR
http://publib.boulder.ibm.com/infocenter/zos/basics/topic/com.ibm.zos.zmainframe/zconc_mfhwlpar.htm

over the years there has been some issue with the conflict/evolution of "SIE" capability between LPARs (PR/SM) and z/VM (if the SIE function is being used by the LPAR capability, can z/VM use it also)

--
virtualization experience starting Jan1968, online at home since Mar1970

Speed of Old Hard Disks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Speed of Old Hard Disks
Newsgroups: alt.folklore.computers
Date: Sat, 15 Jan 2011 10:44:51 -0500
Tim Shoppa <shoppa@trailing-edge.com> writes:
RK05 carts are IBM2315 style, which originated in the 1964. There were many IBM 2315-style cartridge drives made by many manufacturers in the late 60's and 70's. The Diablo 31 was used on many mini's. All incarnations that I know of, spun at 1500 RPM.

lots of them were staffed by former senior engineers from the san jose plant site. recent tale about getting roped into conference calls (in the late 70s) with "channel" engineers back east:
https://www.garlic.com/~lynn/2011.html#37 CKD DASD

and eventually being told that such stuff would normally have been handled by the senior disk engineers ... but so many had left for startups in the late 60s and early 70s.

misc. posts about getting to play disk engineer in bldgs. 14&15 ... recent online sat. photos of the plant site show many bldgs. have been plowed under ... but (at least) 14&15 remain (use 5600 cottle rd, san jose, ca ... for lookup).
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

Two terrific writers .. are going to write a book

From: lynn@garlic.com (Lynn Wheeler)
Date: 15 Jan, 2011
Subject: Two terrific writers .. are going to write a book
Blog: MainframeZone
misc. from new (linkedin) "IBM Historic Computing"

this may also be of some interest ... starting before "Future System" thru 1995
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm

and of course Melinda's history
http://www.leeandmelindavarian.com/Melinda/
above recently moved to:
http://www.leeandmelindavarian.com/Melinda#VMHist

there has been a lot of work done here (both ibm and non-ibm)
http://www.cs.clemson.edu/~mark/hist.html

quite a large number of IBM references. Includes a section on Future System (which was a massive effort to completely replace 370 ... but was killed before being announced):
https://people.computing.clemson.edu/~mark/fs.html

misc. other

reference to IBM ACS (61-69) with lots of detail
https://people.computing.clemson.edu/~mark/acs.html
lots of interesting technical detail
https://people.computing.clemson.edu/~mark/acs_technical.html

... and ibm stretch details
http://www.cs.clemson.edu/~mark/stretch.html

for the fun of it ... some past posts from last year about the 701 being the "defense calculator" and only selling 19 ... while the 650 sold 2000:
https://www.garlic.com/~lynn/2009c.html#35 Why do IBMers think disks are 'Direct Access'?
https://www.garlic.com/~lynn/2009d.html#39 1401's in high schools?
https://www.garlic.com/~lynn/2009h.html#12 IBM Mainframe: 50 Years of Big Iron Innovation
https://www.garlic.com/~lynn/2009j.html#39 A Complete History Of Mainframe Computing

--
virtualization experience starting Jan1968, online at home since Mar1970

Speed of Old Hard Disks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Speed of Old Hard Disks
Newsgroups: alt.folklore.computers
Date: Sat, 15 Jan 2011 11:47:37 -0500
Joe Thompson <spam+@orion-com.com> writes:
Back in the day, if you low-level formatted a Mac hard drive (probably others too, but I was a Mac geek exclusively at the time), it was essential to know the correct "interleave ratio" for your drive. The idea was that successive blocks needed to be distributed around the disk for optimal read times. Slow controllers or fast platters required a 3:1 ratio, faster controllers/slower platters could do 2:1, and if you had *really* nice gear it could handle 1:1. -- Joe

there was a somewhat analogous issue with 3330s in the 70s. The controllers were fast enough to read successive records on the same platter ... however, for many operations, 3330s were "formatted" such that records were formatted identically on all 19 platters at the same head position. in some cases, there was a request for the record at the next rotational position ... but on a different platter. To switch platters required an additional command in the channel I/O program ... the end-to-end latency and command processing could result in the disk rotating past the start of the next record.

to mask such processing latency ... the platters were formatted with short, dummy records (between the data records) ... adding extra rotational latency to cover the elapsed time for the extra command processing. The problem was that there was some amount of variability in such command processing between the "original" vendor controller vis-a-vis various clone vendor products ... and different processor models had channel processing that operated at different thruput.

a major thruput sensitive activity was paging operations with 4k byte records ... the 3330 track allowed for three such records per track with only enough room for a 101-byte dummy "spacer" record (for the additional rotational spacing) ... official specs required a 110-byte spacer record to provide rotational latency for platter-change I/O command processing.

Some models of 370 had channels that processed the commands fast enough that the switch could be performed within the rotational latency of a 101-byte spacer record ... and some of the clone vendors also had faster controllers where the operation could be performed within the additional rotational latency.

one of the issues was that 370/158 (integrated) channels had higher latency, and the platter switch couldn't be done reliably with the standard controller in combination with 158 channel processing (although some of the clone vendor controllers in combination with the 158 channel could perform the switch within the rotational latency).

for all the 303x processors, channel processing was done by a "channel director" ... which was actually a repackaged 370/158 integrated channel with different covers (and couldn't perform the switch within the latency). The 3081 channels were also particularly slow and couldn't perform the switch within the 3330 rotational latency.
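
back-of-envelope on the spacer window, using nominal 3330 numbers (806kbytes/sec transfer, 3600rpm) ... illustrative only, not exact track geometry:

XFER_BYTES_PER_US = 0.806        # nominal 3330 transfer rate, 806 kbytes/sec
FULL_REV_MS = 60000.0 / 3600     # one rotation at 3600rpm

for spacer_bytes in (101, 110):
    window_us = spacer_bytes / XFER_BYTES_PER_US
    print(f"{spacer_bytes}-byte spacer: switch must complete in ~{window_us:.0f} microseconds")

# a channel/controller combination that can't process the head-switch command
# inside the window misses the record start and eats a full revolution instead
print(f"missing the window costs ~{FULL_REV_MS:.1f}ms")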

misc. past posts on the subject:
https://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001j.html#3 YKYGOW...
https://www.garlic.com/~lynn/2002b.html#17 index searching
https://www.garlic.com/~lynn/2004d.html#65 System/360 40 years old today
https://www.garlic.com/~lynn/2006r.html#40 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006t.html#19 old vm370 mitre benchmark
https://www.garlic.com/~lynn/2008s.html#52 Computer History Museum
https://www.garlic.com/~lynn/2009p.html#12 Secret Service plans IT reboot
https://www.garlic.com/~lynn/2010e.html#36 What was old is new again (water chilled)
https://www.garlic.com/~lynn/2010m.html#15 History of Hard-coded Offsets

past posts in this thread:
https://www.garlic.com/~lynn/2011.html#56 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#57 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#58 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#59 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#60 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#61 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#63 Speed of Old Hard Disks

--
virtualization experience starting Jan1968, online at home since Mar1970

Speed of Old Hard Disks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Speed of Old Hard Disks
Newsgroups: alt.folklore.computers
Date: Sat, 15 Jan 2011 12:05:17 -0500
"Joe Morris" <j.c.morris@verizon.net> writes:
Those here who supported the original HASP-II will recall that the HASP queue was formatted with interleaved records to reduce the read/write overhead. I also recall that ERP didn't do a good job of handling recoverable I/O errors when the checkpoint record spanned two cylinders...the result was the destruction of the record. What I can't recall is whether this was partly due to using an OEM block mux channel with Ampex 3330s on a 360/65 or whether it predated the channel's arrival.

mainframe dasd had a problem for a long time with power loss/drop ... data to be written came from processor memory ... and then started the long transit out thru the channel cables to the controller and finally to the disk surface. a power outage could drop mainframe memory while there was still enough power in the controller to finish a record write operation. this resulted in valid record data coming from processor memory turning to zeros ... with the controller finishing the write operation with zeros coming in over the interface ... and writing correct error recovery information (ecc for the trailing propagated zeros).

One of the partial countermeasures in the CMS "filesystem" was that control information was always written to a new record position ... and then the "superblock" was rewritten to point at the new copy ... vis-a-vis the old copy. one of the additions in the "new" CMS filesystem in the late 70s was having a pair of "superblocks" ... with writes ping-ponging back and forth between the pair. recovery/restart after an outage would look at the trailing area of both blocks to decide which was the good/recent version.

a power outage during the writing of any new control information ... would mean that the superblock wouldn't be updated (to reflect the new/updated control information). in the (late 70s) "new" CMS filesystem ... a power outage during the writing of the superblock (with propagated zeros) ... would mean that the trailing validity information wouldn't be valid ... and the previous/alternate version of the superblock would be used (and the old control information).

for at least the past 20-30 years ... there has been the issue of whether disk record writes are immune from trailing propagated zeros in a record (with no error indication).
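
a minimal sketch of the ping-pong superblock idea (layout invented for illustration ... a sequence number plus a trailing checksum standing in for the trailing validity information):

import zlib

def make_superblock(seq, payload: bytes) -> bytes:
    body = seq.to_bytes(8, "big") + payload
    return body + zlib.crc32(body).to_bytes(4, "big")    # trailing validity info

def pick_superblock(slot_a: bytes, slot_b: bytes) -> bytes:
    def parse(blk):
        body, crc = blk[:-4], int.from_bytes(blk[-4:], "big")
        if zlib.crc32(body) != crc:
            return None                                  # torn/propagated-zero write
        return int.from_bytes(body[:8], "big"), body[8:]
    valid = [p for p in (parse(slot_a), parse(slot_b)) if p]
    return max(valid)[1]                                 # newest valid copy wins

slot_a = make_superblock(41, b"old control info")
slot_b = make_superblock(42, b"new control info")
slot_b = slot_b[:-6] + b"\x00" * 6                       # power drop mid-write
print(pick_superblock(slot_a, slot_b))                   # falls back to seq 41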

--
virtualization experience starting Jan1968, online at home since Mar1970

Speed of Old Hard Disks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Speed of Old Hard Disks
Newsgroups: alt.folklore.computers
Date: Sat, 15 Jan 2011 12:43:41 -0500
re:
https://www.garlic.com/~lynn/2011.html#35 CKD DASD
https://www.garlic.com/~lynn/2011.html#61 Speed of Old Hard Disks

one of the things being done to compensate for the continued decline in relative system disk thruput ... was increasingly using various kinds of electronic storage for various kinds of caching.

per disk, a full-track buffer was also being used to compensate for various kinds of issues with disk rotation.

in the early 80s, 8mbyte caches were added to 3880 disk controllers. there were two models, the 3880-11 which did 4k record caching (targeted at paging and other 4k oriented operation) and the 3880-13 which did full-track caching.

one of the problems with the 4k record 8mbyte cache ... was that it was relatively small compared to mainframe memories. when there was a page fault ... there would be a read for the missing page ... which would get cached in the 3880-11 and also in main memory. Since the main memory and 3880-11 caches were approx. the same size ... effectively every page in the 3880-11 would also be in main memory ... so a page fault (for a page not in main memory) would almost never find that page in the 3880-11 cache either.

I proposed a 3880-11 management strategy I called "no-dup" ... all page read operations (from the mainframe) would be "destructive" ... as a result no page read into main memory would remain in the 3880-11 cache. the only way for a page to get into the 3880-11 cache was a mainframe write (for a page being replaced in main memory ... and no longer needed) ... where it would be written to disk with a copy being retained in the cache.

The default/original "dup" strategy ... meant effectively every page in the 3880-11 was a duplicate of a page in main memory (and 3880-11 cache sizes weren't sufficient to have any pages that weren't also in main memory). The "no-dup" (no duplicate) strategy sort of had the 3880-11 cache as an auxiliary extension of main memory.
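
a minimal sketch of the no-dup idea (simplified; names invented):

from collections import OrderedDict

class ControllerCache:
    def __init__(self, nslots):
        self.nslots, self.slots = nslots, OrderedDict()

    def read(self, page, no_dup=True):
        hit = page in self.slots
        if hit and no_dup:
            del self.slots[page]    # destructive read: the host now holds the copy
        return hit

    def write(self, page):          # page being replaced in main memory
        self.slots[page] = True     # keep a copy on its way to disk
        if len(self.slots) > self.nslots:
            self.slots.popitem(last=False)   # evict oldest

cache = ControllerCache(3)
cache.write("p1")
print(cache.read("p1"))   # hit ... and p1 leaves the cache
print(cache.read("p1"))   # miss ... no duplicate of what main memory already has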

misc. past posts mentioning no-dup strategy:
https://www.garlic.com/~lynn/93.html#12 managing large amounts of vm
https://www.garlic.com/~lynn/93.html#13 managing large amounts of vm
https://www.garlic.com/~lynn/94.html#9 talk to your I/O cache
https://www.garlic.com/~lynn/2000d.html#13 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001l.html#55 mainframe question
https://www.garlic.com/~lynn/2001n.html#78 Swap partition no bigger than 128MB?????
https://www.garlic.com/~lynn/2002b.html#10 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002b.html#16 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002b.html#19 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002e.html#11 What are some impressive page rates?
https://www.garlic.com/~lynn/2002f.html#20 Blade architectures
https://www.garlic.com/~lynn/2002f.html#26 Blade architectures
https://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?
https://www.garlic.com/~lynn/2005c.html#27 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005m.html#28 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2006c.html#8 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006e.html#45 using 3390 mod-9s
https://www.garlic.com/~lynn/2006j.html#11 The Pankian Metaphor
https://www.garlic.com/~lynn/2008h.html#84 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008k.html#80 How to calculate effective page fault service time?
https://www.garlic.com/~lynn/2010i.html#20 How to analyze a volume's access by dataset

--
virtualization experience starting Jan1968, online at home since Mar1970

Speed of Old Hard Disks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Speed of Old Hard Disks
Newsgroups: alt.folklore.computers
Date: Sat, 15 Jan 2011 13:30:29 -0500
re:
https://www.garlic.com/~lynn/2011.html#67 Speed of Old Hard Disks

disk caches will only have stuff that was previously read into real storage. for it to be needed again, the system will need to have replaced it with something else (not enough room for both) ... and the criteria for replacement tends to be age. with disk caches of similar size to main memory ... they will also have similar problems with new stuff being read needing to replace old stuff ... and use a similar strategy to the one being used by the main processor. As a result, anything that the main processor has replaced and needs again ... the disk cache will also have replaced. net is that everything in the cache is a duplicate of real storage ... and nothing more ... unless the disk cache is significantly larger than processor real storage (being managed by similar strategies).

so the 3880-11 was an 8mbyte 4kbyte-record cache ... the 3880-13 was an 8mbyte full-track cache ... and from a caching standpoint they suffered similar dup/no-dup strategy shortcomings. however, they were advertising that the 3880-13 was achieving a 90% cache "hit" ratio.

the measurement was a sequentially-reading application with 3380 tracks formatted with ten 4k blocks. the 1st 4k block read (on a track) resulted in a miss and the whole track being read ... the next nine sequential 4k record reads were "hits". the issue was that it wasn't really cache behavior ... but effectively a full-track buffer "pre-read".

now lots of mainframe access methods allowed changing the control specification to do full-track reading (with no application change). doing sequential access method full-track reads would drop the cache hit ratio from 90% to zero percent. In effect, a simple full-track read buffer would have shown the same results as the 8mbyte cache (a real cache not adding a whole lot more than a simpler full-track buffer).
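
the 90% figure is just arithmetic, not cache behavior ... quick python check:

BLOCKS_PER_TRACK = 10
reads, hits = 0, 0
for block in range(100):                 # ten tracks read sequentially, 4k at a time
    reads += 1
    if block % BLOCKS_PER_TRACK != 0:    # everything but the first block of a track
        hits += 1
print(hits / reads)                      # 0.9 ... and full-track reads make it 0.0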

misc. past posts mentioning 3880-13
https://www.garlic.com/~lynn/2001.html#18 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001b.html#61 Disks size growing while disk count shrinking = bad performance
https://www.garlic.com/~lynn/2001c.html#17 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001d.html#49 VTOC position
https://www.garlic.com/~lynn/2001d.html#68 I/O contention
https://www.garlic.com/~lynn/2001l.html#53 mainframe question
https://www.garlic.com/~lynn/2001l.html#54 mainframe question
https://www.garlic.com/~lynn/2001l.html#55 mainframe question
https://www.garlic.com/~lynn/2001l.html#63 MVS History (all parts)
https://www.garlic.com/~lynn/2002d.html#55 Storage Virtualization
https://www.garlic.com/~lynn/2002o.html#3 PLX
https://www.garlic.com/~lynn/2002o.html#52 "Detrimental" Disk Allocation
https://www.garlic.com/~lynn/2003b.html#7 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003i.html#72 A few Z990 Gee-Wiz stats
https://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#17 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#20 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#21 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#22 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004l.html#29 FW: Looking for Disk Calc program/Exec
https://www.garlic.com/~lynn/2005m.html#28 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005t.html#50 non ECC
https://www.garlic.com/~lynn/2006.html#4 Average Seek times are pretty confusing
https://www.garlic.com/~lynn/2006c.html#46 Hercules 3.04 announcement
https://www.garlic.com/~lynn/2006e.html#45 using 3390 mod-9s
https://www.garlic.com/~lynn/2006e.html#46 using 3390 mod-9s
https://www.garlic.com/~lynn/2006i.html#41 virtual memory
https://www.garlic.com/~lynn/2006j.html#14 virtual memory
https://www.garlic.com/~lynn/2006s.html#32 Why magnetic drums was/are worse than disks ?
https://www.garlic.com/~lynn/2006v.html#31 MB to Cyl Conversion
https://www.garlic.com/~lynn/2007e.html#10 A way to speed up level 1 caches
https://www.garlic.com/~lynn/2007e.html#38 FBA rant
https://www.garlic.com/~lynn/2008b.html#15 Flash memory arrays
https://www.garlic.com/~lynn/2008d.html#52 Throwaway cores
https://www.garlic.com/~lynn/2008i.html#41 American Airlines
https://www.garlic.com/~lynn/2008s.html#39 The Internet's 100 Oldest Dot-Com Domains
https://www.garlic.com/~lynn/2010.html#47 locate mode, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010.html#51 locate mode, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010g.html#11 Mainframe Executive article on the death of tape
https://www.garlic.com/~lynn/2010g.html#55 Mainframe Executive article on the death of tape
https://www.garlic.com/~lynn/2010i.html#20 How to analyze a volume's access by dataset
https://www.garlic.com/~lynn/2010n.html#14 Mainframe Slang terms

--
virtualization experience starting Jan1968, online at home since Mar1970

Speed of Old Hard Disks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Speed of Old Hard Disks
Newsgroups: alt.folklore.computers
Date: Sat, 15 Jan 2011 13:34:42 -0500
Walter Bushell <proto@panix.com> writes:
The other thing is the enormous increase in data density which allows us to use smaller disks. Low latency on a track is of little value, if there is not much data on a track. IOW, modern disks have higher rpms (angular velocity) because they have higher data density.

vertical recording, smaller inter-track spacing, and heads flying closer to the surface ... upthread posts
https://www.garlic.com/~lynn/2011.html#57 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#60 Speed of Old Hard Disks

referencing old email from the 80s
https://www.garlic.com/~lynn/2006s.html#email871122
and
https://www.garlic.com/~lynn/2006s.html#email871230

--
virtualization experience starting Jan1968, online at home since Mar1970

Speed of Old Hard Disks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Speed of Old Hard Disks
Newsgroups: alt.folklore.computers
Date: Sat, 15 Jan 2011 17:10:49 -0500
3880 11/13 cache discussion
https://www.garlic.com/~lynn/2011.html#67 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#68 Speed of Old Hard Disks

old posts with references (and old email) to DMKCOL ... a facility we added to VM370 that would capture every disk record accessed (aka COLlect) with some fancy stuff that drastically reduced the collection & reduction overhead:
https://www.garlic.com/~lynn/2006y.html#35 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2007.html#3 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2010i.html#18 How to analyze a volume's access by dataset

the SJR/VM system included I/O reliability enhancements (I rewrote the I/O supervisor for the disk engineering & product test labs in bldgs. 14&15 ... to never fail/hang ... so they could do multiple, concurrent, on-demand testing; earlier efforts that attempted to use MVS found MVS had a 15min MTBF in that environment) ... it was also installed in Tucson (where the 3880 cache development work was being done) ... old email ref:
Date: 07/11/80 09:50:34
From: wheeler

let's make sure we get the tape off to Tucson today. Turns out there is some mods. in the system to record all DASD records accessed. That info. is then available to a reduction program to do things like simulate effect of I/O cache. Code has been in the system for some time, but Tucson's system lacks the command to enable it. Somebody in Tucson is looking at it for putting out as a product.


... snip ... top of post, old email index

SJR had written a "cache" simulator that used real-live DMKCOL data as input (production VM systems from several locations in the San Jose area ... but also 2nd level data from production MVS systems running under VM370). As mentioned in the previous descriptions of the simulator, one of the results was that, given the same aggregate amount of electronic storage, it was more efficient to deploy it as a single large system cache ... rather than partitioning it into various dedicated caches (like per channel, controller, and/or disk).

The global vis-a-vis local LRU issue was independent of the dup/no-dup strategies for managing duplicate cache entries at different levels in a multi-level cache "hierarchy".

This is basically identical to the results found using a global LRU replacement strategy versus a (partitioned) "local LRU" replacement strategy. This old post mentions getting dragged into an academic festouche (over awarding a stanford phd for work on global LRU):
https://www.garlic.com/~lynn/2006w.html#46
with old communication trying to help out
https://www.garlic.com/~lynn/2006w.html#email821019
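
a toy global-vs-partitioned LRU comparison (invented workload ... just to illustrate the effect, not the actual simulator):

from collections import OrderedDict
import random

def lru_hits(refs, caches):
    # caches: list of (accepts, capacity); each ref goes to the first cache
    # whose filter accepts its device
    luts = [OrderedDict() for _ in caches]
    hits = 0
    for dev, rec in refs:
        for (accepts, cap), lut in zip(caches, luts):
            if accepts(dev):
                key = (dev, rec)
                if key in lut:
                    hits += 1
                    lut.move_to_end(key)
                else:
                    lut[key] = True
                    if len(lut) > cap:
                        lut.popitem(last=False)
                break
    return hits

random.seed(1)
# skewed traffic: 80% of references to device 0, 20% to device 1
refs = [(0, random.randrange(400)) if random.random() < 0.8
        else (1, random.randrange(400)) for _ in range(20000)]

print(lru_hits(refs, [(lambda d: True, 200)]))        # one global cache
print(lru_hits(refs, [(lambda d: d == 0, 100),
                      (lambda d: d == 1, 100)]))      # same total storage, partitioned
# the single global cache should come out ahead ... the partitioned slots for
# the idle device can't be lent to the busy one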

--
virtualization experience starting Jan1968, online at home since Mar1970

Speed of Old Hard Disks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Speed of Old Hard Disks
Newsgroups: alt.folklore.computers
Date: Sat, 15 Jan 2011 20:36:51 -0500
re:
https://www.garlic.com/~lynn/2011.html#70 Speed of Old Hard Disks

the following "call trick" reference is the os/360 convention for relocatable adcons ... these "relocatable" address constants are somewhat randomly distributed throughput program ... and before program can be executed ... the image has to be preloaded to fixed location in address space and all the address constants have to be swizzled to correspond to the loaded location. this creates a lot of extra overhead and also precludes being able to directly map program image to address space (w/o the swizzling) and makes it impossible to map same executable image to different locations in different address spaces ... lots of past posts about the problem
https://www.garlic.com/~lynn/submain.html#adcon
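
a minimal sketch of the swizzling problem (invented image layout; the RLD here is just a list of adcon offsets):

import struct

def load(image: bytes, rld_offsets, load_addr: int) -> bytes:
    img = bytearray(image)
    for off in rld_offsets:                      # each RLD entry: one 4-byte adcon
        (adcon,) = struct.unpack_from(">I", img, off)
        struct.pack_into(">I", img, off, adcon + load_addr)   # swizzle in place
    return bytes(img)                            # now bound to load_addr

# toy image: adcons at offsets 0 and 8, plain data in between
image = struct.pack(">III", 0x0100, 0xDEAD, 0x0200)
print(load(image, [0, 8], 0x20000).hex())        # every adcon touched before running
# ... which is exactly why the same image can't be mapped read-only at
# different addresses in different address spaces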

DMKCOL is the collection routine. The high performance reduction/analysis process ... was being looked at for addition to system dynamic allocation ... deciding where to place new files in a large disk farm (potentially involving hundreds of disks) for load balancing.
Date: 06/07/84 07:45:21
From: wheeler

re: call trick; of course it eliminates the situation where 1) loader has to pre-fetch all program pages to resolve RLDs and/or 2) excessive complexity in the paging supervisor to verify if anybody has a "page fetch appendage" active for a specific page (i.e. check to see if somebody has to be called following a page fetch, but prior to returning control).

re: xxxxxx; one of the references is to some DMKCOL work around 1979-1980. A special modification was made to CP to globally monitor all DASD record references and pass the info. to a specific virtual machine. xxxxxx developed algorithms and a program which could reduce the data in real-time (with minimal/negligable overhead). The activity at the time was to output the data to a cache model program which modeled hit ratios for various cache architectures. Size and placement of different caches were compared (total amount of all caches and/or size of individual caches ... including global CEC cache, cache at the channel, cache at the control unit, cache at the drive, etc).

One of xxxxxx's claims was that since his algorithm carried negligible overhead and ran in real-time, a global file sub-system could run in constantly ... using the data to constantly maintain optimal record placement.


... snip ... top of post, old email index

--
virtualization experience starting Jan1968, online at home since Mar1970

Speed of Old Hard Disks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Speed of Old Hard Disks
Newsgroups: alt.folklore.computers
Date: Sat, 15 Jan 2011 22:52:34 -0500
despen writes:
Yep, the ADCONS have to be unique to the address space. As is done with Writable Static and LE.

lots of past posts about the problem
https://www.garlic.com/~lynn/submain.html#adcon

Broken link.


re:
https://www.garlic.com/~lynn/2011.html#71 Speed of Old Hard Disks

finger slip ... "main" not mail:
https://www.garlic.com/~lynn/submain.html#adcon

note that tss/360 at least got adcons right ... even if they had huge problems getting the rest right.

a little recent tss/360 x-over
https://www.garlic.com/~lynn/2011.html#6 IBM 360 display and Stanford Big Iron
https://www.garlic.com/~lynn/2011.html#14 IBM Future System
https://www.garlic.com/~lynn/2011.html#20 IBM Future System
https://www.garlic.com/~lynn/2011.html#28 Personal histories and IBM computing
https://www.garlic.com/~lynn/2011.html#44 CKD DASD

melinda's history talks a bit about the science center wanting the "virtual memory operating system" charter ... but it went instead to tss/360. there was a comment from the science center that tss/360 should look into what they were doing in somewhat more depth, because Atlas had done demand paging and "it was known to not be working well".

melinda is moving from:
http://www.leeandmelindavarian.com/Melinda/
to:
http://www.leeandmelindavarian.com/Melinda#VMHist

the adcon stuff was done in conjunction with having done a paged-mapped filesystem for CMS on cp67 and then moved to vm370 ... recent reference also in the "Personal histories" post.
https://www.garlic.com/~lynn/submain.html#mmap

--
virtualization experience starting Jan1968, online at home since Mar1970

Speed of Old Hard Disks - adcons

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Speed of Old Hard Disks - adcons
Newsgroups: alt.folklore.computers
Date: Sun, 16 Jan 2011 11:41:38 -0500
despen writes:
I'm trying to figure out what TSS did. Both you and Lynn referred to TSS. I never used it myself.

As far as I can tell:

http://www.bitsavers.org/pdf/ibm/360/tss/Z20-1788-0_TSS360_concepts.pdf
Page 11-12.

TSS is a bit different. It looks like the machine had a "relocation mode". So Adcons were "R" type Adcons and relocation took place at execution time. This allowed the R-cons to be part of the shared code between address spaces but introduced a whole additional layer of complexity to the machine's architecture.

Strange that the problem remains unsolved in today's IBM mainframes and is only getting worse since C has been introduced to the mainframe and COBOL has been introduced to pointers.


re:
https://www.garlic.com/~lynn/2011.html#71 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#72 Speed of Old Hard Disks

360/67 was effectively 360/65 with virtual memory added ... similar to and predating virtual memory available later on 370s.

described in some detail in 360/67 functional characteristics:
http://www.bitsavers.org/pdf/ibm/360/functional_characteristics/

when TSS/360 got the virtual memory, time-sharing mission, the science center started its own effort ... doing both virtual memory and virtual machine support; initially "cp/40", implemented on a custom modified 360/40 (they originally wanted a 360/50 ... but since so many of the "spare" 360/50s were going to the FAA for the ATC system, they had to settle for a 40). later they got a standard 360/67 (officially for tss/360) and cp/40 morphed into cp/67 (& cms).
http://www.bitsavers.org/pdf/ibm/360/cp67/

Melinda's history goes into some amount of the details (ctss, project mac, virtual machines, etc);
http://www.leeandmelindavarian.com/Melinda#VMHist

past posts mentioning science center (4th flr, 545 tech sq)
https://www.garlic.com/~lynn/subtopic.html#545tech

lots of univs. got 360/67s in anticipation of running tss/360. tss/360 encountered a large number of problems ... and very few installations actually used the machines for tss/360. many just used the machine as a 360/65 with standard os/360 ... ignoring the virtual memory mode.

there were some installations that developed their own software for virtual memory mode ... including Stanford Orvyl
https://www.garlic.com/~lynn/2011.html#6 IBM 360 display and Stanford Big Iron

and Univ of Michigan MTS ... several refs here:
https://www.garlic.com/~lynn/2010j.html#67 Article says mainframe most cost-efficient platform

Boeing Huntsville got a 2-processor (duplex) multiprocessor 360/67 ... originally for tss/360 ... but dropped back to using it with os/360. Boeing Huntsville was using it with long-running graphics 2250 display applications ... and os/360 had enormous problems with storage fragmentation with long-running applications. Boeing did a hack to OS/360 MVT (release 13) that put os/360 into virtual memory mode ... it didn't support demand paging ... just used virtual memory mode to re-organize memory into contiguous storage (as a countermeasure to the storage fragmentation problem). Recent mention of the Boeing Huntsville machine being moved to seattle while I was there setting up cp67 for the early formation of BCS ... part of topic drift in this post:
https://www.garlic.com/~lynn/2010q.html#59 Boeing Plant 2 ... End of an Era

for other drift ... mention of the later tss/370 ... the group had been cut way back & was supporting a small number of customers. One of the things that kept tss/370 going on 370 was a special bid product for AT&T ... involving a stripped down tss/370 kernel (SSUP) with unix layered on top ... recent mention about looking at SSUP as the basis for other things (besides unix):
https://www.garlic.com/~lynn/2011.html#20 IBM Future System
discussed in this older post:
https://www.garlic.com/~lynn/2001m.html#53 TSS/360

misc past posts mentioning SSUP
https://www.garlic.com/~lynn/2007m.html#69 Operating systems are old and busted
https://www.garlic.com/~lynn/2010e.html#17 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
https://www.garlic.com/~lynn/2010h.html#61 (slightly OT - Linux) Did IBM bet on the wrong OS?
https://www.garlic.com/~lynn/2010i.html#44 someone smarter than Dave Cutler
https://www.garlic.com/~lynn/2010l.html#2 TSS (Transaction Security System)
https://www.garlic.com/~lynn/2010o.html#0 Hashing for DISTINCT or GROUP BY in SQL

--
virtualization experience starting Jan1968, online at home since Mar1970

shared code, was Speed of Old Hard Disks - adcons

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: shared code, was Speed of Old Hard Disks - adcons
Newsgroups: alt.folklore.computers
Date: Sun, 16 Jan 2011 12:36:34 -0500
John Levine <johnl@iecc.com> writes:
The shared code hackery is described starting on page 38, with CSECTS (R/O shared code) and PSECTS (R/W per-processes data) and the V-cons and R-cons that pointed to them. The V and R cons were just a software convention, no hardware support. It did have a dynamic loader which let you start running your program, and the first time it referenced a symbol, it trapped to the dynamic loader which loaded and relocated the routine.

TSS was incredibly buggy, but I can report from experience that the loader worked fine.


re:
https://www.garlic.com/~lynn/2011.html#71 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#72 Speed of Old Hard Disks - adcons
https://www.garlic.com/~lynn/2011.html#73 Speed of Old Hard Disks - adcons

vm370 cms was initially configured to work with 370 "64kbyte" shared segments. the cms virtual address space started with segment 0 being non-shared and segment 1 being shared, cms kernel code. application space started at segment 2 (nominally non-shared) and grew upwards (from x'20000').

370 virtual memory architecture originally included "segment protect" ... in the virtual memory tables, it was possible to specify a segment as r/o protected (different address spaces could share the same area without worrying about applications in other address spaces changing the contents). when the retrofit of virtual memory hardware to the 370/165 ran into schedule problems ... several virtual memory features were dropped, including segment protect. This required the other models to redo their virtual memory implementation to the 165 subset ... and any software written for the new features ... had to be reworked. For CMS shared segments, vm370 had to drop back to a real kludge playing tricks with storage protect keys.

I was doing some fancy stuff with paged mapped filesystem and shared segments on cp67 during the period that much of the rest of company was distracted with Future System effort ... some past posts
https://www.garlic.com/~lynn/submain.html#futuresys

then I started converting them to the vm370 base, mentioned in this old email
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

a lot of this is also discussed in this recent posting
https://www.garlic.com/~lynn/2011.html#28 Personal histories and IBM computing

I had taken some frequently used CMS applications and converted them to "shared executable" (eliminating use of work storage internal to the executable image) and making the image location independent (the same shared segment can appear in different address spaces at different locations). This is where the adcon problems really showed up, and it took lots of hacking to eliminate them ... past posts mention adcon issues:
https://www.garlic.com/~lynn/submain.html#adcon

one of the "problems" was standard CMS kernel call convention was "SVC 202" with optional exception handling address constant immediately following the instruction. On return, the kernel would check if there was an exception handling address ... and if there was no exception, return to caller +4 (skipping over the address field) ... if there was an exception and there was exception address field, it would branch to the specified address (if there was an exception and no exception address, the kernel would abort the application and take default action). A frequent application convention would have "*+4" in the exception address field ... so exception & non-exception returned to same application location (and then check the return code for whether there was exception). These "addresses" permeated CMS code ... binding the executable image to fixed address location. I had to go thru all the application source and rework *ALL* such address use ... to elimiante binding executable image to specific location.

Standard executable images could be located in the paged-mapped filesystem ... loading effectively was just mapping the address space to the file in the paged-mapped filesystem ... w/o having to run thru the image fixing up all such address constants. misc. past posts mentioning paged mapped filesystem
https://www.garlic.com/~lynn/submain.html#mmap
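
a rough present-day POSIX analogy (mmap standing in for the paged-mapped filesystem, not the CMS interface): "loading" an image is just mapping the file, pages brought in on demand, with no relocation pass over the image:

  #include <fcntl.h>
  #include <stddef.h>
  #include <sys/mman.h>
  #include <sys/stat.h>
  #include <unistd.h>

  /* map an executable image file directly into the address space */
  static void *map_image(const char *path, size_t *len)
  {
      int fd = open(path, O_RDONLY);
      if (fd < 0) return NULL;
      struct stat st;
      if (fstat(fd, &st) < 0) { close(fd); return NULL; }
      *len = (size_t)st.st_size;
      void *img = mmap(NULL, *len, PROT_READ | PROT_EXEC, MAP_PRIVATE, fd, 0);
      close(fd);                        /* the mapping persists after close */
      return img == MAP_FAILED ? NULL : img;
  }

of course this only pays off if the image is location independent (or always mapped at its bound address) ... which is the point of eliminating the adcons.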

During FS ... most of 370 development was being killed off ... but with the demise of FS ... there was a mad rush to get products back into the 370 hardware & software product pipelines.

This mad rush helped motivate picking up part of the CMS shared segment stuff I had done, for vm370 release 3 ... w/o the address independent support and/or the paged-mapped filesystem. This was called "DCSS" ... but loaded at a fixed, specified address (although the reworked CMS code had both the changes to run in R/O shared segments ... and the elimination of the address constants requiring execution at a fixed location). The DCSS convention had the executable image pre-loaded at some fixed address (with any necessary address constants appropriately swizzled) and then that portion of the address space "saved" in a special vm370 paging area, using a process defined in the vm370 kernel module called DMKSNT ... recently mentioned here
https://www.garlic.com/~lynn/2011.html#21 zLinux OR Linux on zEnterprise Blade Extension???
as well as here:
https://www.garlic.com/~lynn/2011.html#28 Personal histories and IBM computing

--
virtualization experience starting Jan1968, online at home since Mar1970

America's Defense Meltdown

From: lynn@garlic.com (Lynn Wheeler)
Date: 16 Jan, 2011
Subject: America's Defense Meltdown
Blog: Boyd Strategy
re:
https://www.garlic.com/~lynn/2011.html#55 America's Defense Meltdown

A somewhat (small) Boyd story, equivalent to the heavy bomber comparison with the Stuka & fighters, was the air force missile used in Vietnam (examples of "To Be" leaders willing to sacrifice everything & everybody in support of their position and cherished beliefs).

he had reviewed the performance and test results before it went into production ... which showed it hitting the target every time. Boyd's response was that it would be lucky to hit 10% of the time (and he then explained why that was, in real world conditions).

In Vietnam, he turned out to be correct. Boyd then tells of the air force general in Vietnam, at one point, grounding all fighters until they were retrofitted with navy sidewinders (which would hit at least twice as often as the air force missile).

the general lasted three months before being called on the carpet in the pentagon (and replaced). he was reducing the air force budget share ... because fewer planes and pilots were being lost ... but the absolute worst offense that could possibly be committed was that he was increasing the navy's budget share (by using sidewinders).

Despite the criticism in the book, Boyd ranked Annapolis highest of all the academies (that he would give briefings at) ... with colorado springs being criticized for turning out accountants ... possibly reflecting the Vietnam pentagon attitudes.

with regard to wall street/financial venality totally dwarfing the pentagon ... a small part of the recent economic bubble/mess ... was business people directing the risk managers to fiddle the inputs until they got the desired outputs (i.e. GIGO, garbage in - garbage out). The personal compensation for doing triple-A rated, toxic CDO transactions swamped any possible concern that the instruments could take down their institutions, the economy, and/or the country. some old articles:

How Wall Street Lied to Its Computers
http://bits.blogs.nytimes.com/2008/09/18/how-wall-streets-quants-lied-to-their-computer
Subprime = Triple-A ratings? or 'How to Lie with Statistics' (gone 404 but lives on at the wayback machine)
https://web.archive.org/web/20071111031315/http://www.bloggingstocks.com/2007/07/25/subprime-triple-a-ratings-or-how-to-lie-with-statistics/

wall street could fiddle numbers with the best of the generals (reference to the heavy bomber vis-a-vis stuka comparison).

misc. past posts mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

Speed of Old Hard Disks - adcons

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Speed of Old Hard Disks - adcons
Newsgroups: alt.folklore.computers
Date: Sun, 16 Jan 2011 15:34:12 -0500
Peter Flass <Peter_Flass@Yahoo.com> writes:
Not strange. No computer in wide use today has the hardware necessary to do this. Linux does all the gyrations with the GOT and the PLT to get this to work. To do this right you need segmentation like Multics.

re:
https://www.garlic.com/~lynn/2011.html#73 Speed of Old Hard Disks - adcons
https://www.garlic.com/~lynn/2011.html#74 shared code, was Speed of Old Hard Disks - adcons

trivia ... some of the CTSS people went to Multics on the 5th flr of 545 tech sq ... and others went to the science center on the 4th flr of 545 tech sq (doing virtual machines, cp40, cp67, etc). Melinda's history devotes quite a bit to the science center wanting the MIT follow-on to CTSS (i.e. project mac, which went to ge/multics).
http://www.leeandmelindavarian.com/Melinda#VMHist

misc. past posts mentioning 545 tech sq
https://www.garlic.com/~lynn/subtopic.html#545tech

--
virtualization experience starting Jan1968, online at home since Mar1970

Today, the IETF Turns 25

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Today, the IETF Turns 25
Newsgroups: alt.folklore.computers
Date: Sun, 16 Jan 2011 15:40:43 -0500
Today, the IETF Turns 25
http://tech.slashdot.org/story/11/01/16/1727209/Today-the-IETF-Turns-25

for some drift ... my rfc index
https://www.garlic.com/~lynn/rfcietff.htm

part of the commercialization of IETF (and ISOC) has been the increasingly restrictive copyright policies related to RFC publications.

--
virtualization experience starting Jan1968, online at home since Mar1970

subscripti ng

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: subscripti ng
Newsgroups: bit.listserv.ibm-main
Date: 16 Jan 2011 12:54:34 -0800
popular press has people using less than 5% of their brains. there have been extensive advances in knowledge of the brain over the last decade from MRI studies. recent book discussing the structure in some detail (also available on kindle)
https://www.amazon.com/Iconoclast-Neuroscientist-Reveals-Think-Differently/dp/1422115011

a common theme in the book and various related papers on MRI studies is that as the brain grows and adapts, it attempts to optimize/minimize its energy (and oxygen) use ... apparently a survival characteristic (the brain being one of the body's major energy users).

--
virtualization experience starting Jan1968, online at home since Mar1970

Speed of Old Hard Disks - adcons

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Speed of Old Hard Disks - adcons
Newsgroups: alt.folklore.computers
Date: Sun, 16 Jan 2011 17:20:04 -0500
despen writes:
But it is strange. IBM adds new opcodes to handle frequently executed operations all the time. They added a whole bunch of opcodes to handle C string functions.

For adcons they just have to be able to add the same value to a whole bunch of 4 byte values quickly.

Right now they don't make any attempt to get the adcons adjacent but that's more of a compiler/binder issue.


re:
https://www.garlic.com/~lynn/2011.html#73 Speed of Old Hard Disks - adcons
https://www.garlic.com/~lynn/2011.html#74 shared code, was Speed of Old Hard Disks - adcons
https://www.garlic.com/~lynn/2011.html#76 Speed of Old Hard Disks - adcons

one might claim that the POK favorite son operating system has done little or nothing to move away from its 60s real storage heritage (nearly 50 years gone) ... any more than it has moved away from its 60s CKD DASD i/o-resource vis-a-vis real-storage heritage.

the 60s CKD DASD trade-off was scarce, limited real storage ... put file data structures on disk and use relatively plentiful i/o resources to search/manage the file data structures. as noted in another thread:
https://www.garlic.com/~lynn/2011.html#35 CKD DASD

... the trade-off had inverted in the 70s ... and the CKD DASD convention of burning i/o resources for search/manage of file data structures (on disk) was becoming an enormous thruput bottleneck.

Something of a crossover: the 60s had application disk access libraries building channel programs (with "real" addresses) and using the EXCP kernel call to execute the application-built channel programs for file access.

In the POK favorite son operating system's transition to virtual memory ... keeping the convention of channel programs being built in the application space (while channel programs execute with real addresses), they started out borrowing the cp67 CCWTRANS routine that created a channel program copy ... substituting real addresses for the application space (virtual) addresses ... also discussed in a post from another thread
https://www.garlic.com/~lynn/2011.html#35 CKD DASD

another trade-off from the 60s is the heavy use of pointer-passing APIs ... resulting in all sorts of hacks over the years ... the kernel having to occupy every application address space, and the "common segment" kludge used to pass parameters/data back & forth between applications in one address space and system services in different address spaces. recent common segment/CSA "kludge" post
https://www.garlic.com/~lynn/2011.html#45 CKD DASD

allowing pointers to occur arbitrarily thru-out the executable image, with the loader swizzling/fixing all of them as part of loading the executable image (before execution), is another simplification/trade-off from the 60s (that has never been corrected).
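
for contrast, a minimal sketch of that load-time swizzle pass (relocation-table format hypothetical): every 4-byte adcon gets the load base added before execution can begin ... touching, and dirtying, every page an adcon lands on:

  #include <stddef.h>
  #include <stdint.h>
  #include <string.h>

  /* loader "swizzle" pass: reloc_offsets lists where the 4-byte adcons
     sit in the image (hypothetical format); each gets the load base
     added -- note every page containing an adcon is touched and marked
     changed before the program ever runs */
  static void swizzle(uint8_t *image, const uint32_t *reloc_offsets,
                      size_t nreloc, uint32_t load_base)
  {
      for (size_t i = 0; i < nreloc; i++) {
          uint32_t adcon;
          memcpy(&adcon, image + reloc_offsets[i], sizeof adcon);
          adcon += load_base;
          memcpy(image + reloc_offsets[i], &adcon, sizeof adcon);
      }
  }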

--
virtualization experience starting Jan1968, online at home since Mar1970

Chinese and Indian Entrepreneurs Are Eating America's Lunch

From: lynn@garlic.com (Lynn Wheeler)
Date: 17 Jan, 2011
Subject: Chinese and Indian Entrepreneurs Are Eating America's Lunch
Blog: Facebook
Chinese and Indian Entrepreneurs Are Eating America's Lunch
http://www.foreignpolicy.com/articles/2010/12/28/chinese_and_indian_entrepreneurs_are_eating_americas_lunch

was in HK in '91 (20 yrs ago) and there was a local newspaper article about whether China or India was better positioned to do this.

During the recent crash there was a report that the ratio of executive to worker compensation had exploded to 400:1, after having been 20:1 for a long time (and 10:1 in most of the rest of the world). Another comparison was the enormous wall street compensation versus the executives at the leading banks in China (the pursuit of personal compensation overriding all other considerations). This is also somewhat Boyd's To Be or To Do choice.
https://www.garlic.com/~lynn/2000e.html#35
https://www.garlic.com/~lynn/2000e.html#36

other past Boyd related postings
https://www.garlic.com/~lynn/subboyd.html

Report from '90 census claimed that half of 18yr olds were functionally illiterate. Educational quality has continued to decline in the 20 yrs since.

NY comptroller had a report that aggregate wall street bonuses spiked over 400% during the bubble (much appears to be from the $27T in triple-A rated toxic CDO transactions) ... and there appears to be lots of activity to *NOT* return to pre-bubble levels. Also a report that the financial sector tripled in size (as percent of GDP) during the bubble (also appearing to be based on the $27T in triple-A rated toxic CDOs).
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
https://www.bloomberg.com/news/articles/2008-10-27/evil-wall-street-exports-boomed-with-fools-born-to-buy-debt

Possibly believing SEC wasn't doing anything, GAO started doing reports of public company fraudulent financial filings ... showing an uptick even after SOX (supposedly to prevent such stuff). The explanation was that fraudulent filings boosted executive bonuses ... which weren't returned, even if corrected financials were later refiled.

... and

Did China's economy overtake the U.S. in 2010?
http://blog.foreignpolicy.com/posts/2011/01/14/did_chinas_economy_overtake_the_us_in_2010

and recent thread with similar discussions:
https://www.garlic.com/~lynn/2011.html#46 What do you think about fraud prevention in the governments?
https://www.garlic.com/~lynn/2011.html#48 What do you think about fraud prevention in the governments?
https://www.garlic.com/~lynn/2011.html#49 What do you think about fraud prevention in the governments?
https://www.garlic.com/~lynn/2011.html#50 What do you think about fraud prevention in the governments?
https://www.garlic.com/~lynn/2011.html#53 What do you think about fraud prevention in the governments?

--
virtualization experience starting Jan1968, online at home since Mar1970

shared code, was Speed of Old Hard Disks - adcons

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: shared code, was Speed of Old Hard Disks - adcons
Newsgroups: alt.folklore.computers
Date: Mon, 17 Jan 2011 11:30:15 -0500
despen writes:
Ah I see, thanks.

So they isolated all the RCONS and modified them for each address space sharing the module. But they had no hardware to assist with the relocation and still don't.


still enormous savings in the paged-mapped environment ... avoided having to prefetch the executable image ... processing all the ADCONS ... which tended to be relatively randomly distributed through-out the executable ... and the modifications also resulted in the "changed" attribute being set on the related pages (making them private to the specific address space).

it was possible to turn some number of relocatable adcons into "absolute" adcons ... by expressing them as the difference between two relocatable adcons (and therefore not needing swizzle/adjust) ... where one of the addresses was known to be in some register. The processing then becomes a register-to-register copy and an add operation ... in lieu of a load operation (slightly more processing, but effectively negligible overhead on modern machines).
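
a small C sketch of that transformation (names hypothetical): the image carries a displacement, the difference of two adcons, which is the same wherever the image lands; the address is formed at run time from a base assumed to already be in a register:

  #include <stdio.h>

  /* the image stores a displacement rather than an absolute address */
  static const char module_base[] = "base: target";
  #define TARGET_OFF 6   /* stands in for A(TARGET)-A(BASE), fixed at link time */

  /* register copy + add, in lieu of loading a swizzled adcon */
  static const char *resolve(const char *base_reg)
  {
      return base_reg + TARGET_OFF;
  }

  int main(void)
  {
      puts(resolve(module_base));   /* prints "target" at any load address */
      return 0;
  }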

recent posts in this thread:
https://www.garlic.com/~lynn/2011.html#73 Speed of Old Hard Disks - adcons
https://www.garlic.com/~lynn/2011.html#74 shared code, was Speed of Old Hard Disks - adcons
https://www.garlic.com/~lynn/2011.html#76 Speed of Old Hard Disks - adcons
https://www.garlic.com/~lynn/2011.html#79 Speed of Old Hard Disks - adcons

--
virtualization experience starting Jan1968, online at home since Mar1970

Utility of find single set bit instruction?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Utility of find single set bit instruction?
Newsgroups: comp.arch
Date: Mon, 17 Jan 2011 12:11:21 -0500
timcaffrey@aol.com (Tim McCaffrey) writes:
Indeed, that is exactly what the CDC systems I worked with did, and yes they were used interactively (the Cyber 170/750 supported about 200 terminals without breaking a sweat). The terminals were "line-mode", in that it was your basic dumb terminal with a very smart front end on the system.

The CDC systems also supported linker overlays. Note that all this was possible on the original IBM PC (except for supervisor/protected mode). Nobody would claim that the IBM PC supported virtual memory.

- Tim


apl\360 was somewhat similar ... however the workspaces (being swapped) were typically 16k (or sometimes 32k) bytes.

porting apl\360 to cms (under cp67 virtual machine, demand-paged, virtual memory) for cms\apl ... exposed a major issue. CMS virtual memory enormously increased available workspace for apl (enough to enable some number of "real world" applications).

a problem was apl's "garbage collection" strategy (from the swapping paradigm). every apl assignment would allocate new storage ... storage allocation rapidly touching every available workspace storage location ... until it reached the top of the workspace; it then would do garbage collection, compacting allocated storage back to the start of the workspace. Going to a demand-paged workspace, possibly 100 times larger, resulted in enormous page thrashing problems ... requiring redoing APL's storage allocation methodology for a paged environment.
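
a minimal sketch (not the apl\360 code) of that allocation pattern: a bump pointer marches through the entire workspace between collections ... harmless when the whole 16k workspace swaps as a unit, page-thrashing when a much larger workspace is demand paged:

  #include <stddef.h>

  typedef struct { char *base, *next, *top; } workspace;

  /* placeholder: the real GC compacts live values back to the bottom
     of the workspace; modeled here as a simple reset */
  static void compact(workspace *ws) { ws->next = ws->base; }

  /* every assignment allocates fresh storage from the bump pointer */
  static char *apl_alloc(workspace *ws, size_t n)
  {
      if (ws->next + n > ws->top)   /* hit top of workspace: collect */
          compact(ws);
      char *p = ws->next;
      ws->next += n;                /* touches every page on the way up */
      return p;
  }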

--
virtualization experience starting Jan1968, online at home since Mar1970

Today, the IETF Turns 25

From: lynn@garlic.com (Lynn Wheeler)
Date: 17 Jan, 2011
Subject: Today, the IETF Turns 25
Blog: IETF
Today, the IETF Turns 25
http://tech.slashdot.org/story/11/01/16/1727209/Today-the-IETF-Turns-25

from above:
Little known to the general public, the Internet Engineering Task Force celebrates its 25th birthday on the 16th of January. DNSSEC, IDN, SIP, IPv6, HTTP, MPLS ... all acronyms that were codified at the IETF. But little known, one can argue the IETF does not exist; it just happens that people meet 3 times a year in some hotel around the world and are on mailing lists in between.

... snip ...

part of the commercialization of IETF (and ISOC) has been the increasingly restrictive copyright policies related to RFC publications.

basically, earlier, original authors granted the ISOC/IETF Trust "unlimited, perpetual, non-exclusive, royalty-free world-wide rights" ... with the more recent changes, original authors grant much more limited rights to the ISOC/IETF Trust (it isn't the ISOC/IETF Trust restricting RFC use ... it is the ISOC/IETF Trust no longer requiring original authors to even grant such rights to the ISOC/IETF Trust). That is front-loading the copyright restrictions (rather than back-loading) ... in which case it might be interpreted as encouraging proprietary protocols to be published as internet RFCs (as opposed to preventing incorporation of open standards in proprietary protocols).

RFC 2026 October 1996
l. Some works (e.g. works of the U.S. Government) are not subject to copyright. However, to the extent that the submission is or may be subject to copyright, the contributor, the organization he represents (if any) and the owners of any proprietary rights in the contribution, grant an unlimited perpetual, non-exclusive, royalty-free, world-wide right and license to the ISOC and the IETF under any copyrights in the contribution. This license includes the right to copy, publish and distribute the contribution in any way, and to prepare derivative works that are based on or incorporate all or part of the contribution, the license to such derivative works to be of the same scope as the license of the original contribution.

... snip ...

and
Copyright (C) The Internet Society (1996). All Rights Reserved.

This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English.


... snip ...

and later BCP78 / RFC3978, March 2005 (with original authors retaining more rights)
The IETF policies about rights in Contributions to the IETF are designed to ensure that such Contributions can be made available to the IETF and Internet communities while permitting the authors to retain as many rights as possible. This memo details the IETF policies on rights in Contributions to the IETF. It also describes the objectives that the policies are designed to meet. This memo updates RFC 2026, and, with RFC 3979, replaces Section 10 of RFC 2026.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

The Imaginot Line

From: lynn@garlic.com (Lynn Wheeler)
Date: 17 Jan, 2011
Subject: The Imaginot Line
Blog: Facebook
The Imaginot Line
http://www.foreignpolicy.com/articles/2011/1/02/the_imaginot_line

from above:
Like military historians shaking their heads over the hubris of the Maginot Line, future historians of economic thought will make the regulatory structures that failed us in the financial crisis of 2008 and its aftermath seem like follies. They might perhaps, like tourists of the battlefields, marvel at the sturdy fortifications that were erected to guard against the dangers that had overwhelmed us in previous crisis and admire the single-mindedness with which we were determined to avoid repeating the senseless casualties inflicted on us in the Great Depression.

... snip ...

The article side-stepped the repeal of Glass-Steagall, which had kept the risky, unregulated activities of investment bankers separate from the safe, regulated depository institutions. At the end of 2008, the four largest too-big-to-fail financial institutions (another moral hazard) had $5.2T in triple-A rated toxic CDOs being carried off-balance (courtesy of the repeal of Glass-Steagall and their investment banking arms). At the time, triple-A rated toxic CDOs were going for 22 cents on the dollar; the institutions should have been required to bring them back on to the balance sheet, however they would have then been declared insolvent and liquidated. Recently disclosed Federal Reserve activity shows it has been buying huge amounts of those toxic assets at 98 cents on the dollar.

part of long-winded "fraud in gov" discussion in (linkedin) Financial Fraud group ... references that with the gov. leaning over backwards keeping the too-big-to-fail institutions afloat ... there wasn't much they could do when they were found to be laundering money for drug cartels
https://www.garlic.com/~lynn/2011.html#50

part of long-winded discussion about audit houses being sued for fraud related to financial mess (in linkedin Financial Fraud group) which covers some details about the deregulation that went on a decade ago
https://www.garlic.com/~lynn/2010q.html#29

disclaimer: spring of 2009, I was asked to WEB/HTML the digital scans of the (30s) Pecora Hearings (which resulted in Glass-Steagall) with heavy cross-indexing and correspondence between what went on in the 20s and what went on this time. Apparently there was some expectation that the new congress had an appetite to take some action. After doing lots of work, I got a call saying it wasn't needed after all.

heavy regulation is the opposite of the last decade of lax/de-regulation and financial mess.

the heavy regulation has been cited as playing a role in the S&L crisis ... eliminating any requirement for skill or qualifications for S&L executives (antithesis of Boyd OODA-loops) ... making them sitting ducks for the investment bankers that swooped in to skim off the extra money when gov cut S&L reserve requirements from 8 to 4 percent. There have been some comments that many of the same investment bankers also played significant roles in the internet IPO bubble and the recent financial mess.

--
virtualization experience starting Jan1968, online at home since Mar1970

Two terrific writers .. are going to write a book

From: lynn@garlic.com (Lynn Wheeler)
Date: 18 Jan, 2011
Subject: Two terrific writers .. are going to write a book
Blog: MainframeZone
re:
https://www.garlic.com/~lynn/2011.html#64 Two terrific writers .. are going to write a book

for the fun of it ... some past posts from last year about the 701 being the "defense calculator" with only 19 sold ... while the 650 sold 2000:
https://www.garlic.com/~lynn/2009c.html#35 Why do IBMers think disks are 'Direct Access'?
https://www.garlic.com/~lynn/2009d.html#39 1401's in high schools?
https://www.garlic.com/~lynn/2009h.html#12 IBM Mainframe: 50 Years of Big Iron Innovation
https://www.garlic.com/~lynn/2009j.html#39 A Complete History Of Mainframe Computing

as an aside ... I've scanned the old SHARE "LSRAD" report (Large Systems Requirements) from 1979 ... discussed in this post
https://www.garlic.com/~lynn/2009.html#47

it is copyright 1979 ... after the copyright law was extended (or otherwise it would be freely available). I've talked to SHARE a couple times about getting permission to put it in the SHARE part of bitsavers
http://www.bitsavers.org/pdf/ibm/share/

the ibm part of bitsavers has a treasure trove of "old" documents:
http://www.bitsavers.org/pdf/ibm/

for the fun of it, folklore mentioning back&forth between the IMS and System/R group in the 70s

basically the IMS group claiming that RDBMS took twice the physical disk space and significantly more I/O compared to the direct record pointers that were exposed as part of IMS semantics. System/R pointing out that the implicit indexes (accounting for the doubling in disk space and significant increase in disk i/os) significantly reduced people overhead (administrative re-org, and more natural semantics for applications "thinking" about data).

Going into the 80s ... there was significant reduction in disk price/mbyte ... mitigating the disk space required for RDBMS indexes, significant increase in system main memory allowing caching of indexes (reducing the disk i/os to access a record), and increases in people-time costs ... starting to shift some of the trade-offs from IMS to RDBMS. misc. past posts mentioning original SQL/relational
https://www.garlic.com/~lynn/submain.html#systemr

old reference (including old email) to Jim off-loading some stuff on me when he was leaving for Tandem ... including DBMS consulting with IMS group:
https://www.garlic.com/~lynn/2007.html#1

recent reference to doing a favor for the IMS group (about the same time as Jim leaving for Tandem, unrelated to the DBMS consulting) ... when 300 people were moved to an off-site building and faced with having to settle for remote 3270 support (doing channel extender so they could have local channel-attached 3270s back to the STL datacenter):
https://www.garlic.com/~lynn/2010o.html#55

also did something similar for the IMS field support group in Boulder when they were also moved off-site.

reference to Jim being the "father" of modern financial dataprocessing ... formalizing transaction semantics that improved auditors' confidence in computer records.
https://www.garlic.com/~lynn/2008p.html#27

IMS wiki
https://en.wikipedia.org/wiki/Information_Management_System

The above references an IBM (technical) article on the History Of IMS ... but it has gone 404 since IBM moved the (previously free) articles to the IEEE library.

And Vern Watts
http://vcwatts.org/ibm_story.html
RIP
http://vcwatts.org/

with regard to HASP: Simpson, along with Crabtree (and a couple others)

Simpson went to Harrison and did a project called RASP ... basically an MFT-like system ... but actually fixing many paged-mapped issues that have yet to be fixed in current OS/360 derivatives. He then left and became an Amdahl fellow in Dallas, recreating RASP from scratch (even tho IBM wasn't going to do anything with RASP ... there was legal action to make sure that no code had actually been directly copied).

Crabtree went to the JES group in g'burg. My wife did a stint there ... including catcher for ASP->JES3 and co-author of JESUS (JES Unified System) specification that merged all the features of JES2 and JES3 that customers couldn't live w/o; this was before she was con'ed into going to POK to be in charge of loosely-coupled architecture.

wiki mentions both Simpson and Crabtree ... but is really sparse on other stuff
https://en.wikipedia.org/wiki/Houston_Automatic_Spooling_Priority

In the 60s, as undergraduate at the univ. ... I did a lot of software changes to HASP ... including adding 2741 & TTY terminal support and implementing a conversational editor (for my own flavor of CRJE).

misc. past posts mentioning HASP &/or JES
https://www.garlic.com/~lynn/submain.html#hasp

--
virtualization experience starting Jan1968, online at home since Mar1970

Utility of find single set bit instruction?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Utility of find single set bit instruction?
Newsgroups: comp.arch
Date: Tue, 18 Jan 2011 11:24:16 -0500
Robert Myers <rbmyersusa@gmail.com> writes:
It is? I thought demand paging was just a particular way of implementing virtual memory. I sure don't want to argue about it.

In any case, whatever you call these schemes, they are all, as far as I am concerned, left over from a time when hardware looked so different that it's amazing we are still using them.

Is there any circumstance left (beyond the most elementary level of programming instruction) in which it is still useful to imagine that memory is this big, undifferentiated space that may be larger than (some layer of) physical memory?


demand paging is using virtual memory to provide the appearance of larger address space than actual physical memory.

however, there have been some implementations that just use virtual memory to re-organize physical memory.

in the 60s, 360/67 was modification to standard 360/65 with the addition of virtual memory hardware. 360/67 was originally for demand page, virtual memory, timesharing system ... tss/360. tss/360 ran into some number of problems ... so various places did their own demand page systems ... science center did cp/67 virtual machine system, UnivOfMichigan did MTS, Stanford did Orvyl ... etc.

the standard batch system for most of the 360s was os/360 ... which was a pure real-storage system. boeing huntsville had a two-processor 360/67 ... but was running it in non-virtual-memory mode with standard os/360, used for long running 2250 display graphics applications ... and os/360 had a significant problem with storage fragmentation for long running applications. the boeing people hacked os/360 to run in a single virtual address space ... the same size as the real physical storage ... with virtual memory being used to re-arrange storage addresses as a countermeasure to the severe storage fragmentation problem (and no demand paging).

multiple virtual address spaces (in aggregate larger than physical) have been used for partitioning and isolation ... in some cases for administrative issues like virtual machine server consolidation ... or for ease of deployment ... having a fixed virtual machine configuration that masks several of the underlying hardware characteristics. There are ease-of-use versus performance trade-offs here ... somewhat analogous to disk hardware that offers a flat linear record space masking the actual underlying rotational and arm-position mechanics (some past disk architectures exposed such details for an additional level of optimization).

In other cases, security. One of the countermeasures to lack of internet security is to dynamically create a virtual machine (in its own virtual address space) on the fly ... dedicated to internet browser operation ... which then dissolves at the end of the session (along with any compromises).

then there is this from long ago and far away
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

--
virtualization experience starting Jan1968, online at home since Mar1970

Date representations: Y2k revisited

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Date representations: Y2k revisited
Newsgroups: bit.listserv.ibm-main
Date: 18 Jan 2011 08:47:01 -0800
frank.swarbrick@EFIRSTBANK.COM (Frank Swarbrick) writes:
Or COBOL! Or Pascal.

there are a large number of characteristics of the C language that result in programmers tending to shoot themselves in the foot ... that are not there in other languages.

the original mainframe tcp/ip implementation had been done in vs/pascal ... and had none of the buffer-length related exploits (related to buffers and buffer indexing) that have been prevalent in C-language based implementations (it is almost as hard to shoot yourself in the foot with pascal as it is to not shoot yourself with C).
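
for illustration, the classic shape of the C problem being alluded to ... nothing in the language ties a buffer to its length:

  #include <stdio.h>
  #include <string.h>

  /* overruns dst16 whenever src is 16 bytes or longer -- the compiler
     and runtime are silent about it */
  static void unchecked(char *dst16, const char *src)
  {
      strcpy(dst16, src);
  }

  /* the bound has to be supplied (and kept correct) by the programmer */
  static void checked(char *dst16, const char *src)
  {
      snprintf(dst16, 16, "%s", src);   /* truncates instead of overrunning */
  }

  int main(void)
  {
      char buf[16];
      unchecked(buf, "short");   /* happens to fit -- this time */
      checked(buf, "this input is longer than the buffer");
      puts(buf);
      return 0;
  }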

of course there were other issues in that original vs/pascal tcp/ip implementation ... getting something like 44kbytes/sec aggregate while using loads of processor. I did do the rfc1044 enhancements ... and in some tuning tests at cray research ... got channel speed (1mbyte/sec) thruput on a 4341 using only a small amount of processor (something like a 500 times reduction in instructions executed per byte transferred). misc. past posts
https://www.garlic.com/~lynn/subnetwork.html#1044

--
virtualization experience starting Jan1968, online at home since Mar1970

digitize old hardcopy manuals

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: digitize old hardcopy manuals
Newsgroups: bit.listserv.ibm-main
Date: 18 Jan 2011 12:02:52 -0800
afg0510@VIDEOTRON.CA (Andreas F. Geissbuehler) writes:
I believe you once posted / answered a post about digitizing old hardcopy manuals. I have some 30 volumes 1980..1995 vintage, about 8'000 B&W pages to convert to PDF.

bitsavers is somewhat ad-hoc ... although they do have software that turns scanning (resulting in one file per page) into a multi-page file ... aka the reference to having used it for the old SHARE LSRAD report (done with a multi-function scanner/fax/printer):
https://www.garlic.com/~lynn/2009.html#47

Bitsaver with discussion of some scanners & software:
http://www.bitsavers.org/

IBM PDF section on bitsavers
http://www.bitsavers.org/pdf/ibm/

wayback machine (archive.org) also does a lot of scanning
http://www.archive.org/details/partnerdocs

past post mentioning spring 2009, being asked to do something with the scan of the Pecora Hearings (30s congressional hearings into the crash and depression) online at archive.org (and scanned at the Boston Public Library the previous fall), also trying to improve on the OCR
https://www.garlic.com/~lynn/2009b.html#58 OCR scans of old documents

using google's tesseract:
http://code.google.com/p/tesseract-ocr/

wiki page ... mentioning large scale scanning programs at project gutenberg, google book search, open content alliance internet archive, amazon
https://en.wikipedia.org/wiki/Book_scanning

DIY book scanner (from wiki article)
http://hardware.slashdot.org/story/09/12/13/1747201/The-DIY-Book-Scanner
http://www.wired.com/gadgetlab/2009/12/diy-book-scanner/

off-the-wall article:
http://www.labnol.org/internet/favorites/google-book-scanning-facilities-found/393/

--
virtualization experience starting Jan1968, online at home since Mar1970

Make the mainframe work environment fun and intuitive

From: lynn@garlic.com (Lynn Wheeler)
Date: 18 Jan, 2011
Subject: Make the mainframe work environment fun and intuitive
Blog: MainframeZone
The Dec1979 SHARE LSRAD report somewhat said the same thing.

As to more fun ... spring of '78, I obtained a copy of the ADVENTURE source and made it available on the internal network (larger than the arpanet/internet from just about the start until possibly late '85 or early '86), much to the dismay of many.

we had a problem with our online collection of demo programs when the corporate auditors came through. We finally got the guidelines changed from online use restricted to "official business use only" ... to "management approved use only" (allowing use of demo programs). Lots of people came away with a completely different view of online user interfaces & human factors ... after having been exposed to adventure.

However, the auditors apparently felt picked on after having been thwarted regarding the elimination of all demo programs.

In the evenings, they would do sweeps of the building looking for classified information left out and unsecured ... including the various departmental 6670s (laser printers) at locations around the building. We had modified the 6670 driver to print randomly selected quotations on the output separator page. At one 6670, they found one of the separator pages with a (randomly selected) definition of auditors. They complained the next day that we were attempting to malign them.

[Business Maxims:] Signs, real and imagined, which belong on the walls of the nation's offices:
1) Never Try to Teach a Pig to Sing; It Wastes Your Time and It Annoys the Pig.
2) Sometimes the Crowd IS Right.
3) Auditors Are the People Who Go in After the War Is Lost and Bayonet the Wounded.
4) To Err Is Human -- To Forgive Is Not Company Policy.


--
virtualization experience starting Jan1968, online at home since Mar1970

Two terrific writers .. are going to write a book

From: lynn@garlic.com (Lynn Wheeler)
Date: 19 Jan, 2011
Subject: Two terrific writers .. are going to write a book
Blog: MainframeZone
Shmuel Metz ... long experience as IBM customer and very active on ibm-main mailing list.

Last year, Shmuel asked me to canvass IBMers regarding various mainframe historical details ... as part of Shmuel cleaning up various ibm mainframe wiki entries.

One lengthy response was from Ed Lassettre, including some amount of detail of building AOS (prototype for VS2/SVS).

I had recollections of being in the 705 machine room late at night when Don Ludlow was building AOS and testing on 360/67 (before virtual memory on 370s was available). There were minimal changes needed for MVT to lay out a single 16mbyte virtual address space. The biggest change was to EXCP, to make a copy of the passed channel programs, substituting real addresses for virtual (since the channel programs passed from the EXCP caller all had virtual addresses). For this, Don had borrowed the "CCWTRANS" routine from CP67.
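
a minimal sketch (structures and translation callback hypothetical; real CCWs also carry flags/chaining, and data areas can cross page boundaries) of what a CCWTRANS-style pass does: copy the channel program, substituting real addresses for the virtual ones the application built in:

  #include <stddef.h>
  #include <stdint.h>

  typedef struct { uint8_t op; uint32_t addr; uint16_t count; } ccw;

  /* virtual->real translation, e.g. a page table lookup (hypothetical) */
  typedef uint32_t (*v2r_fn)(uint32_t vaddr);

  /* build the copy the channel will actually execute */
  static void ccwtrans(const ccw *vprog, ccw *rprog, size_t n, v2r_fn v2r)
  {
      for (size_t i = 0; i < n; i++) {
          rprog[i] = vprog[i];                  /* copy the CCW */
          rprog[i].addr = v2r(vprog[i].addr);   /* swap in the real address */
      }
  }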

recent posts in thread:
https://www.garlic.com/~lynn/2011.html#64 Two terrific writers .. are going to write a book
https://www.garlic.com/~lynn/2011.html#85 Two terrific writers .. are going to write a book

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe upgrade done with wire cutters?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe upgrade done with wire cutters?
Newsgroups: alt.folklore.computers
Date: Wed, 19 Jan 2011 14:19:33 -0500
Walter Banks <walter@bytecraft.com> writes:
The best countermeasure that I saw was IBM on a Model 360-91 owned by the university I worked for. After a scheduled maintenance all of the non IBM add on memory stopped working. Seems a ECO reversed the sense of data lines to the memory at both processor and memory ends. Non IBM memory now returned data inverted.

as undergraduate in the 60s, I had added tty/ascii terminal support to cp67. the code already supported 2741 & 1052 and had a hack to dynamically determine the type of terminal at end-of-line. adding tty/ascii, I experimented until it did the same thing. Part of it involved the 2702 being able to switch the terminal-type specific "line scanner" for each port address. For a short time, I thought it would also enable using a single dial-up number (on a "hunt group" pool of numbers) for all terminals ... which meant any terminal type might be connected to any (dial-up) port. It almost worked, except for the fact that the 2702 had taken a short cut and hard-wired the line-speed oscillator for each port (so while it was possible to dynamically change the terminal-type line scanner for each port ... it wasn't actually possible to change a port's line speed).

this somewhat prompted the univ. to do a clone controller effort ... starting with an interdata/3, reverse engineering the mainframe channel interface ... and building a channel interface board for the interdata/3. the interdata/3 was then programmed to emulate the 2702 ... with the addition that it supported dynamic line-speed recognition.

one of the first tests involved a tty/ascii terminal ... and the data that came into mainframe memory was total garbage (both the raw data and the result of the ascii->ebcdic translate operation). Turns out we had overlooked that the 2702 convention was pulling off the leading bit and putting it into the low-order bit of the byte ... proceeding until the last bit went into the high-order bit of the byte (basically reversing the order of bits between what they are on the line and how they appear in each byte). The interdata/3 was pulling off the leading bit from the line and putting it in the high-order bit position ... so the order in the byte was the same as it appeared on the line. the fix was to have the interdata/3 convert to the "bit-reversed bytes" convention of the 2702.
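
the fix amounts to reversing the bit order within each byte; a minimal sketch:

  #include <stdint.h>

  /* convert between line order and the 2702's "bit-reversed byte"
     convention: first bit off the line ends up in the low-order bit */
  static uint8_t bit_reverse(uint8_t b)
  {
      uint8_t r = 0;
      for (int i = 0; i < 8; i++) {
          r = (uint8_t)((r << 1) | (b & 1));   /* low bit out, in at other end */
          b >>= 1;
      }
      return r;
  }

  int main(void)
  {
      return bit_reverse(0x80) == 0x01 ? 0 : 1;   /* quick self-check */
  }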

this had implications for mainframe processing ... because later file upload/download (communication with PCs thru channel interfaces) ... wouldn't have the "bit-reversed bytes" convention.

past posts mentioning clone controllers
https://www.garlic.com/~lynn/submain.html#360pcm

--
virtualization experience starting Jan1968, online at home since Mar1970

HELP: I need a Printer Terminal!

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: HELP: I need a Printer Terminal!
Newsgroups: alt.folklore.computers
Date: Wed, 19 Jan 2011 22:11:52 -0500
greenaum@yahoo.co.uk (greenaum) writes:
I'd just as soon use an old PC (one with a serial port) or a new PC with a USB serial port. Terminal emulator programs are really advanced, and can happily sit there for ages dumping input to a file. Then like the guy said, laser print it when you need to.

when i was doing the hsdt backbone ... had all sorts of equipment with rs232 output to a "printer" ...

started with an ibm/xt with two ports, for a fireberd (bit error tester) and an ultramux (splitting a T1 link into a 56kbit channel for the continuously running bit error tester and the rest for data) ... and wrote a turbo pascal program to collect all the input

i wanted the collected input tagged so it had a merged view of events from the various boxes by time ... as well as being able to separate it into different threads by box/function. past references with the header section from the program (m232.pas)
https://www.garlic.com/~lynn/2003g.html#35 Mozzilla renamed to Firebird

also did some analysis/re-org of the bit-error information

other references:
https://www.garlic.com/~lynn/2001e.html#76 Stoopidest Hardware Repair Call
https://www.garlic.com/~lynn/2003g.html#36 Mozzilla renamed to Firebird
https://www.garlic.com/~lynn/2008l.html#17 IBM-MAIN longevity

--
virtualization experience starting Jan1968, online at home since Mar1970

America's Defense Meltdown

From: lynn@garlic.com (Lynn Wheeler)
Date: 20 Jan, 2011
Subject: America's Defense Meltdown
Blog: Boyd Strategy
re:
https://www.garlic.com/~lynn/2011.html#55 America's Defense Meltdown
https://www.garlic.com/~lynn/2011.html#75 America's Defense Meltdown

... some number of rebroadcasts tuesday of Eisenhower's goodbye speech (50yrs ago) warning about the military-industrial complex. as referenced, it has since been eclipsed by the FIRE (finance, insurance, real estate) lobby.

Steele had reference to recent/related Spinney article ...
http://www.phibetaiota.net/2011/01/reference-domestic-roots-of-perpetual-war/

--
virtualization experience starting Jan1968, online at home since Mar1970

The Curly Factor -- Prologue

From: lynn@garlic.com (Lynn Wheeler)
Date: 20 Jan, 2011
Subject: The Curly Factor -- Prologue
Blog: Boyd Strategy
One of Boyd's stories about US entry into WW2 was that it needed to deploy an enormous number of untrained soldiers (aka they had little idea what they were doing); as a result, the Army had to adopt a rigid, top-down, command&control infrastructure in order to leverage the few skilled resources it did have.

There is an analogy in the S&L crisis ... the S&Ls had a very rigid regulatory structure, and as a result it required no skill to be an S&L executive (the industry descriptions are quite a bit more colorful and derogatory); one just had to be capable of following all the regulations. The result was that they were sitting ducks (antithesis of the OODA-loop) when the S&L reserve requirements were reduced from 8 to 4 percent ... and the investment bankers swooped in to skim off all the money.

There is a separate issue with respect to risk management and the S&L crisis. In 1989 (near the end of the S&L crisis), there was analysis that only very minor fluctuations would be required to have Citibank's ARM (aka adjustable rate) mortgage portfolio take down the institution. As a result, the institution unloaded its total mortgage portfolio and got out of the business ... with the institution requiring a (private) bailout to stay operating. Old long-winded post from a person that was there (post from 1999)
https://www.garlic.com/~lynn/aepay3.htm#riskm

Roll forward just a couple yrs from the above post ... and citibank is one of the biggest players in triple-A rated (adjustable rate mortgage-backed securities) toxic CDOs. By the end of 2008, of the four too-big-to-fail institutions collectively holding $5.2T in triple-A rated toxic CDOs, citi is holding the most:
Bank's Hidden Junk Menaces $1 Trillion Purge
http://www.bloomberg.com/apps/news?pid=newsarchive&sid=akv_p6LBNIdw&refer=home

Ignoring all the risk and obfuscation from packaging the ARM mortgages as toxic CDOs ... underneath they are still ARM mortgages, and it was barely a decade earlier that citi had institutional knowledge that an ARM mortgage portfolio could easily take down the institution. Less than ten years of (ARM mortgage backed) toxic CDO frenzy, and citi needs another enormous bailout because of (fundamentally) its dealings in ARM mortgages (again; this time from the gov).

A periodic comment raised with regard to security software ... enormous complexity frequently is related to snake oil (obfuscation and misdirection).

And in the design of resilient systems, dealing with uncertainty frequently involves adding some amount of redundancy. An all too common recent characteristic is public company executives eliminating redundancy (possibly also deferring preventive maintenance and fiddling input to risk models) and pocketing the "savings". Frequently they are planning on being gone in a year or two ... and they are cutting corners that are in place for dealing with operations spanning multiple years. If something goes wrong before they've bailed ... there has been a pattern of using stupidity and complexity as an excuse. In the toxic CDO scenario ... there may be resort to descriptions of the horribly complex ways that ARM mortgages are twisted and contorted ... however, after all is said and done ... they are still ARM mortgages (aka smoke, mirrors, obfuscation and misdirection).

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe upgrade done with wire cutters?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe upgrade done with wire cutters?
Newsgroups: alt.folklore.computers
Date: Fri, 21 Jan 2011 14:27:22 -0500
Peter Grange <peter@plgrange.demon.co.uk> writes:
Which reminds me of the union leader back in the bad old days:- "The management have offered 5%. 5% of nothing is nothing. We want 10%".

or the executive that was losing $5 on every sale and said he would make it up on volume.

--
virtualization experience starting Jan1968, online at home since Mar1970

History of copy on write

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: History of copy on write
Newsgroups: comp.arch
Date: Fri, 21 Jan 2011 16:36:17 -0500
John Levine <johnl@iecc.com> writes:
For a project I've been doing, I've been trying to trace the history of copy-on-write in operating systems. It's ubiquitous now, but it took surprisingly long for a lot of systems to make their addressing hardware support it. Most notably, the IBM 360/67 had virtual memory in 1967, but IBM mainframes couldn't do copy on write until the mid 1990s.

I put it on my blog so I can update it as I fill in the gaps. Comments welcome.

http://obvious.services.net/2011/01/history-of-copy-on-write-memory.html


I'm pretty sure that the unix-under-vm370 projects in the early to mid 80s were supporting copy-on-write. It is also possible that the tss/370 SSUP layer for AT&T unix in the early to mid 80s also supported copy-on-write.

... a different kind of copy-on-write (from mid-70s):

also ... original 370 virtual memory architecture had provisions for segment protect. when the retrofit of virtual memory hardware to the 370/165 ran into schedule problems ... several features were dropped from 370 virtual memory ... including segment protect. the other 370 models and any software had to drop back to working w/o the dropped features.

vm370 was implementing cms protected "shared" segments using the new "segment protect" facility ... but when the feature was dropped ... they had to fall back to a really funky/ugly hack using "storage protect keys" (they had to fiddle the virtual machine's psw and virtual storage protect keys ... to something different than what the virtual machine had set).

in release 2, there was virtual machine assist (VMA) microcode for the 370/158 and 370/168 ... which had hardware support for some privileged instructions to be directly executed in virtual machine mode. this included the lpsw and ssk instructions. however, this was incompatible with the storage key hack for shared segments ... so VMA couldn't be activated for CMS users (with shared segments).

A hack was done for vm370 release 3 ... that allowed virtual machines to be run w/o the storage protect hack (and with VMA active). Cross-user integrity was preserved by having the system scan all shared pages (whenever switching to a different user); if any were found to be changed ... the previously running user had the changed page made private ... and the remaining users then refreshed an unchanged page from disk.
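
a minimal model of that scan-at-switch scheme (types and callbacks hypothetical): at each switch away from a user, any changed shared page becomes that user's private copy and the shared copy is refreshed from disk:

  #include <stdbool.h>
  #include <stddef.h>

  typedef struct {
      void *frame;
      bool  changed;    /* the hardware changed bit, modeled as a flag */
  } page;

  static void scan_shared(page *shared, size_t npages,
                          void (*make_private)(page *),      /* hypothetical */
                          void (*refresh_from_disk)(page *)) /* hypothetical */
  {
      for (size_t i = 0; i < npages; i++) {
          if (shared[i].changed) {
              make_private(&shared[i]);       /* previous user keeps its changes */
              refresh_from_disk(&shared[i]);  /* everyone else gets a clean copy */
              shared[i].changed = false;
          }
      }
  }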

The trade-off was that the benefit of running with VMA turned on (and not performing the storage protect key hacks) ... more than offset the changed-page scanning overhead. At the time the decision was made ... the typical CMS user ran with only 16 shared pages. However, also part of vm370 release 3 ... was a significant increase in the number of shared pages that the typical CMS user ran with (inverting the trade-off measures) ... aka the scanning/vma decision was made in isolation from changing vm370 release 3 to have a lot more shared pages.

By the time the issue escalated ... it was claimed to be too late. Some number of (cms intensive) customers had been advised of the upcoming change to support VMA for CMS users ... and they had already purchased the VMA hardware upgrade for 370/168 (at substantial price). Nobody was willing to tell those customers 1) that they shouldn't actually run CMS shared segment users with vma turned on and/or 2) that shared segment CMS wouldn't actually ship with VMA support (since it was no longer a performance benefit).

This was also a pure single-processor environment ... when multiprocessor support was shipped ... it was then necessary to have a unique copy of all shared pages for every processor (otherwise more than one user could be running concurrently with the same shared pages ... where either could corrupt the environment of the other). Now in addition to having to scan the ever increasing number of shared pages (before switching to a different user) ... the next user to be dispatched had to have its tables scanned so that they were pointing to the set specific to the processor that the user would be executing on.

the increase in number of (vm370 release 3) shared pages ... came from a small subset of changes that I had converted from cp67 to vm370 ... some old email:
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

some recent posts mentioning storage protect key hack &/or segment protect being dropped from 370 virtual memory architecture (because of 370/165 hardware issues):
https://www.garlic.com/~lynn/2011.html#44 CKD DASD
https://www.garlic.com/~lynn/2011.html#74 shared code, was Speed of Old Hard Disks - adcons

--
virtualization experience starting Jan1968, online at home since Mar1970

History of copy on write

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: History of copy on write
Newsgroups: comp.arch
Date: Fri, 21 Jan 2011 20:30:37 -0500
EricP <ThatWouldBeTelling@thevillage.com> writes:
We ported some software from VAX/VMS to VM/CMS about 1984 that used shared memory to communicate between processes. The problem was getting VM to _NOT_ COW the writable shared segment pages. I was told by the person who discovered the appropriate hack that it came down to flipping a bit in the PTE.

re:
https://www.garlic.com/~lynn/2011.html#96 History of copy on write

original relational/sql was done in the 70s on vm370 (370/145) in bldg. 28. Part of the internal extensions to vm370 for system/r was "DWSS" ... dynamic writeable shared segments (communication between multiple processes). In the early 80s, for awhile, DWSS was part of the tech transfer from bldg. 28 to Endicott for SQL/DS ... but for whatever reason, Endicott eventually decided to not use it.

misc. past posts mentioning system/r
https://www.garlic.com/~lynn/submain.html#systemr

misc. past posts mentioning DWSS:
https://www.garlic.com/~lynn/2000.html#18 Computer of the century
https://www.garlic.com/~lynn/2000b.html#55 Multics dual-page-size scheme
https://www.garlic.com/~lynn/2004f.html#23 command line switches [Re: [REALLY OT!] Overuse of symbolic
https://www.garlic.com/~lynn/2004f.html#26 command line switches [Re: [REALLY OT!] Overuse of symbolic
https://www.garlic.com/~lynn/2006t.html#16 Is the teaching of non-reentrant HLASM coding practices ever defensible?
https://www.garlic.com/~lynn/2006t.html#39 Why these original FORTRAN quirks?
https://www.garlic.com/~lynn/2006w.html#11 long ago and far away, vm370 from early/mid 70s
https://www.garlic.com/~lynn/2006y.html#26 moving on
https://www.garlic.com/~lynn/2007f.html#14 more shared segment archeology
https://www.garlic.com/~lynn/2009q.html#19 Mainframe running 1,500 Linux servers?

--
virtualization experience starting Jan1968, online at home since Mar1970

History of copy on write

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: History of copy on write
Newsgroups: comp.arch
Date: Fri, 21 Jan 2011 22:17:55 -0500
John Levine <johnl@iecc.com> writes:
Not to cavil or anything, but the ESA/390 Principles of Operation (of which I have a quaint paper copy) is quite clear that up through ESA/370 a program couldn't restart after a fault due to writing a read-only page. So I believe it did something, but unless the manuals are lying 1984 IBM mainframes didn't have the hardware ability to do COW. COT sure, but not COW.

re:
https://www.garlic.com/~lynn/2011.html#96 History of copy on write
https://www.garlic.com/~lynn/2011.html#97 History of copy on write

until they put "page protect" into the virtual memory architecture ... after having dropped "segment protect" from original 370 virtual memory (before even shipping) ... vm370 was doing protection with storage keys (and then the vm370 release 3 mechanism effectively did copy after write ... anybody could write ... but if they did, they were given a private copy, and the unchanged public copy was refreshed from disk).

(360/)370 storage key protect says "store protect" suppresses the instruction.

online esa/390
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/CCONTENTS?SHELF=EZ2HW125&DN=SA22-7201-04&DT=19970613131822

states some flavor of "segment protect" eventually shipped in 370 ... but was dropped for 370/xa and replaced with "page protect"
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/F.1.4?SHELF=EZ2HW125&DT=19970613131822&CASE=

(new) esa/390 "suppression on protect" (for page protect) is useful for aix/esa copy-on-write
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/3.4.5?SHELF=EZ2HW125&DT=19970613131822
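
a rough POSIX analogy (Linux-flavored sketch, not the esa/390 mechanism) of why suppression matters for copy-on-write: the protection fault must suppress the store, so that after the handler fixes the page the store can simply re-execute (a real COW would hand the writer a fresh private frame; this sketch just unprotects in place):

  #include <signal.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  static char *page;
  static long pagesz;

  /* the fault suppressed the store; fix the page and return -- the
     suppressed store then re-executes and succeeds */
  static void on_fault(int sig)
  {
      (void)sig;
      mprotect(page, (size_t)pagesz, PROT_READ | PROT_WRITE);
  }

  int main(void)
  {
      pagesz = sysconf(_SC_PAGESIZE);
      page = mmap(NULL, (size_t)pagesz, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      memset(page, 0x42, (size_t)pagesz);
      mprotect(page, (size_t)pagesz, PROT_READ);   /* "store protect" the page */

      struct sigaction sa;
      memset(&sa, 0, sizeof sa);
      sa.sa_handler = on_fault;
      sigaction(SIGSEGV, &sa, NULL);

      page[0] = 0x01;   /* faults; handler unprotects; store re-executes */
      return page[0] == 0x01 ? 0 : 1;
  }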

Melinda's history mentions 370 finally having segment protect introduced in 1982 (hardly a year before 370/xa became available in early 83, which replaced segment protect with page protect) ... melinda's pages have moved:
http://www.leeandmelindavarian.com/Melinda#VMHist

a decade after 370 virtual memory announce
https://en.wikipedia.org/wiki/IBM_System/370

... with the corporation's continued efforts to kill off vm370 ... the architecture became somewhat less general/"clean" and more & more tailored around POK's favorite son operating system.

--
virtualization experience starting Jan1968, online at home since Mar1970



