List of Archived Posts

2011 Newsgroup Postings (03/13 - 04/05)

Mainframe technology in 2011 and beyond; who is going to run these Mainframes?
Multiple Virtual Memory
Multiple Virtual Memory
Multiple Virtual Memory
Multiple Virtual Memory
Multiple Virtual Memory
IBM100 - Rise of the Internet
I actually miss working at IBM
Multiple Virtual Memory
I actually miss working at IBM
Multiple Virtual Memory
Multiple Virtual Memory
Multiple Virtual Memory
Multiple Virtual Memory
Multiple Virtual Memory
At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
End of an era
At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
Multiple Virtual Memory
End of an era
Multiple Virtual Memory
The first personal computer (PC)
The first personal computer (PC)
Multiple Virtual Memory
Multiple Virtual Memory
Multiple Virtual Memory
Intepreted Languages
Multiple Virtual Memory
vm370 running in "XA-mode"
"Social Security Confronts IT Obsolescence"
SNA/VTAM Misinformation
The real cost of outsourcing (and offshoring)
SNA/VTAM Misinformation
junking CKD; was "Social Security Confronts IT Obsolescence"
On Protectionism
junking CKD; was "Social Security Confronts IT Obsolescence"
On Protectionism
Back to architecture: Analyzing NYSE data
On Protectionism
On Protectionism
Multiple Virtual Memory
junking CKD; was "Social Security Confronts IT Obsolescence"
junking CKD; was "Social Security Confronts IT Obsolescence"
junking CKD; was "Social Security Confronts IT Obsolescence"
junking CKD; was "Social Security Confronts IT Obsolescence"
junking CKD; was "Social Security Confronts IT Obsolescence"
On Protectionism
On Protectionism
junking CKD
On Protectionism
IBM100 - Rise of the Internet
You almost NEVER see these for sale, own a 360 console
Downloading PoOps?
junking CKD; was "Social Security Confronts IT Obsolescence"
In your opinon, what is the highest risk of financial fraud for a corporation ?
SNA/VTAM Misinformation
Collection of APL documents
The first personal computer (PC)
In your opinon, what is the highest risk of financial fraud for a corporation ?
End of an era
3090 ... announce 12Feb85
Collection of APL documents
End of an era
End of an era
What is the maximum clock rate given the state of today's technology?
Other early NSFNET backbone
Other early NSFNET backbone
The first personal computer (PC)
Other early NSFNET backbone
Fraudulent certificates issued for major websites
Collection of APL documents
History--Early Bell System teletypes
The first personal computer (PC)
I'd forgotten what a 2305 looked like
Internet pioneer Paul Baran
Internet pioneer Paul Baran
Internet pioneer Paul Baran
I'd forgotten what a 2305 looked like
Which building at Berkeley?
Collection of APL documents
What is your most memorable Mainframe security bug, breach or lesson learned?
History of APL -- Software Preservation Group
New job for mainframes: Cloud platform
The first personal computer (PC)
The first personal computer (PC)
Scientists use maths to predict 'the end of religion' - Repost
Would mainframe technology be relevant in the age of cloud computing?
Mainframe passwords synced to active directory
PDCA vs. OODA
Mainframe Fresher
PDCA vs. OODA
Itanium at ISSCC
coax (3174) throughput
VM IS DEAD

Mainframe technology in 2011 and beyond; who is going to run these Mainframes?

From: lynn@garlic.com (Lynn Wheeler)
Date: 13 Mar, 2011
Subject: Mainframe technology in 2011 and beyond; who is going to run these Mainframes?
Blog: Mainframe Zone
re:
https://www.garlic.com/~lynn/2011d.html#79 Mainframe technology in 2011 and beyond; who is going to run these Mainframes?

for the fun of it ... from 2yrs ago (the army's version of the predator has a better record, with landings being done on autopilot, than the air force does with manual landings)

USAF officers slammed for pranging Predators on manual; 'Xbox flyer' sergeants + autopilots do better
http://www.theregister.co.uk/2009/04/29/young_usaf_predator_pilot_officer_slam/

my wife had been conned into going to POK to be in charge of loosely-coupled architecture. While there she did "Peer-Coupled Shared Data" architecture, which except for IMS hot-standby saw very little uptake (until sysplex & parallel sysplex) ... contributing to her not remaining long in the position. misc. past posts
https://www.garlic.com/~lynn/submain.html#shareddata

A few years ago we were periodically visiting one of the largest financial transaction operations and they attributed their 100% availability over an extended number of years to:

• ims hotstandby (had triple-replicated operation with geographic separation)
• automated operator

as other kinds of failures have been addressed ... what is left are major environmental outages and human mistakes.

--
virtualization experience starting Jan1968, online at home since Mar1970

Multiple Virtual Memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple Virtual Memory
Newsgroups: alt.folklore.computers
Date: Sun, 13 Mar 2011 14:25:28 -0400
Mike Hore <mike_horeREM@OVE.invalid.aapt.net.au> writes:
Ummm, I think it goes a loooong way further back - IBM had a tradition of avoiding words that made computers sound in any way human. I just checked the 701 manual (from Bitsavers), dated 1953, and they refer to "electrostatic storage". Likewise the 704 manual talks about "core storage". Memory has always been "storage" in IBM-speak.

re:
https://www.garlic.com/~lynn/2011d.html#71 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#72 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#74 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#81 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#82 Multiple Virtual Memory

the other issue contributing to the ambiguity in that period was rotational drums as computer memory ("storage") ... as well as the craft/art of carefully placing instructions on the drum surface to maximize instructions per revolution:
http://www.columbia.edu/cu/computinghistory/650.html
https://en.wikipedia.org/wiki/IBM_650

and 701
https://en.wikipedia.org/wiki/IBM_701

the scarcity of electronic storage/memory contributed to the CKD storage architecture for 360 ... misc. past posts
https://www.garlic.com/~lynn/submain.html#dasd

360 CKD had i/o programs and arguments all in processor memory ... which were sequentially fetched (in some cases repeatedly) ... requiring dedicated I/O resources during i/o operations. the paradigm allowed file&library directories resident on disks and i/o programs that searched the disk-resident directories for specific files/members (the i/o "search" operation would scan the disk-resident directory entries; for each entry, it would fetch the match argument from processor memory/storage for comparison ... repeating the process for each directory entry until a match was found). this traded off relatively abundant i/o resources for extremely scarce electronic memory/storage.
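
A minimal Python sketch of the trade-off just described (hypothetical structures for illustration; real CKD channel programs are CCW chains, not Python): for every directory record that rotates under the head, the match argument is re-fetched from processor storage, and the channel/control-unit/device path stays busy for the entire scan.

# toy model of a CKD "search key equal" scan over a disk-resident directory
# (hypothetical names for illustration; not actual channel-program syntax)

DIRECTORY_TRACK = ["PAYROLL", "INVNTRY", "REPORTQ", "MYPROG"]   # keys recorded on disk
ROTATION_PER_RECORD = 1.0 / len(DIRECTORY_TRACK)                # fraction of a revolution

def search_key_equal(track, wanted):
    io_path_busy = 0.0               # channel/control-unit/device busy (revolutions)
    for key_on_disk in track:        # each directory record rotates under the head...
        io_path_busy += ROTATION_PER_RECORD
        # ...and the match argument is re-fetched from processor storage for
        # every compare -- trading (then) cheap i/o busy time for scarce memory
        if key_on_disk == wanted:
            return io_path_busy
    return io_path_busy              # not found: a full revolution consumed

print(search_key_equal(DIRECTORY_TRACK, "MYPROG"))   # ~1.0 revolution, i/o path busy throughout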

however, I've pontificated frequently that by at least the mid-70s, the trade-off was starting to invert ... with the dedicated i/o resources for CKD (& search) i/o programming becoming a major system bottleneck. In a later example, I claimed that between the 360/67 and 3081 time-frames the relative disk system thruput had declined by an order of magnitude (processor & memory resources increased by 40-50 times while disk thruput increased by only 3-5 times) ... and the paradigm needed to change ... increasingly using electronic memory to compensate for the disk bottleneck ... old post with reference:
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
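
The arithmetic behind the order-of-magnitude claim, just restating the 40-50x and 3-5x figures above (illustrative numbers, not new measurements):

cpu_mem_growth = 45      # processor & real-storage growth, 360/67 -> 3081 (roughly 40-50x)
disk_growth    = 4       # disk subsystem thruput growth over the same period (roughly 3-5x)
relative_disk  = disk_growth / cpu_mem_growth
print(f"disk thruput relative to the rest of the system: {relative_disk:.2f}x")  # ~0.09, i.e. roughly a 10x relative decline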

some disk division executives took offense at my claims and assigned their performance group to refute them. a few weeks later they came back and effectively said that I had slightly understated the situation. the analysis was then turned into a (user group) SHARE presentation on organizing disks for improved thruput (B874 at SHARE 64, 18Aug84)
https://www.garlic.com/~lynn/2001l.html#46 MVS History (all parts)
https://www.garlic.com/~lynn/2006f.html#3 using 3390 mod-9s
https://www.garlic.com/~lynn/2006o.html#68 DASD Response Time (on anti

As an undergraduate I watched as IBMers did os/360 sysgens up thru release 9.5. The univ. turned it over to me starting with release 11 ... but rather than do a straight sysgen, I started doing highly modified operations ... re-organizing all the copy/move statements from the distribution material to the new system disks ... so that the system components were carefully ordered on disk to optimize arm motion during system operation (achieving nearly a three times increase in thruput for the typical univ. workload).

In addition to doing a lot of pathlength work and new features for cp67 ... I also replaced the disk & drum FIFO operation with ordered arm scheduling and multiple request chaining. CP67 FIFO paging on the 2301 drum would peak around 80 pages/sec. With ordered multiple requests, I could get nearly 300 pages/sec. Similarly, changing disk operation from FIFO nearly doubled typical thruput ... along with much more graceful degradation as workload increased (the advantage of ordered seek queuing tended to increase as the length of the queue increased). Some of this shows up in an old SHARE presentation that I made in '68
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
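
A sketch of the ordered-seek idea (not the cp67 code; the queued cylinder numbers are made up): comparing total arm travel for FIFO vs ordered (elevator-style) servicing of the same request queue.

# compare total arm travel for FIFO vs ordered (elevator-style) servicing
# of the same request queue -- a sketch of the idea, not the cp67 code

def fifo_travel(start, queue):
    pos, travel = start, 0
    for cyl in queue:                       # service strictly in arrival order
        travel += abs(cyl - pos)
        pos = cyl
    return travel

def ordered_travel(start, queue):
    pos, travel, pending = start, 0, sorted(queue)
    up = True
    while pending:
        # sweep in the current direction, taking the nearest pending request
        candidates = [c for c in pending if (c >= pos) == up] or pending
        nxt = min(candidates, key=lambda c: abs(c - pos))
        travel += abs(nxt - pos)
        pos = nxt
        pending.remove(nxt)
        up = pos <= max(pending, default=pos)   # reverse when nothing remains above
    return travel

queue = [183, 37, 122, 14, 124, 65, 67]     # cylinder numbers of queued requests
print(fifo_travel(53, queue), ordered_travel(53, queue))   # ordered travel is much smaller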

In the early 70s at the science center, I did a paged-mapped flavor of the CMS filesystem ... trying to avoid the shortcomings that I saw in the tss/360 single-level-store implementation (and considered what I was doing better than what was being formulated for Future System). While lots of the stuff leaked out in product releases ... the paged-mapped stuff never did ... old email mentioning doing the port of the work from cp67 to vm370 ... and doing my own "csc/vm" product distribution for internal customers:
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

misc. past post mentioning paged map filesystem work
https://www.garlic.com/~lynn/submain.html#mmap

--
virtualization experience starting Jan1968, online at home since Mar1970

Multiple Virtual Memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple Virtual Memory
Newsgroups: alt.folklore.computers
Date: Sun, 13 Mar 2011 21:28:29 -0400
Bill Findlay <news@findlayw.plus.com> writes:
So, hearsay?

Can you suggest why IBM bought out the patents for paging from Manchester University, if they believed that it didn't work?


re:
https://www.garlic.com/~lynn/2011d.html#81 Multiple Virtual Memory

didn't say that it didn't work ... said that it wasn't working well (the statement would have predated the cp40 work in 1966); then TSS/360 single-level-store turned out to be not working well ... and I would claim that cp40 & cp/67, while somewhat better than TSS/360, also didn't work very well until I rewrote the cp67 implementation ... including global LRU replacement ... past post/reference to global LRU
https://www.garlic.com/~lynn/subtopic.html#wsclock

assuming that TSS/360 and/or CP67 at least used what was known from Atlas ... that would be evidence that none of them were working all that well. I would contend that at least quality page thrashing control technology and quality page replacement strategies were still lacking in the mid-60s.

page thrashing was still an issue in the late 60s (i.e. it apparently hadn't been addressed earlier) ... both in academia and in what I was doing: various mechanisms for adding effective controls to limit page thrashing (especially in large/high multitasking environments). The academic page thrashing controls from the late 60s were coupled with local LRU replacement algorithms. I was doing the same thing simultaneously, but coupled page thrashing controls with global LRU replacement algorithms.

since I don't have access to the material that would have prompted the original statement ... I can only make some inference about the early 60s state-of-the-art ... based on the subsequent state-of-the-art from the mid-60s and late-60s.

The dustup over the stanford PHD thesis on global LRU in the early 80s showed that the local vis-a-vis global page replacement issue was still going on nearly 20yrs after Atlas (although I had done my work less than a decade after Atlas).

CP67 Release 1, delivered to the univ. in Jan68, appeared to have lots of stuff from CTSS ... although CTSS swapped tasks ... it didn't page. CP67 Release 1 totally lacked page thrashing control and basically used FIFO replacement.

CP67 Release 2 was shipped with changes from Lincoln Labs that drastically simplified dispatching and reduced overhead ... also added primitive page thrashing controls (fixed limit on multitasking based on real storage size). Page replacement was still pretty much FIFO.

I put in a form of dynamic adaptive working set page thrashing control (but different from what was going on in academia and published in ACM at the time) as well as global LRU replacement.
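
A minimal sketch of the general reference-bit/global-LRU idea (a clock-style approximation sweeping all resident frames regardless of which virtual address space owns them; not the actual cp67 code):

# global "clock" approximation of LRU over ALL resident frames (any user),
# using hardware reference bits -- a sketch of the idea, not the cp67 code

class Frame:
    def __init__(self, owner, vpage, referenced):
        self.owner = owner             # which virtual address space
        self.vpage = vpage
        self.referenced = referenced   # hardware reference bit

def select_victim(frames, hand):
    # sweep all real frames regardless of owner (global, not per-task local)
    while True:
        f = frames[hand]
        if not f.referenced:
            return hand                # unreferenced since last sweep: replace it
        f.referenced = False           # referenced: clear the bit, give it another pass
        hand = (hand + 1) % len(frames)

frames = [Frame("user%d" % (i % 2), i, i != 2) for i in range(4)]
print(select_victim(frames, 0))        # picks frame 2, whichever user owns it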

misc. stuff:
https://en.wikipedia.org/wiki/Belady%27s_anomaly

this has the 1966 IBM Systems Journal paging article by Belady, which mentions including some part of Atlas in simulation but doesn't provide any substantial description (and there is no mention of page thrashing controls):
http://www.google.com/url?sa=t&source=web&cd=4&ved=0CDwQFjAD&url=http%3A%2F%2Fusers.informatik.uni-halle.de%2F~hinnebur%2FLehre%2FWeb_DBIIb%2Fuebung3_belady_opt_buffer.pdf&rct=j&q=page%20replacement%20history%20belady%20atlas&ei=1Wh9TZSELMSw0QG00KDpAw&usg=AFQjCNHSgCh7YUNp6jRtJwDFsoHKFAgskw&cad=rja

IBM has moved all its online Systems and R&D Journals to IEEE ... and accessing them now requires IEEE membership (or being a current IBM employee). This is the 1981 "History of Memory Management" by Belady, Parmelee, and Scalzi (Parmelee was at the science center in the 60s and is mentioned in Melinda's history).
http://domino.research.ibm.com/tchjr/journalindex.nsf/0b9bc46ed06cbac1852565e6006fe1a0/39ddbeca15ffafed85256bfa0067f4d7!OpenDocument
article at IEEE
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5390584

I just found this reference:
http://www.ics.uci.edu/~bic/courses/JaverOS/ch8.pdf

from above:
Paging can be credited to the designers of the ATLAS computer, who employed an associative memory for the address mapping [Kilburn, et al., 1962]. For the ATLAS computer, |w| = 9 (resulting in 512 words per page), |p| = 11 (resulting in 2048 pages), and f = 5 (resulting in 32 page frames). Thus a 2^20-word virtual memory was provided for a 2^14-word machine. But the original ATLAS operating system employed paging solely as a means of implementing a large virtual memory; multiprogramming of user processes was not attempted initially, and thus no process id's had to be recorded in the associative memory. The search for a match was performed only on the page number p.

... snip ...

The science center initial CP40 implementation was adding associative virtual memory hardware to 360/40. A difference between the Atlas hardware implementation and the implementation for 360/40 was that the 360/40 implementation included a process identifier.

The implication from the above could be that ATLAS totally swapped all virtual pages anytime it switched users/virtual-address spaces ... not attempting concurrent users/tasks. Such an implementation would also not need a dynamic limit on concurrently executing tasks as a page thrashing control (aka some form of working set control). If it did LRU replacement, there would be no difference between local & global.

--
virtualization experience starting Jan1968, online at home since Mar1970

Multiple Virtual Memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple Virtual Memory
Newsgroups: alt.folklore.computers
Date: Sun, 13 Mar 2011 22:49:37 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
The science center initial CP40 implementation was adding associative virtual memory hardware to 360/40. A difference between the Atlas hardware implementation and the implementation for 360/40 was that the 360/40 implementation included a process identifier.

re:
https://www.garlic.com/~lynn/2011d.html#81 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#82 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#2 Multiple Virtual Memory

360/67 had CR0 as the segment table pointer (or STO, segment table origin) and eight associative array entries. on task switch, CR0 was reloaded for the different virtual address space ... which would automatically invalidate all array entries (individual entries didn't carry a process id).

370 moved the STO to CR1 (using CR0 bits to specify 2kbyte or 4kbyte pages and 64kbyte or 1mbyte segments). The 370/165 had a 7-entry "STO stack" and a 128-entry look-aside buffer (instead of an associative array) that was four-way associative; five bits from the virtual address indexed one of 32 sets of four entries ... and lookup was then done on those four entries. Each entry had a 3-bit virtual space identifier (mapped to the 7-entry "STO stack") and the virtual-to-real address translation. Loading a new CR1 STO (task/virtual address space switch) would find its corresponding entry in the STO-stack and then use that 3-bit identifier. If it wasn't already in the STO-stack ... one of the seven entries would be chosen for replacement and all TLB entries with that 3-bit identifier invalidated.
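
A rough Python model of that 370/165 lookup as described above (my reconstruction for illustration, with a simplified set index and simplified STO-stack replacement; not hardware documentation):

# rough model of the 370/165 TLB lookup described above
# (a reconstruction for illustration, not hardware documentation)

PAGE_SHIFT = 12                 # 4k pages

class TLB165:
    def __init__(self):
        self.sto_ids = {}                         # STO value -> 3-bit space id (0..6)
        self.free_ids = list(range(7))
        self.sets = [[] for _ in range(32)]       # 32 sets x up to 4 entries each

    def space_id(self, sto):
        if sto not in self.sto_ids:
            if not self.free_ids:                 # STO stack full: evict one STO and
                old_sto, old_id = next(iter(self.sto_ids.items()))
                del self.sto_ids[old_sto]         # invalidate every TLB entry tagged
                self.sets = [[e for e in s if e[0] != old_id] for s in self.sets]
                self.free_ids.append(old_id)      # with that 3-bit identifier
            self.sto_ids[sto] = self.free_ids.pop(0)
        return self.sto_ids[sto]

    def lookup(self, sto, vaddr):
        vpage = vaddr >> PAGE_SHIFT
        sid = self.space_id(sto)
        tset = self.sets[vpage & 0x1f]            # 5 virtual-address bits pick 1 of 32 sets
        for sid_e, vpage_e, rpage in tset:        # compare the (up to) 4 entries in the set
            if sid_e == sid and vpage_e == vpage:
                return rpage                      # TLB hit
        return None                               # miss -> hardware walks segment/page tables

    def install(self, sto, vaddr, rpage):
        vpage = vaddr >> PAGE_SHIFT
        tset = self.sets[vpage & 0x1f]
        if len(tset) == 4:
            tset.pop(0)                           # replace one of the four entries
        tset.append((self.space_id(sto), vpage, rpage))

tlb = TLB165()
tlb.install(sto=0x4000, vaddr=0x356000, rpage=77)
print(tlb.lookup(0x4000, 0x356123))   # same address space, same page -> 77
print(tlb.lookup(0x8000, 0x356123))   # different address space (STO) -> None (miss)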

MVS design had 8mbytes for the kernel and supposedly 8mbytes for the application ... so one of the TLB index bits was the 8mbyte bit (allowing 64 TLB entries for the kernel and 64 TLB entries for the application). some more detail discussed here
https://www.garlic.com/~lynn/2011d.html#72 Multiple Virtual Memory

however CMS started at low addresses and worked up, and most applications of the period rarely got over 1mbyte ... so at least half the 370/165 TLB entries would typically go unused in a CMS-intensive environment.

I'm giving my 7Oct1986 (European user group) SEAS "VM Performance History" presentation next weds. at the local (user group) HILLGANG meeting. The original was in (foildoc) script (GML) ... GML controls for what was "large" print and the backup "small" print text (aka notes). It would be printed on plain paper for handouts ... and then just the "foils" section would be printed on plain paper ... the "foils" would then be run thru a foil copier. Early foil copiers had a transparency manually laid over the plain paper copy and then run through a machine that heated the combination, transferring the black lettering to the transparency. Later on, you could load a stack of transparencies in the copier machine and make transparency copies similar to the way regular copies were made.

In any case, I've been manually cut&pasting the GML image into powerpoint. Current notes from a "Work as undergraduate" foil:

Over the two years that I worked on CP/67 at WSU, I designed and implemented numerous modifications to CP and CMS, many in the area of performance (I was also very active in several other areas; in editors, I modified the standard CMS editor to drive a 2250-3 for full-screen support, and I also rewrote the editor to be completely re-entrant, embedding it in HASP for CRJE support). I wrote the original ASCII terminal support for CP, and someplace I am blamed with being part of the team that developed an OEM control unit for IBM 360s.

In the performance arena, I worked on several areas: a) generalized pathlength reduction, b) fastpath - specialized paths for the most frequently encountered cases, c) control data structures that would minimize CPU overhead, d) identifying closed CP/67 subroutines, modifying them to use a pre-allocated savearea in page 0, and changing their callers to use BALR rather than SVC, e) improving the page replacement algorithm to use reference bits & global LRU (rather than FIFO), f) implementing feedback/feedforward controls in decision making. The dispatcher changes implemented code that implicitly took advantage of knowing which virtual machines might require status updates. CPEXBLOKs were also placed on a master chain instead of being chained off the UTABLE. Finally, an explicit in-q chain was created.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

Multiple Virtual Memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple Virtual Memory
Newsgroups: alt.folklore.computers
Date: Sun, 13 Mar 2011 23:11:17 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
The science center initial CP40 implementation was adding associative virtual memory hardware to 360/40. A difference between the Atlas hardware implementation and the implementation for 360/40 was that the 360/40 implementation included a process identifier.

re:
https://www.garlic.com/~lynn/2011d.html#81 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#82 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#2 Multiple Virtual Memory

the previous post discussed the 360/67 associative array ... hardware dynamic LRU replacement of one of the eight associative array entries from the segment & page table entries in real storage. The 370/165 table look-aside buffer was somewhat similar, but with hardware dynamic LRU replacement of one of the four indexed entries (for a virtual address not currently loaded in the hardware).
https://www.garlic.com/~lynn/2011e.html#3 Multiple Virtual Memory

the 360/40 implementation was different ... it had one hardware associative entry for each real page. when a virtual page was "assigned" to a real page, the corresponding real page entry was loaded with the virtual page number along with the process (i.e. virtual address space) ID (making the 360/40 and Atlas hardware implementations apparently very similar, except for the 360/40 having a process id ... as noted above)

from Melinda's history:
Virtual memory on the 360/40 was achieved by placing a 64-word associative array between the CPU address generation circuits and the memory addressing logic. The array was activated via mode-switch logic in the PSW and was turned off whenever a hardware interrupt occurred.

The 64 words were designed to give us a relocate mechanism for each 4K bytes of our 256K-byte memory. Relocation was achieved by loading a user number into the search argument register of the associative array, turning on relocate mode, and presenting a CPU address. The match with user number and address would result in a word selected in the associative array. The position of the word (0-63) would yield the high-order 6 bits of a memory address. Because of a rather loose cycle time, this was accomplished on the 360/40 with no degradation of the overall memory cycle. The modifications to the 360/40 would prove to be quite successful, but it would be more than a year before they were complete. Dick Bayles has described the process that he and Comeau and Giesin went through in debugging the modifications.


... snip ...

the 360/40 modifications and cp40 were done well before availability of the standard 360/67 product with virtual memory. when the 360/67 became available, cp40 morphed into cp67 (and handling of virtual memory was modified to correspond to the 360/67 hardware).
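
A toy model of the 360/40 arrangement in that excerpt (one associative entry per 4k real page of 256k storage, matched on user number plus virtual page; the user and frame numbers below are illustrative):

# toy model of the 360/40 relocation array: 64 entries, one per 4k real page
# of 256k real storage, each holding (user number, virtual page number)

REAL_PAGES = 64                              # 256k / 4k

array = [None] * REAL_PAGES                  # index = real page frame number

def assign(user, vpage, frame):
    array[frame] = (user, vpage)             # virtual page placed in that real frame

def translate(user, vaddr):
    vpage, offset = vaddr >> 12, vaddr & 0xfff
    for frame, tag in enumerate(array):      # associative search on (user, vpage);
        if tag == (user, vpage):             # the matching position IS the real frame
            return (frame << 12) | offset
    return None                              # no match -> relocation exception / page fault

assign(user=3, vpage=5, frame=17)
print(hex(translate(3, 0x5123)))             # 0x11123 : frame 17, offset 0x123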

--
virtualization experience starting Jan1968, online at home since Mar1970

Multiple Virtual Memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple Virtual Memory
Newsgroups: alt.folklore.computers
Date: Sun, 13 Mar 2011 23:51:47 -0400
John Levine <johnl@iecc.com> writes:
I used TSS at Princeton in the late 1960s. The paging worked, but the performance was awful, so it didn't work well.

IBM apparently said it'd support 50 users, but the reality was about eight of us at a time. Fortunately the 2741 terminals were unreliable enough that there were rarely more than 8 working at once.


re:
https://www.garlic.com/~lynn/2011d.html#71 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#72 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#74 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#81 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#82 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#1 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#2 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#3 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#4 Multiple Virtual Memory

The SE at the univ. was doing a lot of TSS testing at the same time I was doing cp67 ... so we had to share & trade off the 360/67 on weekends.

We did a synthetic script for fortran program edit, compile and execute, with wait times inserted between emulated terminal operations.

I had better thruput and response for 30 CMS users running the script than he did with 4 TSS users running the (essentially same) script ... this was early on, in spring '68, before I had done hardly any improvements to cp67.

There was huge bloat in TSS/360 pathlength, lots of stuff was being done by rote w/o really good understanding of why, the fixed kernel real storage size had bloated, and the single-level-store stuff had gone really crazy (if you have 16mbytes to do anything, try to use it all ... regardless of what you do).

tss/360 was originally supposed to run on 512kbyte 360/67 ... but kernel bloat size required the 360/67s be upgraded to 768kbyte (minimum). 360/67 single processor was effectively same as 360/65 and could only get 1mbyte max ... but a pair of 360/67s in multiprocessor would have 2mbyte max.

one of the major tss players tried to gloss over a huge problem by claiming that a two processor, 2mbyte tss system having 3.8 times the throughput of a one processor, 1mbyte tss system ... was because tss had the best multiprocessor support on the planet.

The real problem was that tss/360 bloat was so bad it pretty much page thrashed in a 1mbyte machine (regardless of what you did). A 2mbyte system finally had enough storage (after fixed kernel requirements) to actually get some work done (the 3.8 times improvement wasn't because the multiprocessor support was the best on the planet ... 3.8 times on twice the hardware ... it was because it was no longer page thrashing ... but even at 3.8 times, it was still much worse than cp67).

Note that while 2mbytes eliminated the worst of the page-constrained operation ... the pathlength bloat was still enormous (compared to cp67/cms) and every application on tss tended to have several times the working set of the cms equivalent (on a 2mbyte machine, while it was no longer page thrashing to death ... it still required a very large number of page operations to get anything done).

Much later, in the tss/370 days ... there was significant work done on tss pathlength bloat ... while vm370 pathlength bloat was increasing. this shows up in some of the '80s comparisons I did ... included in this past post
https://www.garlic.com/~lynn/2001m.html#53 TSS/360

other posts mentioning above TSS analysis
https://www.garlic.com/~lynn/2001n.html#18 Call for folklore - was Re: So it's cyclical.
https://www.garlic.com/~lynn/2001n.html#26 Open Architectures ?
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002c.html#39 VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
https://www.garlic.com/~lynn/2002e.html#32 What goes into a 3090?
https://www.garlic.com/~lynn/2002e.html#47 Multics_Security
https://www.garlic.com/~lynn/2002f.html#42 Blade architectures
https://www.garlic.com/~lynn/2002l.html#14 Z/OS--anything new?
https://www.garlic.com/~lynn/2002l.html#44 Thirty Years Later: Lessons from the Multics Security Evaluation
https://www.garlic.com/~lynn/2002o.html#15 Home mainframes
https://www.garlic.com/~lynn/2002o.html#78 Newsgroup cliques?
https://www.garlic.com/~lynn/2002p.html#44 Linux paging
https://www.garlic.com/~lynn/2003e.html#56 Reviving Multics
https://www.garlic.com/~lynn/2003f.html#56 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003j.html#2 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2003k.html#48 Who said DAT?
https://www.garlic.com/~lynn/2003o.html#52 Virtual Machine Concept
https://www.garlic.com/~lynn/2004c.html#9 TSS/370 binary distribution now available
https://www.garlic.com/~lynn/2004c.html#35 Computer-oriented license plates
https://www.garlic.com/~lynn/2004c.html#61 IBM 360 memory
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2007c.html#23 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2007f.html#18 What to do with extra storage on new z9
https://www.garlic.com/~lynn/2007g.html#72 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007k.html#46 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007p.html#65 Translation of IBM Basic Assembler to C?
https://www.garlic.com/~lynn/2008m.html#63 CHROME and WEB apps on Mainframe?
https://www.garlic.com/~lynn/2009f.html#37 System/360 Announcement (7Apr64)
https://www.garlic.com/~lynn/2009l.html#55 IBM halves mainframe Linux engine prices
https://www.garlic.com/~lynn/2010.html#86 locate mode, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010e.html#17 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
https://www.garlic.com/~lynn/2010k.html#51 Information on obscure text editors wanted
https://www.garlic.com/~lynn/2011.html#20 IBM Future System
https://www.garlic.com/~lynn/2011.html#73 Speed of Old Hard Disks - adcons
https://www.garlic.com/~lynn/2011b.html#78 If IBM Hadn't Bet the Company

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM100 - Rise of the Internet

From: lynn@garlic.com (Lynn Wheeler)
Date: 14 Mar, 2011
Subject: IBM100 - Rise of the Internet
Blog: Greater IBM
re:
https://www.garlic.com/~lynn/2011d.html#65 IBM100 - Rise of the Internet
https://www.garlic.com/~lynn/2011d.html#66 IBM100 - Rise of the Internet

other internet trivia (thread between supercomputers and electronic commerce)

two of the people mentioned in this reference to jan92 meeting in ellison's conference room
https://www.garlic.com/~lynn/95.html#13

later leave oracle and show up at a small client/server startup responsible for something called "commerce server". we get brought in to consult (we had also left by that time) because they want to do payments on their server; the startup had also invented some technology called "SSL" that they want to use (the result is now frequently called electronic commerce).

When IBM dissolved SBS ... the birds went to Hughes (now at Boeing) and the telephone business went to MCI. MCI was also a partner in the bid response for the NSFNET backbone (operational precursor to the modern internet).

MCI had also provided the funding for the "commerce server" development ... the initial implementation was a "mall paradigm" capable of supporting a large number of different merchants. MCI was looking to provide online ecommerce hosting for lots of merchants.

Since then, that MCI
https://en.wikipedia.org/wiki/MCI_Inc
morphed
https://en.wikipedia.org/wiki/MCI_Inc.
getting up there with some others
http://www.pbs.org/wgbh/pages/frontline/shows/wallstreet/wcom/
http://www.pbs.org/now/politics/corpscandalupdates.html

A single-store "commerce server" was also done for merchants that wanted to run their own operation.

part of "commerce server" and "electronic commerce" was something called the "payment gateway" ... which acts as a gateway between ecommerce servers on the internet and the payment networks. I've periodically referred to the "payment gateway" as the original SOA ... misc. past posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

--
virtualization experience starting Jan1968, online at home since Mar1970

I actually miss working at IBM

From: lynn@garlic.com (Lynn Wheeler)
Date: 14 Mar, 2011
Subject: I actually miss working at IBM
Blog: Greater IBM
previous post about being epidemic:
https://www.garlic.com/~lynn/2011d.html#78 I actually miss working at IBM

posted recently by somebody: "Enron was a dry run and it worked so well it has become institutionalized" ... of course MCI/WORLDCOM was right up there also. Supposedly SOX was passed to prevent similar events in the future. However, apparently because GAO didn't think SEC was doing anything, it started doing reviews of public company financial filings and reported a significant uptick in filings that were fraudulent (or possibly just major audit errors) ... things that SOX was billed as preventing, even with executives going to jail. The motivation was boosting executive compensation, and even if correct financials were later refiled, executive compensation wasn't corrected.

Semi-facetious choice: 1) SOX had no effect on fraudulent financial filings, 2) SOX motivated the significant uptick in fraudulent financial filings, 3) if it weren't for SOX, all public company filings would be fraudulent.

Specifically for the financial industry, there is lots of press that gov. shouldn't be playing in executive bonuses. Note that the NY state comptroller published a report that during the financial bubble there was more than a 400% spike in wallstreet bonuses, and there has been enormous pressure since the bubble burst for bonuses not to return to pre-bubble levels. Much of the wallstreet spike in compensation came from fees & commissions on the estimated $27T in triple-A rated toxic CDO transactions (where the triple-A ratings were being "bought" ... both the sellers and the rating agencies knew they weren't worth the triple-A ratings ... from Oct2008 congressional hearings). Note that SOX also required SEC to do something about the rating agencies ... but there (also) doesn't appear to have been anything done except for a report.

oh, I also recently referred to MCI in the "IBM100 - Rise of the Internet" thread ... as well as SBS. There was a joke about SBS that so many IBMers transferred to SBS ... they had to recreate IBM's 14-level management hierarchy ... but for only 2000 people ... half the company were directors and above (the massive IBM infrastructure polluting everything it touched). One of the final acts before SBS was dissolved (and the phone business went to MCI) was sending the whole salesforce & spouses to the 100% club (except it really needed to be called the 0% club). posts in that thread:
https://www.garlic.com/~lynn/2011d.html#65 IBM100 - Rise of the Internet
https://www.garlic.com/~lynn/2011d.html#66 IBM100 - Rise of the Internet
https://www.garlic.com/~lynn/2011e.html#6 IBM100 - Rise of the Internet

past posts in this thread:
https://www.garlic.com/~lynn/2010o.html#79 I actually miss working at IBM
https://www.garlic.com/~lynn/2010q.html#50 I actually miss working at IBM
https://www.garlic.com/~lynn/2010q.html#52 I actually miss working at IBM
https://www.garlic.com/~lynn/2010q.html#57 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#60 I actually miss working at IBM
https://www.garlic.com/~lynn/2011.html#0 I actually miss working at IBM
https://www.garlic.com/~lynn/2011c.html#28 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#54 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#85 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011d.html#12 I actually miss working at IBM
https://www.garlic.com/~lynn/2011d.html#13 I actually miss working at IBM
https://www.garlic.com/~lynn/2011d.html#15 I actually miss working at IBM
https://www.garlic.com/~lynn/2011d.html#16 I actually miss working at IBM

--
virtualization experience starting Jan1968, online at home since Mar1970

Multiple Virtual Memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple Virtual Memory
Newsgroups: alt.folklore.computers
Date: Mon, 14 Mar 2011 11:55:25 -0400
"Dave Wade" <dave.g4ugm@gmail.com> writes:
I used MTS on the 360/67 at Newcastle University and when we started to page it slowed. Demand paging as a way to extend real store still gives issues today. The paging rate is the first counter you look at on a sad Windows server. One thing that made it worse in the 60's was the fact that Fortran stores its arrays in such a way that varying the last subscript fastest tends to access multiple storage pages, and as that's the natural way to write code, much naive fortran sucked in performance terms.

processor caches are now larger than 360/67 storage/memory.
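
To put a number on the array-order point in the quote above, a small Python sketch (assuming 8-byte elements and 4k pages; the array size is made up) counting the distinct pages touched by one pass of 1000 accesses, unit stride vs whole-row stride:

# pages touched by ONE inner-loop pass over a 2d array of 8-byte elements:
# traversal matching the storage order stays within a couple of 4k pages,
# traversal striding across it touches a different page per element

N, ELEM, PAGE = 1000, 8, 4096          # N x N array

def pages_in_one_pass(stride_elems):
    touched = set()
    for k in range(N):                  # one pass of N consecutive accesses
        byte_addr = k * stride_elems * ELEM
        touched.add(byte_addr // PAGE)
    return len(touched)

print(pages_in_one_pass(1))            # unit stride: ~2 pages
print(pages_in_one_pass(N))            # stride of a whole row/column: 1000 pages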

note that one of the previous references mentioned that for some number of recent windows releases ... they were doing a FIFO replacement algorithm

recent mention of rewriting "routes" for major airline res system
https://www.garlic.com/~lynn/2011c.html#42 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011d.html#43 Sabre; The First Online Reservation System

and getting a 100 times performance improvement. the secret was compressing the information for all flt segments for all commercial scheduled flts in the world (everything on the OAG tape) and all commercial airports ... so that it could be memory resident. The initial implementation was only 20 times faster. I then re-org'ed the operation so that it was aligned with processor cache operation and got another factor of five improvement (for 100 times overall).

the science center had ported apl\360 to cms for cms\apl ... eliminating all the code in apl\360 for doing its own terminal support, multi-tasking, swapping, etc. However, the base apl\360 environment was 16kbyte (sometimes 32kbyte) workspaces that were swapped as a single unit. apl\360 workspace storage management involved allocating a new storage location on every assignment ... and when the workspace was exhausted, doing garbage collection to compress all allocated storage to a contiguous area and then starting all over. this worked reasonably well for a swapping environment, but was a disaster in a larger virtual memory, demand paged environment. every apl application was just about guaranteed to quickly touch nearly every page in the virtual address space repeatedly over a small period of time (in effect, the working set size always became the same as the virtual memory size, regardless of the size of the apl application). so one of the other things that had to be done was adapt the cms\apl storage operation to a large virtual memory, demand paged environment.
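
A sketch of the apl\360 allocation behavior just described (a deliberate simplification, not the apl\360 code; workspace and variable sizes are made up): every assignment takes fresh storage and exhaustion triggers compaction, so even a tiny program cycles through every workspace page.

# sketch of apl\360-style workspace storage management: every assignment
# allocates fresh space; when the workspace is exhausted, garbage-collect by
# compacting live values to the bottom -- touching every workspace page

PAGE, WORKSPACE_PAGES = 4096, 8
WS_SIZE = PAGE * WORKSPACE_PAGES

class Workspace:
    def __init__(self):
        self.next_free = 0
        self.live = {}                    # name -> (offset, size)
        self.pages_touched = set()

    def touch(self, offset, size):
        for p in range(offset // PAGE, (offset + size - 1) // PAGE + 1):
            self.pages_touched.add(p)

    def assign(self, name, size):
        if self.next_free + size > WS_SIZE:
            self.garbage_collect()
        self.live[name] = (self.next_free, size)   # always NEW storage, old copy dead
        self.touch(self.next_free, size)
        self.next_free += size

    def garbage_collect(self):
        offset = 0                        # compact all live values to contiguous low storage
        for name, (_, size) in self.live.items():
            self.live[name] = (offset, size)
            self.touch(offset, size)
            offset += size
        self.next_free = offset

ws = Workspace()
for i in range(200):
    ws.assign("A", 1024)                  # re-assigning the same small variable over and over
print(len(ws.pages_touched), "of", WORKSPACE_PAGES, "workspace pages touched")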

the science center had a bunch of virtual memory monitoring tools, as well as virtual memory modeling tools and simulators (i.e. take real traces and run them thru simulators testing a large variety of different algorithms). one of the tools took an application I&D memory trace ... along with the application load map ... and did semi-automated program re-organization for optimal performance in a demand paged virtual memory environment. Part of this tool included display & analysis of the I&D memory trace ... and it was used in the cms\apl storage rework.

the tool was later released in the mid-70s as a customer product called VS/Repack (for its semi-automated program reorganization). It was also used internally by major os/360 compilers, applications, subsystems, and dbms for their migration from the real storage/memory environment to the 370 virtual memory environment.
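
Not the VS/Repack algorithm itself, but a sketch of the general trace-driven idea (hypothetical routine names and a simple greedy heuristic): routines referenced close together in time get packed onto the same pages.

# sketch of trace-driven program re-organization: routines referenced close
# together in time get packed onto the same pages
# (NOT the actual vs/repack algorithm -- just an illustration)

from collections import defaultdict

PAGE = 4096

def co_reference_counts(trace, window=32):
    counts = defaultdict(int)
    for i, a in enumerate(trace):
        for b in trace[i + 1 : i + window]:   # pairs referenced within a short window
            if a != b:
                counts[frozenset((a, b))] += 1
    return counts

def pack(routines, sizes, counts):
    # greedy: keep appending the routine most strongly co-referenced
    # with what has already been placed
    order, placed = [], set()
    while len(order) < len(routines):
        best = max((r for r in routines if r not in placed),
                   key=lambda r: sum(counts.get(frozenset((r, p)), 0) for p in placed))
        order.append(best)
        placed.add(best)
    layout, offset = {}, 0
    for r in order:
        layout[r] = (offset // PAGE, offset)      # (page number, byte offset)
        offset += sizes[r]
    return layout

trace = ["main", "parse", "eval", "parse", "eval", "print", "main", "parse"]
sizes = {"main": 3000, "parse": 2500, "eval": 2000, "print": 1500}
print(pack(sizes.keys(), sizes, co_reference_counts(trace)))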

past posts in this thread:
https://www.garlic.com/~lynn/2011d.html#71 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#72 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#74 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#81 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#82 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#1 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#2 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#3 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#4 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#5 Multiple Virtual Memory

misc. past posts mentioning vs/repack:
https://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
https://www.garlic.com/~lynn/99.html#68 The Melissa Virus or War on Microsoft?
https://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#31 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001c.html#33 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#46 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#50 IBM going after Strobe?
https://www.garlic.com/~lynn/2002f.html#50 Blade architectures
https://www.garlic.com/~lynn/2003f.html#15 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#21 "Super-Cheap" Supercomputing
https://www.garlic.com/~lynn/2003f.html#53 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#15 Disk capacity and backup solutions
https://www.garlic.com/~lynn/2003h.html#8 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003j.html#32 Language semantics wrt exploits
https://www.garlic.com/~lynn/2004.html#14 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004c.html#21 PSW Sampling
https://www.garlic.com/~lynn/2004m.html#22 Lock-free algorithms
https://www.garlic.com/~lynn/2004n.html#55 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#7 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/2004q.html#76 Athlon cache question
https://www.garlic.com/~lynn/2005.html#4 Athlon cache question
https://www.garlic.com/~lynn/2005d.html#41 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#48 Secure design
https://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005j.html#62 More on garbage collection
https://www.garlic.com/~lynn/2005k.html#17 More on garbage collection
https://www.garlic.com/~lynn/2005m.html#28 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005n.html#18 Code density and performance?
https://www.garlic.com/~lynn/2005o.html#5 Code density and performance?
https://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage
https://www.garlic.com/~lynn/2006b.html#23 Seeking Info on XDS Sigma 7 APL
https://www.garlic.com/~lynn/2006e.html#20 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006e.html#46 using 3390 mod-9s
https://www.garlic.com/~lynn/2006i.html#37 virtual memory
https://www.garlic.com/~lynn/2006j.html#18 virtual memory
https://www.garlic.com/~lynn/2006j.html#22 virtual memory
https://www.garlic.com/~lynn/2006j.html#24 virtual memory
https://www.garlic.com/~lynn/2006l.html#11 virtual memory
https://www.garlic.com/~lynn/2006o.html#23 Strobe equivalents
https://www.garlic.com/~lynn/2006o.html#26 Cache-Size vs Performance
https://www.garlic.com/~lynn/2006r.html#12 Trying to design low level hard disk manipulation program
https://www.garlic.com/~lynn/2006x.html#1 IBM sues maker of Intel-based Mainframe clones
https://www.garlic.com/~lynn/2006x.html#16 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2007g.html#31 Wylbur and Paging
https://www.garlic.com/~lynn/2007m.html#55 Capacity and Relational Database
https://www.garlic.com/~lynn/2007o.html#53 Virtual Storage implementation
https://www.garlic.com/~lynn/2007o.html#57 ACP/TPF
https://www.garlic.com/~lynn/2007s.html#41 Age of IBM VM
https://www.garlic.com/~lynn/2008c.html#24 Job ad for z/OS systems programmer trainee
https://www.garlic.com/~lynn/2008c.html#78 CPU time differences for the same job
https://www.garlic.com/~lynn/2008d.html#35 Interesting Mainframe Article: 5 Myths Exposed
https://www.garlic.com/~lynn/2008e.html#16 Kernels
https://www.garlic.com/~lynn/2008f.html#36 Object-relational impedence
https://www.garlic.com/~lynn/2008l.html#81 Intel: an expensive many-core future is ahead of us
https://www.garlic.com/~lynn/2008m.html#69 Speculation ONLY
https://www.garlic.com/~lynn/2008q.html#65 APL
https://www.garlic.com/~lynn/2010j.html#48 Knuth Got It Wrong
https://www.garlic.com/~lynn/2010j.html#81 Percentage of code executed that is user written was Re: Delete all members of a PDS that is allocated
https://www.garlic.com/~lynn/2010k.html#8 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010k.html#9 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010m.html#5 Memory v. Storage: What's in a Name?

--
virtualization experience starting Jan1968, online at home since Mar1970

I actually miss working at IBM

From: lynn@garlic.com (Lynn Wheeler)
Date: 14 Mar, 2011
Subject: I actually miss working at IBM
Blog: Greater IBM
re:
https://www.garlic.com/~lynn/2011e.html#7 I actually miss working at IBM

"Enron was a dry run and it worked so well it has become institutionalized"

separate from SEC doing little or nothing (even when SOX supposedly required them to) ... other Enron tidbits:

"Mr" did bank modernization act ... which included repeal of Glass-Steagall. Then when the head of CFTC proposed regulating commodities, "Mrs" was appointed replacement. Then "Mr" did commodities modernization act prohibiting commodity regulation (billed as loophole/favor for Enron, but also played significant role all during the past decade) ... at which time, "Mrs" resigns and joins Enron board & member of audit committee.

"Mr" bank & commodity modernization
http://content.time.com/time/specials/packages/article/0,28804,1877351_1877350_1877330,00.html
commodity regulation proposal and "Mrs" replaces chairperson
http://www.bloomberg.com/apps/news?pid=20601109&refer=home&sid=aYJZOB_gZi0I
Enron "loophole" and then "Mrs" resigns and joins Enron board
http://www.nytimes.com/2008/11/17/business/17grammside.html
"Mr" & "Mrs" Enron "favor"
https://web.archive.org/web/20080711114839/http://www.villagevoice.com/2002-01-15/news/phil-gramm-s-enron-favor/

--
virtualization experience starting Jan1968, online at home since Mar1970

Multiple Virtual Memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple Virtual Memory
Newsgroups: alt.folklore.computers
Date: Mon, 14 Mar 2011 16:18:01 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
The science center initial CP40 implementation was adding associative virtual memory hardware to 360/40. A difference between the Atlas hardware implementation and the implementation for 360/40 was that the 360/40 implementation included a process identifier.

re:
https://www.garlic.com/~lynn/2011e.html#2 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#3 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#4 Multiple Virtual Memory

the virtual memory hardware for 360/40 ... at least appeared to add "process identifier" for its "inverted table" implementation (compared to atlas) ... allowing for multiple concurrent executing tasks/users (aka multiple different virtual address spaces) ... separate from the operational issues dealing with multiple concurrent executings tasks/users ... like choice of replacement algorithm and mechanism for controlling page thrashing (caused by contention from excessive/high multi-task/user levels).

re:
https://www.garlic.com/~lynn/2011d.html#71 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#72 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#74 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#81 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#82 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#1 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#5 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#8 Multiple Virtual Memory

one of the downsides of the single-level-store, demand page paradigm ... is large commercial applications that sequentially process large amounts of data. In the file paradigm ... it is relatively straight-forward to do large-block asynchronous double buffering (both read-ahead and write-behind) ... allowing overlap between processing and transfers as well as larger/more efficient transfers (also, processed data is quickly discarded by being overlaid with subsequent i/o operations). In the single-level-store, demand paging case ... this processing slows down significantly with synchronous operation for one page read at a time, and data that has been finished with tends to linger around. To boost single-level-store, demand page performance for this type of operation requires some sort of application operational hints ... and/or sophisticated system heuristics to recognize things like sequential processing.
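
A sketch contrasting the two paradigms under simple assumed timings (illustrative numbers only, not measurements): synchronous one-page-at-a-time faulting serializes i/o and processing, while double-buffered block i/o overlaps them.

# elapsed-time sketch: synchronous one-page-at-a-time faulting vs
# asynchronous double-buffered block i/o (assumed timings, for illustration)

def single_level_store(pages, io_ms, cpu_ms_per_page):
    # each 4k page: synchronous fault (i/o), then process it
    return pages * (io_ms + cpu_ms_per_page)

def double_buffered(pages, pages_per_block, io_ms, cpu_ms_per_page):
    blocks = -(-pages // pages_per_block)            # ceiling division
    cpu_per_block = cpu_ms_per_page * pages_per_block
    # read-ahead: while block N is processed, block N+1 is being read,
    # so elapsed time is dominated by whichever of the two is slower
    return io_ms + (blocks - 1) * max(io_ms, cpu_per_block) + cpu_per_block

print(single_level_store(1000, io_ms=25, cpu_ms_per_page=2))                    # 27000 ms
print(double_buffered(1000, pages_per_block=16, io_ms=30, cpu_ms_per_page=2))   # ~2000 ms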

for other topic drift ... later 801/risc inverted tables used segment/pto associative (instead of process/address space associative).

i've frequently contended that a lot of 801 features were hardware simplification trade-offs designed to be the opposite of what had been attempted in the failed "Future System" effort.

Part of this was "inverted tables" ... but with "segment associative" instead of process/address-space associative ... i.e. there were 16 "segment registers" ... each containing a "segment id". 801 had a 32bit virtual address ... the high four bits of the virtual address were used to select a "segment id" from the corresponding segment register. The remaining page-number bits (4k pages, so 32-12-4 ... 16bits) would be combined with the "segment id" (12 bits on the ROMP used in the pc/rt) to find the corresponding real page. This allowed a process to have some number of process-specific, "private" segments ... but also some number of "shared" segments ... where all processes that shared the same segments would use the same "segment-id".
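
My reconstruction of that translation as a sketch (numbers per the description above: top 4 bits select 1 of 16 segment registers holding a 12-bit segment id, 16-bit page number, inverted lookup on segment id plus page; not hardware documentation, and the example values are made up):

# sketch of the 801/romp-style translation described above: top 4 bits of the
# 32-bit virtual address pick 1 of 16 segment registers holding a 12-bit
# segment id; (segment id, 16-bit virtual page) is looked up in an inverted
# table to find the real page frame  (a reconstruction, not hardware doc)

PAGE_SHIFT = 12

segment_regs = [0] * 16                  # per-process (or shared) segment ids
inverted = {}                            # (segment_id, vpage) -> real frame

def translate(vaddr):
    seg_index = (vaddr >> 28) & 0xf      # high 4 bits select a segment register
    segment_id = segment_regs[seg_index] # 12-bit segment id (shared segments share ids)
    vpage = (vaddr >> PAGE_SHIFT) & 0xffff   # remaining 16 bits: page within segment
    frame = inverted.get((segment_id, vpage))
    if frame is None:
        raise Exception("page fault")
    return (frame << PAGE_SHIFT) | (vaddr & 0xfff)

segment_regs[2] = 0x7a5                  # e.g. a shared library segment id
inverted[(0x7a5, 0x0010)] = 321          # its page 0x10 resident in real frame 321
print(hex(translate(0x20010abc)))        # -> frame 321, offset 0xabc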

In the 70s, I was doing all this stuff with paged-mapped filesystem and "segment sharing" on 360/370 (first with cp67 and then with vm370). In the late 70s, I got into a small dustup with the 801 people over their relatively small number of segments. Setting up shared objects in a virtual address space required privileged kernel calls to validate permissions ... so there tended to be a trend toward relatively long-lived sharing to amortize the cost of the kernel call ... but there also tended to be a relatively large number of possible shared things ... so there needed to be smaller granularity and more of them (24bit 370 addressing could have 256 64kbyte shared segments).
https://www.garlic.com/~lynn/submain.html#mmap
and
https://www.garlic.com/~lynn/submain.html#adcon

The 801/risc counter was that it was designed to do all privilege validation in the compiler ... and the loader would guarantee that only "correct" programs were loaded for execution ... thus the hardware/system needed no protection feature ... and applications could have inline switching of segment values ... as easy as changing an address pointer in a general purpose register (no kernel call overhead to amortize; changing part of the address space access was as easy as changing address/pointer values).

This sort of fell apart when the 801/romp displaywriter follow-on was killed and it was decided to retarget to the unix workstation market. running unix required a hardware protection paradigm (between kernel and application) for permission enforcement. It then also lost the ability to do inline application switching of segment-id values ... and required kernel calls and permission validation. So one of the things investigated for the unix market was how to aggregate lots of small (shared) objects into much larger (shared) application libraries ... that were a better fit with the (relatively small number of) 256mbyte segments.

misc. past email mentioning 801/risc ... including some referring to methodology for shared object packing.
https://www.garlic.com/~lynn/lhwemail.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

Multiple Virtual Memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple Virtual Memory
Newsgroups: alt.folklore.computers
Date: Mon, 14 Mar 2011 17:26:52 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
for other topic drift ... later 801/risc inverted tables used segment/pto associative (instead of process/address space associative).

i've frequently contended that a lot of 801 features were hardware simplification trade-offs designed to be the opposite of what had been attempted in the failed "Future System" effort.


re:
https://www.garlic.com/~lynn/2011e.html#10 Multiple Virtual Memory

trying to work out packing "shared objects" for 801/romp:

Date: 11/16/84 07:22:14
From: wheeler

re: relocate;

what I'm looking for is a data flow of which lines go where out of the virtual address, thru the segment regs, thru the tlb, and off to the cache (along with the size & "how many associative" for the tlb and the cache). With that information I can verify that the design will work in the hardware. I also need a detailed description of how the tlb miss hardware will search the inverted table ... to make sure the software can do its job.

I think we have it all worked out ... but we need detailed specs. to verify what we have will work.


... snip ... top of post, old email index

Date: 11/27/84 06:24:44
From: wheeler

i got xxxxx to explain the inverted table to me yesterday. Will have replacement for RMP001 sometime later today ... with possibly some of the software area discussed.


... snip ... top of post, old email index

"xxxxx" was "father" of 801/risc

"RMP001" document that I was doing that included "processor cluster"

email from couple days earlier (14nov) ("romp small shared segments")
https://www.garlic.com/~lynn/2006y.html#email841114c
and another followup from the 27th
https://www.garlic.com/~lynn/2006y.html#email841127

from this old post
https://www.garlic.com/~lynn/2006y.html#36 Multiple mappings

old post referencing "processor cluster" part:
https://www.garlic.com/~lynn/2004m.html#17 mainframe and microprocessor

recent references to "processor clusters"
https://www.garlic.com/~lynn/2011b.html#48 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011b.html#50 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011b.html#55 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011c.html#6 Other early NSFNET backbone
https://www.garlic.com/~lynn/2011c.html#12 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#20 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#27 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#38 IBM "Watson" computer and Jeopardy
https://www.garlic.com/~lynn/2011c.html#54 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#69 The first personal computer (PC)
https://www.garlic.com/~lynn/2011d.html#7 IBM Watson's Ancestors: A Look at Supercomputers of the Past
https://www.garlic.com/~lynn/2011d.html#24 IBM Watson's Ancestors: A Look at Supercomputers of the Past
https://www.garlic.com/~lynn/2011d.html#65 IBM100 - Rise of the Internet

--
virtualization experience starting Jan1968, online at home since Mar1970

Multiple Virtual Memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple Virtual Memory
Newsgroups: alt.folklore.computers
Date: Tue, 15 Mar 2011 10:32:43 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
one of the downsides in the single-level-store, demand page paradigm ... are large commercial applications that sequentially process large amounts of data. In the file paradigm ... it is relatively straight-forward to do large block asynchronous double buffering (both read ahead and write behind) ... allowing overlap between processing and transfers as well as larger/more efficient transfers (also processed data is quickly discarded by being overlaid with subsequent i/o operations). In the single-level-store, demand paging paradigm ... this processing slows down significantly with synchronous operation, one page read at a time, and data that has been finished with tends to linger around. To boost single-level-store, demand page performance for this type of operation requires some sort of application operational hints ... and/or sophisticated system heuristics to recognize things like sequential processing.
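
As an aside, a minimal python sketch of the contrast being described ... synchronous one-page-at-a-time reads vs asynchronous double buffering with read-ahead (the block size, file argument and process() stand-in are purely illustrative assumptions, nothing from the actual systems):

import threading, queue

BLOCK = 4096          # 4k blocks, matching the demand-page case above

def process(block):
    # stand-in for whatever the application does with one block
    return sum(block[:16])

def synchronous(path):
    # "demand page" style: every read blocks the application
    with open(path, "rb") as f:
        while (blk := f.read(BLOCK)):
            process(blk)

def double_buffered(path, depth=2):
    # "file paradigm" style: a reader thread stays 'depth' blocks ahead,
    # so transfer and processing overlap and finished data is discarded
    q = queue.Queue(maxsize=depth)

    def reader():
        with open(path, "rb") as f:
            while (blk := f.read(BLOCK)):
                q.put(blk)
        q.put(None)                      # end-of-data marker

    threading.Thread(target=reader, daemon=True).start()
    while (blk := q.get()) is not None:
        process(blk)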

re:
https://www.garlic.com/~lynn/2011e.html#10 Multiple Virtual Memory

so I had done quite a bit of that for the cms paged-mapped filesystem ... which contained two parts ... the stuff in cms ... and other stuff in the cp kernel that provided the interface to the low-level paging subsystem.
https://www.garlic.com/~lynn/submain.html#mmap

now the internal network (larger than arpanet/internet from just about the beginning until late '85 or early '86) was primarily vm rscs/vnet. the rscs/vnet implementation leveraged the cp "spool" filesystem ... which underneath mapped into low-level paging 4k block transfers. part of the characteristic of the spool operation was "synchronous" 4k/page transfers ... with thruput characteristics very akin to synchronous demand-page page faults (rscs/vnet non-runnable during block transfers). This would limit typical RSCS/VNET to aggregate thruput of around 30kbytes (5-8 4k pages) per second (say 300kbits). In the days of 9.6kbit links it wasn't a significant issue.

however, for the hsdt stuff i was doing
https://www.garlic.com/~lynn/subnetwork.html#hsdt

single full-duplex T1 was 300kbit aggregate (typo, 300kbyte aggregate) ... with multiple and faster links ... needed closer to multi-mbyte thruput.

so looking at nsfnet backbone type stuff for multi-mbyte thruput
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

but for vnet/rscs supporting such aggregate thruput ... the backend spool bottleneck had to be "fixed" ... basically providing vnet/rscs with a "spool" paradigm that operated more like the cms paged-mapped filesystem ... allowing asynchronous operation, multi-page transfers, contiguous block allocation, read-aheads and write-behinds.

While I could deploy on HSDT backbone nodes ... i then tried to get the changes deployed on the corporate RSCS/VNET backbone ... to really break free the rest of corporate network ... old email
https://www.garlic.com/~lynn/2011.html#email870306
in this (linkedin) Greater IBM discussion/post
https://www.garlic.com/~lynn/2011.html#4 Is email dead? What do you think?

as referenced in the above, there were enormous forces in-play narrowly focused on converting the internal network to SNA (some of it was mis-information to the executive board ... like "PROFS was a VTAM application" ... in the same time-frame there was also a bunch of mis-information about the applicability of SNA for the NSFNET backbone).

other posts in the above thread:
https://www.garlic.com/~lynn/2011.html#1 Is email dead? What do you think?
https://www.garlic.com/~lynn/2011.html#5 Is email dead? What do you think?
https://www.garlic.com/~lynn/2011d.html#2 Is email dead? What do you think?
https://www.garlic.com/~lynn/2011d.html#4 Is email dead? What do you think?
https://www.garlic.com/~lynn/2011d.html#5 Is email dead? What do you think?
https://www.garlic.com/~lynn/2011d.html#41 Is email dead? What do you think?

old email mentioning internal network
https://www.garlic.com/~lynn/lhwemail.html#vnet

old email mentioning nsfnet backbone
https://www.garlic.com/~lynn/lhwemail.html#nsfnet

for other drift ... (linkedin) Greater IBM thread on NSFNET backbone:
https://www.garlic.com/~lynn/2011d.html#65 IBM100 - Rise of the Internet
https://www.garlic.com/~lynn/2011d.html#66 IBM100 - Rise of the Internet
https://www.garlic.com/~lynn/2011e.html#6 IBM100 - Rise of the Internet

--
virtualization experience starting Jan1968, online at home since Mar1970

Multiple Virtual Memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple Virtual Memory
Newsgroups: alt.folklore.computers
Date: Tue, 15 Mar 2011 11:34:54 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
single full-duplex T1 was 300kbit aggregate ... with multiple and faster links ... needed closer to multi-mbyte thruput.

re:
https://www.garlic.com/~lynn/2011e.html#12 Multiple Virtual Memory

finger-slip/typo ... T1 is 1.5mbit, full-duplex 3mbit, 300kbyte aggregate ... ten times the typical RSCS/VNET thruput ... and I needed at least ten times that ... 100 times typical RSCS/VNET thruput from the spool file system ... discussed in (linkedin) Greater IBM post
https://www.garlic.com/~lynn/2011.html#4 Is email dead? What do you think?
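
back-of-the-envelope arithmetic for the above (my own check, taking T1 as a round 1.5mbit; nothing here is from the original posts):

t1_bps          = 1.5e6                  # T1 line rate, one direction
full_duplex_bps = 2 * t1_bps             # both directions active
bytes_per_sec   = full_duplex_bps / 8    # 375,000 ... i.e. the "300kbyte aggregate" rounded down
rscs_spool_rate = 30_000                 # ~30kbytes/sec typical RSCS/VNET (earlier post)
print(bytes_per_sec / rscs_spool_rate)   # roughly 10x; multiple & faster links push toward ~100x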

--
virtualization experience starting Jan1968, online at home since Mar1970

Multiple Virtual Memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple Virtual Memory
Newsgroups: alt.folklore.computers
Date: Tue, 15 Mar 2011 19:44:35 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
While I could deploy on HSDT backbone nodes ... i then tried to get the changes deployed on the corporate RSCS/VNET backbone ... to really break free the rest of corporate network ... old email
https://www.garlic.com/~lynn/2011.html#email870306
in this (linkedin) Greater IBM discussion/post
https://www.garlic.com/~lynn/2011.html#4 Is email dead? What do you think?


re:
https://www.garlic.com/~lynn/2011e.html#12 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#13 Multiple Virtual Memory

part of the thread (linkedin) Greater IBM thread (on internal VNET/RSCS network)
https://www.garlic.com/~lynn/2011d.html#2 Is email dead? What do you think?

reference from ibm jargon in above
notwork - n. VNET (q.v.), when failing to deliver. Heavily used in 1988, when VNET was converted from the old but trusty RSCS software to the new strategic solution. To be fair, this did result in a sleeker, faster VNET in the end, but at a considerable cost in material and in human terms. nyetwork, slugnet

slugnet - n. VNET (q.v.) on a slow day. Some say on a fast day, and especially in 1988. notwork, nyetwork

... snip ...

there were several comments about how eventually the SNA conversion was a "better" network ... however, they put in enormous additional resources.

my counter (in the thread) was that it would have been significantly more cost-effective and efficient to have converted the RSCS/VNET links to TCP/IP (rather than to SNA; the conversion to SNA was a pointless effort ... despite the enormous mis-information to the contrary).

The base mainframe tcp/ip support may have had some performance issues ... but in that time-frame ... I had done RFC1044 support in the mainframe product and in some tuning tests at Cray Research ... was getting channel media mbyte/sec thruput between a 4341 and the Cray, using only a modest amount of the 4341 (possibly 500 times improvement in the ratio of cpu instructions executed per byte transmitted). misc. past posts mentioning RFC1044 support:
https://www.garlic.com/~lynn/subnetwork.html#1044

other past post mentioning internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
virtualization experience starting Jan1968, online at home since Mar1970

At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened

From: lynn@garlic.com (Lynn Wheeler)
Date: 16 Mar, 2011
Subject: At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened.
Blog: Mainframe Experts Network
VAX sold into the mid-range against IBM 4300 machines ... and in similar numbers for orders of small numbers of machines. The big boost in 4300 machines (over VAX) was large corporate orders involving hundreds of machines ... representing somewhat the leading edge of distributed computing. Internally, so many 4300 machines were going into departmental conference rooms ... that conference rooms became a scarce resource. The enormous number of these (internal) departmental/distributed 4300s contributed to the big explosion in the size of the internal network (which had been larger than the arpanet/internet from just about the beginning until late 85 or early 86). Decade of vax sales (sliced&diced by year, model, us/non-us):
https://www.garlic.com/~lynn/2002f.html#0

By the mid-80s, the mid-range market was moving to large PCs and workstations ... as can be seen in the vax numbers (also 4361/4381 never had the expected sales that the 4331/4341 saw) ... by the time of the 9370 (1986) ... vax sales were dropping into the hundreds (except for "microvax").

internal 4341 cluster implementations could easily kill a 3033: better price/performance, higher aggregate MIPs, higher aggregate storage, more channels, better channels, smaller footprint, etc. Both internal distributed computing and cluster support were prevented from shipping or crippled because of the threat to the 3033 (and pretty much carried over to the 3081). At one point, POK got Fishkill to cut the allocation of a critical 4341 component in half (capping the numbers they could sell).

Note that touting the huge number of channels is somewhat featuring a bug in the disk channel i/o architecture (a trade-off left over from the original 360). It is possible to revise how disk I/O is done and run multiple concurrent disk I/Os on serial interfaces up to the point of saturating the media transfer rate (reducing the problem to the number of serial interfaces needed to meet aggregate transfer requirements, w/o needing a large number of channels).
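
a rough sketch of that argument (all the rates and counts below are assumptions for illustration only, not actual hardware figures):

import math

media_rate   = 3.0    # mbyte/sec per disk actually transferring (assumption)
active_disks = 32     # disks transferring concurrently (assumption)
aggregate    = active_disks * media_rate      # 96 mbyte/sec to be moved

channel_rate = 3.0    # parallel channel tied up by its one active transfer
serial_rate  = 20.0   # serial interface multiplexing many concurrent transfers (assumption)

print(math.ceil(aggregate / channel_rate))    # 32 channels the old way
print(math.ceil(aggregate / serial_rate))     # 5 serial interfaces with concurrent disk I/O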

the 3880 disk controller had support for 3mbyte/sec data transfer ... but everything else was extremely slow ... enormously driving up channel busy. it was so bad that the 3090 group realized they had to add a whole bunch of (unplanned for) extra channels ... which added an additional TCM to 3090 manufacturing. There were jokes that POK was going to charge the 3880 product for the additional 3090 manufacturing cost. That was somewhat the leading edge of touting a large number of channels (but it was actually done to compensate for an enormous problem).

in the 90s, (at least) the financial industry spent billions on business process re-engineering of legacy mainframe software ... billed as moving to "killer micros". The issue was that a lot of online transactions were really added front-ends to legacy "batch" backend settlement ... performed in an overnight batch process. With load increases and globalization, the amount of work in the overnight window was increasing ... and there was lots of pressure to shorten the length of the window for batch settlement. The re-engineering was to be straight-through processing for transactions (eliminating the overnight batch window) running in parallel on a large number of "killer micros". However, it turned out that the parallelizing technology being used had 100 times the overhead of legacy (mainframe) Cobol batch ... totally swamping the anticipated throughput increases. The failure of those projects set back re-engineering efforts for more than a decade. While there was a huge amount of publicity at the start of the "killer micro" efforts ... there wasn't equivalent publicity about the monumental failures.
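
toy arithmetic for why the 100-times overhead swamped the gains (the processor count and relative costs are illustrative assumptions, not figures from the actual projects):

batch_cost        = 1.0     # relative cpu cost per transaction, legacy Cobol batch
parallel_overhead = 100.0   # relative cpu cost per transaction, 90s parallelizing middleware
micros            = 50      # "killer micros" applied in parallel (assumption)

# throughput relative to the single legacy batch system
print(micros / parallel_overhead)    # 0.5 ... i.e. half the legacy throughput
# vs what the raw processor count naively promised
print(micros / batch_cost)           # 50x ... break-even needs overhead below the processor count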

--
virtualization experience starting Jan1968, online at home since Mar1970

At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened

From: lynn@garlic.com (Lynn Wheeler)
Date: 17 Mar, 2011
Subject: At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened.
Blog: Mainframe Experts Network
re:
https://www.garlic.com/~lynn/2011e.html#15 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened

My wife had been con'ed into going to POK to be in charge of loosely-coupled architecture. While there she did peer-coupled shared data architecture ... which, except for IMS hot-standby, saw very little uptake until sysplex (and parallel sysplex), contributing to her not remaining long in the position. some past posts
https://www.garlic.com/~lynn/submain.html#shareddata

Later we were doing ha/cmp & cluster scale-up ... old post with reference to a meeting on the subject in Ellison's conference room, Jan92
https://www.garlic.com/~lynn/95.html#13

Related discussion (archived here) from (linkedin) Greater IBM group ... discussing "IBM Watson's Ancestors"
https://www.garlic.com/~lynn/2011d.html#7
https://www.garlic.com/~lynn/2011d.html#24
https://www.garlic.com/~lynn/2011d.html#29
https://www.garlic.com/~lynn/2011d.html#40

nearly all current supercomputers are large numbers of "killer micros" of various sorts ...

there are reports detailing Google (and others) doing large mega-datacenters with huge numbers of racks of carefully selected components for price & reliability ... at about 1/3rd the cost of similar preloaded racks from name vendors.

the new generation "z196" is able to have a large number of racks with a mixture of different kinds of processors.
http://www-03.ibm.com/systems/z/hardware/zenterprise/z196.html

I had worked on something similar in 84/85, using the 370 ROMAN chip set and the 801 Blue Iliad chip set for large numbers of processor clusters in racks.

past posts mentioning ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

recent mainframe history discussion of IMS and DB2 ... archived here
https://www.garlic.com/~lynn/2011d.html#52
https://www.garlic.com/~lynn/2011d.html#54
https://www.garlic.com/~lynn/2011d.html#55

as noted, IMS was done at a customer site ... supporting the moon mission, and then transferred to STL. STL, in the wake of the FS failure, was doing the massive, grand & glorious EAGLE DBMS effort ... while a couple miles away there was the original relational/SQL System/R effort (all done on vm/cms). With all the strategic focus on EAGLE, System/R was able to do a tech transfer to Endicott and get slipped out as SQL/DS (dos, vs1, vm/cms). Eventually, after the demise of EAGLE, System/R was asked how fast it could ship on MVS. In addition, one of the Oracle executives mentioned in the Jan92 meeting
https://www.garlic.com/~lynn/95.html#13

said that while he was at STL, he did the SQL/DS tech transfer from Endicott back to STL (as part of DB2 effort).

Because of the enormous focus on EAGLE, any effort to do relational on MVS was rather late ... a mad rush to get something out on MVS (in lieu of the failed EAGLE) ... MVS/DB2 didn't ship until 1983
https://en.wikipedia.org/wiki/IBM_DB2

when we were doing ha/cmp, I had coined the terms disaster survivability and geographic survivability ... and was also asked to write a section for the corporate continuous availability strategy document. However, it was pulled because both Rochester (as/400) and POK (mainframe) complained that they couldn't meet the objectives (the DB2 group also complained that if I went ahead with the parallel oracle activity ... it would be at least five yrs ahead of mainframe DB2).
https://www.garlic.com/~lynn/submain.html#available

--
virtualization experience starting Jan1968, online at home since Mar1970

At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened

From: lynn@garlic.com (Lynn Wheeler)
Date: 17 Mar, 2011
Subject: At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened.
Blog: Mainframe Experts Network
re:
https://www.garlic.com/~lynn/2011e.html#15 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011e.html#16 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened

Old reference to Jim being the father of modern financial data processing (also responsible for TPC benchmarking) ... for his work on formalizing transaction "semantics" (providing auditors with a higher level of trust in dataprocessing records)
https://www.garlic.com/~lynn/2008p.html#27
referencing celebrating Jim earlier in the year at Berkeley.

Much earlier, when Jim was leaving ... he was palming off a bunch of stuff on me ... including consulting with the IMS group in STL ... and interfacing with (some financial) customers running early RDBMS ... a couple old emails from the period:
https://www.garlic.com/~lynn/2007.html#email801006 ,
https://www.garlic.com/~lynn/2007.html#email801016

The largest single-system-image (mainframe) system in the late 70s was the (internal) US HONE (worldwide sales&marketing support) datacenter that had been consolidated in silicon valley in the mid-70s. By the late 70s, it was a large disk farm with as many large mainframe APs (two-processor SMPs) as could be connected to 8-tail disks (load-balancing and fall-over across the complex). Then in the early 80s, with some concern for environmental issues (earthquakes), the complex was replicated in Dallas and then a 3rd time in Boulder (with load-balancing and fall-over between the three sites).
https://www.garlic.com/~lynn/subtopic.html#hone

For some trivia ... do a satellite photo search for facebook's address in palo alto ... it's a new bldg ... however right next to it is a much older bldg ... which used to be the old consolidated HONE datacenter (it has a different occupant now).

The palo alto address isn't facebook's mega-datacenter ... which goes in near where some of the other distributed mega-datacenters have been going in. It's these mega-datacenters which have been pioneering processing, power, energy, footprint, cooling, etc. optimizations for the past decade or more.

--
virtualization experience starting Jan1968, online at home since Mar1970

End of an era

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: End of an era
Newsgroups: alt.folklore.computers
Date: Thu, 17 Mar 2011 16:28:13 -0400
re:
https://www.garlic.com/~lynn/2011d.html#83 End of an era

some more recent references on pentagon spending (to be on cspan over the weekend)
http://www.phibetaiota.net/2011/03/event-19-20-mar-c-span-pentagon-labyrinth/
http://dnipogo.org/labyrinth/

--
virtualization experience starting Jan1968, online at home since Mar1970

At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened

From: lynn@garlic.com (Lynn Wheeler)
Date: 18 Mar, 2011
Subject: At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened.
Blog: Mainframe Experts Network
re:
https://www.garlic.com/~lynn/2011e.html#15 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011e.html#16 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011e.html#17 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened

A few yrs ago, I took some technology & demonstrations to a (financial) industry body that would do straight-through processing (eliminating the overnight batch window). It heavily relied on the latest generation of parallelized RDBMS technology to meet the throughput objectives while getting the overhead (compared to legacy batch Cobol) down to possibly only five times (from the earlier efforts that were 100 times the overhead). The technology involved decomposing/translating business processes into fine-grain SQL operations ... and then relying on RDBMS technologies to handle the parallelization. It easily met the throughput and price/performance objectives of the (original, failed) efforts from the 90s.
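
a minimal sketch of the decomposition idea, using sqlite purely for illustration (the table names, amounts and the single-statement settlement step are my own, not the actual technology or schemas involved):

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
  CREATE TABLE pending(acct INTEGER, amount INTEGER);
  CREATE TABLE balances(acct INTEGER PRIMARY KEY, balance INTEGER);
  INSERT INTO balances VALUES (1, 100), (2, 200);
  INSERT INTO pending  VALUES (1, -30), (2, 50), (1, 10);
""")
# one set-oriented SQL statement per business-process step; a parallelized
# RDBMS is free to partition the work (e.g. by account) behind the scenes,
# instead of a record-at-a-time overnight batch loop
db.execute("""
  UPDATE balances
     SET balance = balance + (SELECT COALESCE(SUM(amount), 0)
                                FROM pending
                               WHERE pending.acct = balances.acct)
""")
print(db.execute("SELECT * FROM balances ORDER BY acct").fetchall())   # [(1, 80), (2, 250)]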

However, as it was taken up through various institutions, it started to meet increasing resistance (lots of references to disbelief based on the failures in the 90s). It appeared the scars ran so deep that it may have to wait for a new generation of IT executives.

from ibm jargon:
low acoustics - n. Quietness. From the 9370 blue letter: The rack-mountable IBM 9370 processor is uniquely designed for an office environment, having low floor space and power requirements, low acoustics, and an attractive, modular, systems package.

... snip ...

even tho the mid-range market window was quickly closing by the time of the 9370 ... although in the current day you can also see rack acoustic panels as a feature/option.

... and a big part of the mega-datacenters pioneering lots of price/performance and cost-effectiveness techniques is that their scale of operation is so huge ... that even minor improvements result in significant dollars.

--
virtualization experience starting Jan1968, online at home since Mar1970

Multiple Virtual Memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple Virtual Memory
Newsgroups: alt.folklore.computers
Date: Fri, 18 Mar 2011 12:07:34 -0400
re:
https://www.garlic.com/~lynn/2011e.html#3 Multiple Virtual Memory

from long ago and far away, earlier this week I (re-)gave a user group oct86 presentation on vm performance history (with a couple additional background history comments ... many of which could be familiar from old email I've posted here over the past several years). the original was in gml "foildoc" ... converted to powerpoint (actually open office impress but saved in ".ppt" format). posted here is "pdf" export (both the "overheads" version followed by the "notes" version)
https://www.garlic.com/~lynn/hill0316g.pdf

I even figured out how to do a green-bar, fanfold paper background.

some foildoc trivia:


:foildoc size=20.
.*  Ignore blank lines
.dm bl /.*
.dm lb /.*

.ms on

.if &LL@FoIl = FOIL .go fld1
.cm * if foil tags not supported, default to standard gml
:gdoc
.if &@pass = 2 .go fld2
.ty *** Using default, foil tags not supported
.dm xfoil /.pa /
.aa foil xfoil
.go fld2

...fld1
.df stxt type ('gothic' 9 medium normal) codepage t1d0base
.dm stxt(1) /.bf stxt = /
.dm etxt(1) /.pf /

...fld2
.dm stxt(4) /.fo on/
.dm etxt(4) /.cm * /

:titlep.
:title stitle='VM Performance History (86.10 SEAS)'
:title.VM Performance History
:title.86.10 SEAS
:docnum.VMP.DD.003
:date.Oct. 7, 1986
:author.Lynn Wheeler
:address.
:aline.K83/801
:aline.IBM Almaden Research
:aline.1-408-927-2680
:aline.Wheeler@ALMVMA
:eaddress.
:etitlep.

... snip ...

foildoc script (announce and description)


:frontm.
:titlep.
:title.GML for Foils
:date.August 24, 1984
:author.GMB
:author.MER
:author.RPO
:author.MHK
:address.
:aline.T.J. Watson Research Center
:aline.P.O. Box 218
:aline.Yorktown Heights, New York
:aline.&rbl.
:aline.San Jose Research Lab
:aline.5600 Cottle Road
:aline.San Jose, California
:eaddress.
:etitlep.

... snip ...

from ibm jargon:
foil - n. Viewgraph, transparency, viewfoil - a thin sheet or leaf of transparent plastic material used for overhead projection of illustrations (visual aids). Only the term Foil is widely used in IBM. It is the most popular of the three presentation media (slides, foils, and flipcharts) except at Corporate HQ, where even in the 1980s flipcharts are favoured. In Poughkeepsie, social status is gained by owning one of the new, very compact, and very expensive foil projectors that make it easier to hold meetings almost anywhere and at any time. The origins of this word have been obscured by the use of lower case. The original usage was FOIL which, of course, was an acronym. Further research has discovered that the acronym originally stood for Foil Over Incandescent Light. This therefore seems to be IBM's first attempt at a recursive language.

... snip ...

there was folklore that there was a full-time department in armonk that specialized in turning presentations into flipcharts (flipcharts were the required form for presentations to armonk executives).

--
virtualization experience starting Jan1968, online at home since Mar1970

End of an era

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: End of an era
Newsgroups: alt.folklore.computers
Date: Sat, 19 Mar 2011 13:41:40 -0400
jmfbahciv <See.above@aol.com> writes:
<grin> I'm still flabberghasted when I watched CSPAN's broadcast of the hearings having to do with the internet and cable, IIRC. In one sentence, she uttered phrases which were both for and against the issue. The picture of her in mind after that is two word drools falling out of the corners of her mouth.

CSPAN yesterday broadcast a hearing on gov. oil field licenses ... I came in part way through ... but the summary was, in effect, that somebody forgot (&/or failed to provide appropriations) to audit the oil companies and failed to collect huge billions in fees from the oil companies pumping oil under those licenses (fees were to be proportional to the price of oil). There was some statement that there was also no legal authority to go back and collect the fees retroactively. There were other references that even tho the huge billions in uncollected fees translate into oil company profits ... other (congressional) loopholes ... don't even have corporate taxes being paid on the unearned profit (although most of this occurred under the auspices of congress during the preceding two decades).

the person from the fed. agency being questioned ... did point out that when congress does appropriate money for agency audits of public oil field licenses ... they average $4 recovered for every $1 spent (roughly a 4:1 return).

--
virtualization experience starting Jan1968, online at home since Mar1970

Multiple Virtual Memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple Virtual Memory
Newsgroups: alt.folklore.computers
Date: Sat, 19 Mar 2011 23:45:16 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
from long ago and far away, earlier this week I (re-)gave a user group oct86 presentation on vm performance history (with a couple additional background history comments ... many of which could be familiar from old email I've posted here over the past several years). the original was in gml "foildoc" ... converted to powerpoint (actually open office impress but saved in ".ppt" format). posted here is "pdf" export (both the "overheads" version followed by the "notes" version)
https://www.garlic.com/~lynn/hill0316g.pdf


re:
https://www.garlic.com/~lynn/2011e.html#20 Multiple Virtual Memory

while putting together presentation ... also doing hsdt/nsf stuff

Date: 09/15/86 11:59:48
From: wheeler
To: somebody in paris

re: hsdt; another round of meetings with head of the national science foundation ... funding of $20m for HSDT as experimental communications (although it would be used to support inter-super-computer center links). NSF says they will go ahead and fund. They will attempt to work with DOE and turn this into federal government inter-agency network standard (and get the extra funding).


... snip ... top of post, old email index, NSFNET email

the above was before some of the internal politics really took hold (shutting down stuff with outside organizations). other past email mentioning stuff for NSF using HSDT
https://www.garlic.com/~lynn/lhwemail.html#nsfnet
and
https://www.garlic.com/~lynn/lhwemail.html#hsdt

some related discussion in this (linkedin) Greater IBM thread:
https://www.garlic.com/~lynn/2011d.html#65 IBM100 - Rise of the Internet
https://www.garlic.com/~lynn/2011d.html#66 IBM100 - Rise of the Internet
https://www.garlic.com/~lynn/2011e.html#6 IBM100 - Rise of the Internet

the (above/later) "NSFNET backbone" rfp was for $11.2m. Announced 28Mar1986:
https://www.garlic.com/~lynn/2002k.html#12

from the posted program announcement:
A total of up to 40 awards are planned for the two years 1986 and 1987. Support for this program is contingent on the availability of funds. This announcement does not obligate the NSF to make any awards if such funding is not available.

... snip ...

also from above:
NSFnet will be built as an Internet, or "network of networks", rather than as a separate, new network.

... snip ...

the final (major) award didn't exactly turn out like the original program announcement ... and there was never any funding allowed for HSDT. misc. past posts mentioning HSDT
https://www.garlic.com/~lynn/subnetwork.html#hsdt

besides still doing vm performance stuff ... and the presentation on vm performance history (the user group SEAS meeting was held on the isle of Jersey), the same presentation was also scheduled for a VM specialist meeting in the UK (following is 2oct86)

Date: 02/10/86 17:26:55
To: speaker distribution
Subject: VM Specialist event 13th Oct.

Hi, The venue for the event has been set at CROYDON. Please make sure you contact xxxx5 or xxxx6 while you are in Jersey to get details of how to get there. (Earlier location had to be changed due to number of attendess).

The agenda looks like this...


09.30  Introduction
09.45  History of VM Performance        -Lynn Wheeler     (Almaden)
10.45  Coffee
11.00  CMS Update                       -xxxx1            (Endicott)
12.00  Lunch
13.00  Advanced Function Printing       -xxxx2            (Almaden)
13.45  PC and VM Cooperative Processing -xxxx3            (Almaden)
14.30  Tea
14.45  MVS Recovery under VM/XA SF      -xxxx4            (Jo'burg)
15.45  Announcements & SEAS report      -xxxx5 & xxxx6    (B'stoke)
16.30  Open Forum
17.00  Close

See you on 13th...


... snip ... top of post, old email index

--
virtualization experience starting Jan1968, online at home since Mar1970

The first personal computer (PC)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The first personal computer (PC)
Newsgroups: alt.usage.english, alt.folklore.computers
Date: Sun, 20 Mar 2011 09:03:45 -0400
tony cooper <tony_cooper213@earthlink.net> writes:
The current "big deal" is a button on the dashboard used to start the car. My second automobile ('41 Ford), back in the 50s, had push-button start. My first automobile ('38 Chevvy) had a starter button on the floor.

old posts about learning to drive a '38 chevy flatbed truck (with picture) when I was eight (although it wasn't until I was 11 that I was allowed to drive a loaded 2.5 ton truck on the highway):
https://www.garlic.com/~lynn/2002i.html#59 wrt code first, document later
https://www.garlic.com/~lynn/2004c.html#41 If there had been no MS-DOS
https://www.garlic.com/~lynn/2007h.html#19 Working while young
https://www.garlic.com/~lynn/2007n.html#39 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2011.html#13 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOSor Windows

post with URL for the '38 chevy shop manual section on the starter
https://www.garlic.com/~lynn/2010k.html#44 Just wondering what precisely happened to this newsgroup

--
virtualization experience starting Jan1968, online at home since Mar1970

The first personal computer (PC)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The first personal computer (PC)
Newsgroups: alt.usage.english, alt.folklore.computers
Date: Sun, 20 Mar 2011 09:31:34 -0400
"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
It's like those key fobs that some oil companies give you to wave in front of the pump. Again, how is that so much easier than sticking your credit card into the slot? There is one definite advantage to that one, though: your fob won't work with any competitor's pump. But that's an advantage for the oil company - not for you.

the card associations are starting to do "wave" RFID chips ... which are more general. These are the "EPC" RFID chips ... i.e. developed for inventory & checkout barcodes (the "EPC" follow-on to UPC) ... except the magstripe information is encoded. There is a whole generation of EPC RFID chips effectively encoding the magstripe info in the chip. Whether they are single merchant or multiple merchant (card association) is analogous to standard plastic magstripe cards being general card-association cards or merchant-specific gift/stored-value cards.

The "EPC" RFID chips are basically static read (their barcode heritage) ... effectively very much like magstripe ... but able to be "read" w/o actually having contact. There are numerous reports about "skimmers" for such chips ... again analogous to the physical devices overlaid on atm cash machines to skim magstripe ... but able to do it at tens of feet ... potentially doing whole crowds at subway platforms.

The static "EPC" RFID chips are different from the RFID "contactless" chips that were originally designed for transactions ... ISO14443 ... contactless memory chips ... data is read & written ... but using encrypted protocols tied to something in the chip (making it harder to counterfeit) ... and contactless smartcard chips ... which actually do processing.

In the 90s, I was asked if I could do an iso14443 contactless smartcard chip ... that used sophisticated crypto and operated within transit gate elapsed time and power limitations (i.e. the chips not only do radio frequency communication but the power driving the chips comes from the same radio frequency). transit gate iso14443 requires operation within a tenth of a second at 10 centimeters (i.e. get sufficient power at 10 centimeters and do everything within .1 second).

There were some smartcard chips at the time that were able to do an equivalent security/crypto transaction ... but they required both enormous power (needed contacts ... couldn't do it within the iso14443 power profile) and tens of seconds of elapsed time.

I mentioned in the past giving a talk on the chip at the intel developers forum in the trusted computing track a decade ago ... the IDF 2001 pages have gone 404 ... but live on at the wayback machine:
https://web.archive.org/web/20011109072807/http://www.intel94.com/idf/spr2001/sessiondescription.asp?id=stp%2bs13

also doing a walk-through of a (certified) security chip fab in dresden
https://www.garlic.com/~lynn/2003j.html#63 Dealing with complexity
https://www.garlic.com/~lynn/2003k.html#53 Getting old
https://www.garlic.com/~lynn/2006l.html#57 DEC's Hudson fab
https://www.garlic.com/~lynn/2007h.html#59 ANN: Microsoft goes Open Source
https://www.garlic.com/~lynn/2008b.html#13 Education ranking
https://www.garlic.com/~lynn/2008e.html#62 Any benefit to programming a RISC processor by hand?
https://www.garlic.com/~lynn/2008o.html#78 Who murdered the financial system?
https://www.garlic.com/~lynn/2008o.html#80 Can we blame one person for the financial meltdown?
https://www.garlic.com/~lynn/2009b.html#30 The recently revealed excesses of John Thain, the former CEO of Merrill Lynch, while the firm was receiving $25 Billion in TARP funds makes me sick
https://www.garlic.com/~lynn/2009b.html#35 The recently revealed excesses of John Thain, the former CEO of Merrill Lynch, while the firm was receiving $25 Billion in TARP funds makes me sick
https://www.garlic.com/~lynn/2010f.html#83 Notes on two presentations by Gordon Bell ca. 1998

past posts mentioning IDF 2001 talk:
https://www.garlic.com/~lynn/2001c.html#20 Something wrong with "re-inventing the wheel".?
https://www.garlic.com/~lynn/2005g.html#36 Maximum RAM and ROM for smartcards
https://www.garlic.com/~lynn/2007g.html#61 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007g.html#63 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007m.html#20 Patents, Copyrights, Profits, Flex and Hercules
https://www.garlic.com/~lynn/2007v.html#37 Apple files patent for WGA-style anti-piracy tech
https://www.garlic.com/~lynn/2008e.html#76 independent appraisers
https://www.garlic.com/~lynn/2008f.html#31 confluence of virtualization and trusted computing
https://www.garlic.com/~lynn/2008f.html#35 confluence of virtualization and trusted computing
https://www.garlic.com/~lynn/2009j.html#58 Price Tag for End-to-End Encryption: $4.8 Billion, Mercator Says
https://www.garlic.com/~lynn/2009k.html#5 Moving to the Net: Encrypted Execution for User Code on a Hosting Site
https://www.garlic.com/~lynn/2009l.html#61 Hacker charges also an indictment on PCI, expert says
https://www.garlic.com/~lynn/2009m.html#48 Hacker charges also an indictment on PCI, expert says
https://www.garlic.com/~lynn/2009p.html#59 MasPar compiler and simulator
https://www.garlic.com/~lynn/2010d.html#7 "Unhackable" Infineon Chip Physically Cracked - PCWorld
https://www.garlic.com/~lynn/2010d.html#34 "Unhackable" Infineon Chip Physically Cracked
https://www.garlic.com/~lynn/2010d.html#38 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010d.html#63 LPARs: More or Less?
https://www.garlic.com/~lynn/2010f.html#74 Is Security a Curse for the Cloud Computing Industry?
https://www.garlic.com/~lynn/2010g.html#9 Far and near pointers on the 80286 and later
https://www.garlic.com/~lynn/2010g.html#53 Far and near pointers on the 80286 and later
https://www.garlic.com/~lynn/2010o.html#50 The Credit Card Criminals Are Getting Crafty
https://www.garlic.com/~lynn/2010o.html#84 CARD AUTHENTICATION TECHNOLOGY - Embedded keypad on Card - Is this the future
https://www.garlic.com/~lynn/2010p.html#72 Orientation - does group input (or groups of data) make better decisions than one person can?
https://www.garlic.com/~lynn/2010p.html#73 From OODA to AAADA
https://www.garlic.com/~lynn/2011b.html#11 Credit cards with a proximity wifi chip can be as safe as walking around with your credit card number on a poster
https://www.garlic.com/~lynn/2011c.html#59 RISCversus CISC

--
virtualization experience starting Jan1968, online at home since Mar1970

Multiple Virtual Memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple Virtual Memory
Newsgroups: alt.folklore.computers
Date: Mon, 21 Mar 2011 00:08:30 -0400
re:
https://www.garlic.com/~lynn/2011e.html#20 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#22 Multiple Virtual Memory

Date: FRI, 02/20/87 10:21:36 PST
From: wheeler

re: friday; ... as the old saying goes, "vote early and often"

btw, next week starts the west coast vm computer meetings, VM workshop is being held @ Asilomar, followed by SHARE in SanFran, followed by the IBM VMITE, followed by GUIDE in LA.

Following is the tentative wrkshp schedule ... I'm giving two talks, one on history of VM Performance (previously given at SEAS and easily ran 3-4 hrs). Network Research (previously given at Baybunch and numerous other places) ... I may also be participating in BOFs on debugging (DUMPRX) and spooling (HSDTSFS).


... snip ... top of post, old email index, NSFNET email

from vmshare:
http://vm.marist.edu/~vmshare/browse.cgi?fn=VMWK:87A&ft=MEMO

from above:
The Asilomar VM Workshop of 1987 will be held February 23-27 at Asilomar State Park on Monterey Bay in California. Registration will be all day Monday, Feb. 23rd, with the sessions being held on the 24th through the 26th. The setup of the workshop will be as in the past. Dormitory rooms will be available.

... snip ...

and (agenda at bottom):
http://vm.marist.edu/~vmshare/browse.cgi?fn=VMWKABSA&ft=MEMO

misc past posts mentioning dumprx (problem analysis implemented in rexx)
https://www.garlic.com/~lynn/submain.html#dumprx

misc. past posts mentioning hsdt:
https://www.garlic.com/~lynn/subnetwork.html#hsdt

recent post discussing hsdt "sfs" in (linkedin) Greater IBM thread:
https://www.garlic.com/~lynn/2011.html#4 Is email dead? What do you think?

also includes this old email ... referencing converting the internal network to sna/vtam
https://www.garlic.com/~lynn/2011.html#email870306

the post also references lots of mis-information regarding sna/vtam applicability for the internal network as well as the nsfnet backbone. misc. other recent posts mentioning sna/vtam misinformation:
https://www.garlic.com/~lynn/2011b.html#65 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011b.html#83 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#12 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#16 Other early NSFNET backbone
https://www.garlic.com/~lynn/2011c.html#92 A History of VM Performance
https://www.garlic.com/~lynn/2011d.html#4 Is email dead? What do you think?
https://www.garlic.com/~lynn/2011d.html#58 IBM and the Computer Revolution

misc. posts mentioning internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
virtualization experience starting Jan1968, online at home since Mar1970

Multiple Virtual Memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple Virtual Memory
Newsgroups: alt.folklore.computers
Date: Mon, 21 Mar 2011 10:25:26 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
the "TSS-Style" thing was called "RASP". Simpson later leaves and appears as a "Fellow" in Dallas working for Amdahl ... and redoing "RASP" (in "clean room"). There was some legal action that attempted to find any RASP code included in the new stuff being done at Amdahl.

re:
https://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory

old email referencing RASP ... from long ago and far away:

Date: 04/08/81 08:42:17
From: wheeler

got my hands full all day today and tonight between here, stl, & cub scouts. Another bet might be TSS, especially with the stripped down PRPQ they did for UNIX interface. RASP may just be IH (opposite of NIH) for him. I haven't heard of RASP being used for anything but demos (in some ways put it on par with VMTOOL as non-production system so far -- have you heard how much problem STL is having with both hardware & software? -- of course RASP has had higher quality people working on it).


... snip ... top of post, old email index

a little later ...

Date: 09/07/82 14:42:21
From: wheeler

talked to somebody about Amdahl RASP. IBM has had quite a large attrition rate in the RASP group ... a large portion going to Amdahl. Apparently IBM legal is gearing up to sue as soon as Amdahl announces. Comment is that big IBM legal talent is going over every line of IBM RASP code & applying for PATENTS on every possible thing they can come up with.


... snip ... top of post, old email index

later followup
Date: MON, 03/02/87 10:25:35 PST
From: wheeler

re: Amdahl; implication was that it was something similar to RASP but couldn't talk about it. There was a oblique comment regarding IBM suing Amdahl over RASP and that in detailed comparison of the code, only one small section was even remotely similar and that got recoded.

The group is looking for people but are avoiding approaching any ibm'ers (although I've heard from various IBM people about contacts from other Amdahl areas, Amdahl appears to be offering $$ in excess of the IBM going rate for VM system programmers). The Simpson group is even kept isolated from other Amdahl people which are working in VM and/or other IBM related software areas.


... snip ... top of post, old email index

past posts mentioning RASP
https://www.garlic.com/~lynn/2000f.html#68 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#70 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001b.html#73 7090 vs. 7094 etc.
https://www.garlic.com/~lynn/2002g.html#0 Blade architectures
https://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002j.html#75 30th b'day
https://www.garlic.com/~lynn/2002q.html#31 Collating on the S/360-2540 card reader?
https://www.garlic.com/~lynn/2003e.html#65 801 (was Re: Reviving Multics
https://www.garlic.com/~lynn/2005p.html#44 hasp, jes, rasp, aspen, gold
https://www.garlic.com/~lynn/2006f.html#19 Over my head in a JES exit
https://www.garlic.com/~lynn/2006w.html#24 IBM sues maker of Intel-based Mainframe clones
https://www.garlic.com/~lynn/2006w.html#28 IBM sues maker of Intel-based Mainframe clones
https://www.garlic.com/~lynn/2007d.html#17 Jim Gray Is Missing
https://www.garlic.com/~lynn/2007m.html#69 Operating systems are old and busted
https://www.garlic.com/~lynn/2010i.html#44 someone smarter than Dave Cutler
https://www.garlic.com/~lynn/2010o.html#0 Hashing for DISTINCT or GROUP BY in SQL
https://www.garlic.com/~lynn/2010p.html#42 Which non-IBM software products (from ISVs) have been most significant to the mainframe's success?
https://www.garlic.com/~lynn/2011.html#85 Two terrific writers .. are going to write a book

--
virtualization experience starting Jan1968, online at home since Mar1970

Multiple Virtual Memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple Virtual Memory
Newsgroups: alt.folklore.computers
Date: Mon, 21 Mar 2011 10:54:28 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
In the performance arena, I worked on several areas, a) generalized path length reduction, b) fastpath - specialized paths for most frequently encountered cases, c) control data structures that would minimize CPU overhead, d) identifying closed CP/67 subroutines and modifying them to use pre-allocated savearea in page 0, and changing their callers to use BALR rather than SVC, e) improving the page replacement algorithm to use reference bits & global LRU (rather than FIFO), f) implementing feedback/feedforward controls in decision making. The dispatcher changes implemented code that implicitly took advantage of which possible virtual machines might require status updates. CPEXBLOKs were also placed on a master chain instead of being chained off the UTABLE. Finally an explicit in-q chain was created

re:
https://www.garlic.com/~lynn/2011e.html#3 Multiple Virtual Memory

as highlighted in the SEAS performance history presentation ... there were significant problems introduced by "enhancements" in the HPO2.5-HPO3.4 period which were taking awhile to clean up (reverting to a "clean" global LRU just being one)
https://www.garlic.com/~lynn/2011e.html#20 Multiple Virtual Memory

from long ago and far away

Date: FRI, 03/20/87 09:27:21 PST
From: wheeler

re: big pages; for additional background see vmperf script on vmpcd. Another set of fixes that went into hpo5 is for >16meg support (on vmpcd also see >16meg forum and also section in vmperf script).

re: >16meg; when pok originally contacted me about >16meg hardware, I was told that the development group plan's support was to perform "bring-downs" by writing to DASD and then bringing back in. I suggested an alternative approach based on the fact that CP (almost) never operates directly on any data in virtual storage, but alwas copies it first to a field in some CP control block. I provided a subroutine that would be placed in DMKPSA (somewhat similar to the fetch protect check subroutine already in DMKPSA) that would do the copying, if necessary it would fix up dummy page table, change CR0/CR1, enter supervisor state, translate mode and use an MVCL to copy the necessary data.

The development group decided to stick with their originally plan, but substituted my subroutine for DASD write/read. That got them into the <16meg constraint they are today. HPO5 contains clean-up of the page replacement algorithm per VMPERF SCRIPT along with minor support for limited copying rather than bring-down (priv. instruction copying and one or two others).

re: big pages; big pages represent a performance "benefit" from the stand-point that more data is moved per operation, this somewhat optimizes CPU overhead and DASD access time (3380 seek/rotation-delay is done once per group of pages). On the other hand it represents a performance "cost" in terms of channel capacity and real storage utilization to transfer pages that wouldn't otherwise have been required at that point in time. The "benefit"/"cost" trade-off determines whether big pages help or hinder. Prior to HPO5 there was also a "cost" associated with the "big page" code would do a significantly poorer job of managing real pages associated with it.

A trivial example is customer running 3081, hpo4.2, and STC electronic drum. With the STC drum allocated as SWAP, the system ran at 70% cpu utilization, changing the STC drum to PAGE, the system ran at 100% cpu utilization (with essentially the same ratio of prob/supervisor). The 3081 CPU advantage of block reading 400-600 pages a second was rather negligible (paging CPU overhead is rather quite small in VM compared to other operations ... although quite large compared to what it use to be in cp/67). There was negligible "benefit" associated with DASD access since the STC drum had no seek and/or rotational delay. The big difference was in the "overhead/cost" associated with 3.4/4.2 big pages vis-a-vis small pages (a cost differential that will be substantially lower in 5.0).


... snip ... top of post, old email index

A big part of HPO5 was better alignment of the composition of the "big pages" with global LRU replacement (as well as making treatment of pages above & below the 16mbyte line more uniform).
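
a toy model of that benefit/cost trade-off (all the numbers ... the seek/rotation/transfer times, the group size, and the fraction of a group actually needed ... are assumptions for illustration, not measurements from the email above):

seek_ms, rot_ms, xfer_ms = 16.0, 8.3, 1.3   # assumed per-operation and per-4k-page figures, 3380-ish

def io_time(pages):
    # one seek + one rotational delay per operation, however many pages it moves
    return seek_ms + rot_ms + pages * xfer_ms

needed, group, useful = 100, 10, 0.6   # pages touched; big-page size; fraction of a group really needed

single_page = needed * io_time(1)                            # ~2560 ms of device time
big_page    = (needed / (group * useful)) * io_time(group)   # ~620 ms of device time
extra_pages = (needed / useful) - needed                     # ~67 extra pages moved & held in storage
print(single_page, big_page, extra_pages)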

misc past posts mentioning "big pages":
https://www.garlic.com/~lynn/2001k.html#60 Defrag in linux? - Newbie question
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002c.html#29 Page size (was: VAX, M68K complex instructions)
https://www.garlic.com/~lynn/2002c.html#48 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#11 What are some impressive page rates?
https://www.garlic.com/~lynn/2002f.html#20 Blade architectures
https://www.garlic.com/~lynn/2002l.html#36 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002m.html#4 Handling variable page sizes?
https://www.garlic.com/~lynn/2003b.html#69 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003d.html#21 PDP10 and RISC
https://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#9 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#16 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#48 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#12 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003o.html#61 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2003o.html#62 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2004.html#13 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004e.html#16 Paging query - progress
https://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
https://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
https://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005j.html#51 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005l.html#41 25% Pageds utilization on 3390-09?
https://www.garlic.com/~lynn/2005n.html#18 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#19 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#21 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#22 Code density and performance?
https://www.garlic.com/~lynn/2006j.html#2 virtual memory
https://www.garlic.com/~lynn/2006j.html#3 virtual memory
https://www.garlic.com/~lynn/2006j.html#4 virtual memory
https://www.garlic.com/~lynn/2006j.html#11 The Pankian Metaphor
https://www.garlic.com/~lynn/2006l.html#13 virtual memory
https://www.garlic.com/~lynn/2006r.html#35 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#37 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#39 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006t.html#18 Why magnetic drums was/are worse than disks ?
https://www.garlic.com/~lynn/2006v.html#43 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006y.html#9 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2007o.html#32 reading erased bits
https://www.garlic.com/~lynn/2008f.html#6 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2008k.html#80 How to calculate effective page fault service time?
https://www.garlic.com/~lynn/2010g.html#23 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2010g.html#42 Interesting presentation
https://www.garlic.com/~lynn/2010g.html#72 Interesting presentation

--
virtualization experience starting Jan1968, online at home since Mar1970

Intepreted Languages

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Intepreted Languages
Newsgroups: alt.usage.english, alt.folklore.computers
Date: Mon, 21 Mar 2011 12:42:49 -0400
scott@slp53.sl.home (Scott Lurndal) writes:
Java is compiled into an intermediate "instruction set" called bytecode.

A bytecode interpreter (which may have been implemented in hardware) executes the resulting bytecode. The Oracle (nee Sun) hotspot JVM compiles the bytecode into machine code on the fly.

Python and Perl are interpreted.


aka JIT ... just in time ... compiler technology (dynamic translation)
https://en.wikipedia.org/wiki/Just-in-time_compilation

from above:
JIT builds upon two earlier ideas in run-time environments: bytecode compilation and dynamic compilation. It converts code at runtime prior to executing it natively, for example bytecode into native machine code.

Several modern runtime environments, such as Microsoft's .NET Framework and most implementations of Java, rely on JIT compilation for high-speed code execution.


... snip ...

back in the 1980 time-frame ... when the company was going to replace a large variety of internal & embedded microprocessors with 801/risc (Iliad chips) ... including entry/low-range and mid-range 370 engines ... there was a look at doing JIT for 370 to 801 (as part of 370 emulation).

more recently, some of the commercial 370 simulation offerings (running on intel and other processors) have implemented JIT.
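
a toy sketch of the JIT/dynamic-translation idea (the "bytecode", its opcodes and the translation into a python function are purely illustrative, not how any of the 370 simulators or JVMs actually work):

def interpret(prog, x):
    # interpreter: decode every opcode on every execution
    for op, arg in prog:
        if op == "add":
            x += arg
        elif op == "mul":
            x *= arg
    return x

def jit_compile(prog):
    # translate the program once into host code (here, a python function),
    # then run the translation natively on every subsequent call
    body = "".join(f"    x {'+' if op == 'add' else '*'}= {arg}\n" for op, arg in prog)
    ns = {}
    exec("def compiled(x):\n" + body + "    return x\n", ns)
    return ns["compiled"]

prog = [("add", 3), ("mul", 5), ("add", 1)]
fast = jit_compile(prog)
assert interpret(prog, 2) == fast(2) == 26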

past posts mentioning 801, risc, iliad, romp, rios, power, etc
https://www.garlic.com/~lynn/subtopic.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

Multiple Virtual Memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple Virtual Memory
Newsgroups: alt.folklore.computers
Date: Mon, 21 Mar 2011 13:03:33 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
from vmshare:
http://vm.marist.edu/~vmshare/browse.cgi?fn=VMWK:87A&ft=MEMO

from above:

The Asilomar VM Workshop of 1987 will be held February 23-27 at Asilomar State Park on Monterey Bay in California. Registration will be all day Monday, Feb. 23rd, with the sessions being held on the 24th through the 26th. The setup of the workshop will be as in the past. Dormitory rooms will be available.


re:
https://www.garlic.com/~lynn/2011e.html#25 Multiple Virtual Memory

this is a post about some of the HSDT performance issues
https://www.garlic.com/~lynn/2011.html#4 Is email dead? What do you think?

this is in response to email that both the VM Performance History and HSDT/NSFNET talks had been accepted ... suggesting that they might also be interested in a spool-file rewrite for HSDT RSCS operation.

Date: WED, 02/04/87 15:28:52 PST
From: wheeler
To: 87 VM workshop organizer

re: hsdt-sfs; I also have a talk on proto-type hsdt spool file system that I'll be giving at the ibm vmite two weeks later (week 3/10) ... but i'm not sure i could get clearance to give that talk in the time left.

proto-type hsdt-sfs is implemented in pascal/vs extended and operates in a virtual machine. data records are format compatible with the existing vm spool system but with several added bells and whistles. spool file checkpointing is totally eliminated (both the performance overhead of doing it and the start-up overhead ... even much faster than the announced hpo 5 support start-up enhancements) by adding a slight amount of data to each record which essentially makes each record self-describing and making sure that all i/o is performed in a consistent manner. all single points of failure are eliminated and data in the spool file area is recoverable if it can be read (virtual machine, cp, and/or hardware can have catastrophic failures at any point and hsdt-sfs system is recoverable) since data & control information is self-describing and written consistently. As such, the hsdt-sfs has a much higher reliability than the existing cp spool system.

Contiguous allocation and imbedding index blocks in the file are supported for increased performance via multi-block read/write i/o. there are no limitations on number of spool files, either in the system or per userid. in-core ssbloks (abbreviated sfbloks, approx. 50 bytes) are currently chained from a userid specific chain and a master system chain. The master system chain will shortly be replaced by a red/black tree. userid specific anchors are hung off a hash table and contain userid specific summary information such as total number of files and total number of 4k blocks allocated. Files &/or file information can be found either by userid hash and/or thru the red/black tree.

The abbreviated ssbloks require less virtual memory and the associated virtual pages can either reside in >16meg and/or be paged out.

An offshoot of the hsdt-sfs technology is a set of pascal/vs application programs that can read a pid cp spool checkpoint area & spool disks. One such application supports importing pid cp spool files into hsdt-sfs. Another application will simulate the cp SPTAPE DUMP command but with more function and better performance.


... snip ... top of post, old email index, HSDT email
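
purely as illustration of the "self-describing record" point in the above email ... a hypothetical sketch (in C) of the kind of per-block header that lets spool state be rebuilt by scanning the spool area instead of maintaining a checkpoint; the field names and layout here are invented, the email only says the records are format compatible with the existing vm spool system plus a slight amount of added data.

#include <stdint.h>

#define SPOOL_BLKSZ 4096

struct spool_rec_hdr {              /* hypothetical self-describing header    */
    char     userid[8];             /* owning virtual machine                 */
    uint32_t fileid;                /* which spool file this block belongs to */
    uint32_t seqno;                 /* position of this block within the file */
    uint16_t data_len;              /* payload bytes actually used            */
    uint16_t hdr_check;             /* consistency check over the header      */
};

/* recovery sketch: read every allocated 4k block, discard any block whose
   hdr_check doesn't verify, and re-chain the rest by (userid, fileid, seqno).
   anything that can be read can be placed -- so no separate checkpoint area
   (or its update overhead) is needed, and a catastrophic failure at any point
   loses at most blocks that were never successfully written. */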

one of the pathlength improvements in HSDT-SFS was over the native CP implementation, which had a sequential chain ... and for large systems could have 10K elements (the overhead issue is analogous to CP/67 kernel storage management before subpools). HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
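
a minimal sketch (in C) of the pathlength point ... per-userid anchors hung off a hash table make "find this user's files" cost proportional to that user's own files, rather than a scan of a single global chain with possibly 10K elements; the names and field layouts are invented for illustration, not the real sfblok/ssblok format.

#include <stdio.h>
#include <string.h>

#define NHASH 256

struct sfblok {                        /* abbreviated in-core file block     */
    struct sfblok *usernext;           /* chain of this user's files         */
    char           userid[8];
    unsigned       blocks4k;           /* 4k blocks allocated to the file    */
};

struct useranchor {                    /* per-userid anchor                  */
    struct useranchor *hashnext;
    char               userid[8];
    unsigned           nfiles;         /* per-user summary information       */
    unsigned           blocks4k;
    struct sfblok     *files;
};

static struct useranchor *hashtab[NHASH];

static unsigned hash_userid(const char id[8])
{
    unsigned h = 0;
    for (int i = 0; i < 8; i++)
        h = h * 31 + (unsigned char)id[i];
    return h % NHASH;
}

/* cost is proportional to hash-bucket length plus this user's own files,
   not to the total number of spool files in the system                  */
static struct useranchor *find_user(const char id[8])
{
    struct useranchor *a = hashtab[hash_userid(id)];
    while (a && memcmp(a->userid, id, 8) != 0)
        a = a->hashnext;
    return a;
}

int main(void)
{
    const char id[8] = "LYNN    ";     /* 8-char blank-padded userid         */
    printf("%s\n", find_user(id) ? "found" : "no spool files for user");
    return 0;
}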

trivia topic drift ... mainframe tcp/ip was also implemented in pascal/vs ... and while it had some thruput/performance issues ... I had done the changes to support RFC1044 and in some testing at Cray Research was able to get sustained channel speed thruput between 4341 and Cray ... using only a modest amount of the 4341 (possibly a 500 times improvement in instructions executed per byte moved) ... some past posts
https://www.garlic.com/~lynn/subnetwork.html#1044

--
virtualization experience starting Jan1968, online at home since Mar1970

vm370 running in "XA-mode"

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: vm370 running in "XA-mode"
Newsgroups: alt.folklore.computers
Date: Tue, 22 Mar 2011 07:06:29 -0400
from long ago and far away:

Date: FRI, 05/08/87 10:58:36 PDT
From: wheeler

re: 9370/endicott; there was apparently a number of things being reviewed for "killing" yesterday. Endicott had a project with several people to modify vm/370 to run on 9370s in xa-mode. That project was apparently killed yesterday.

There has been a project to run VM/370 in xa-mode for over 10 years that started out of POK. It was originally never intended to be a product ... so they checkpointed a copy of a 1975 version of vm/370 and went off and started modifying it w/o bothering to track anything happening in the official product. A version of that was finally announced in 83/84, primarily for doing mvs/xa testing (it would provide virtual machines ... but most of the rest of the vm product features developed over the past 12 years are missing from it).

Endicott had been planning on coming out with an xa-mode VM system for interactive cms environment (rather than crude mvs/xa testing) that was compatible with the existing product. Such a system would provide 31-bit virtual addressing and allow the 9370 to compete in more markets with VAX (which provides something like 28-bit virtual addressing ... i.e. around 256mbyte). They had the modifications up and running and were planning on announcing/shipping very shortly.

That got canceled. In part because Kingston (the organization that is now responsible for this other thing) were claiming that it would make them look bad. Endicott has had 6-10 people working on the compatible xa-mode vm system for the past 8 months; Kingston has had over 300 people working on this other thing for the past 2-3 years trying to eliminate all the inconsistencies between the vm 370 product and their xa-thing ... and (according to their plan) they still have a couple years to go before they are finished (although they periodically make new releases with the changes they have done to date).


... snip ... top of post, old email index

recent posting in (linkedin) Mainframe Experts group thread ("At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened") in response to statement that 9370 had been targeted as a "vax-killer"
https://www.garlic.com/~lynn/2011e.html#15

above mentions POK "dirty-tricks" dealing with (Endicott) 4341 competition for high-end 370s. other posts in the same thread:
https://www.garlic.com/~lynn/2011e.html#16
https://www.garlic.com/~lynn/2011e.html#17
https://www.garlic.com/~lynn/2011e.html#19

older email referencing enormous amount of resources that had been going into the "migration thing" (out of kingston, at the time in POK, also mentions that there was limited microcode space in 3081, so "SIE" instruction microcode has to be "paged"):
https://www.garlic.com/~lynn/2011b.html#email810210

old email referencing somebody at an internal datacenter having standard VM up&running production in "XA" mode ... Endicott later makes an offer to this person ... early basis for 9370 "XA"
https://www.garlic.com/~lynn/2011c.html#email860122
https://www.garlic.com/~lynn/2011c.html#email860123

trivia topic drift ... senior executive killing XA for 9370, later retires in oct91 ... a recent reference (in linkedin Greater IBM post):
https://www.garlic.com/~lynn/2011d.html#7 IBM Watson's Ancestors: A Look at Supercomputers of the Past

in the wake of the death of FS project ... misc. past posts
https://www.garlic.com/~lynn/submain.html#futuresys

there was a mad rush to get hardware & software items back into the 370 product pipeline. part of that was doing (Q&D) 303x in parallel with 370/xa. Early in the 370/xa plan, POK manages to convince corporate to kill the vm/370 product, shut down the vm/370 development group and transfer all the people to POK to support mvs/xa development (including a virtual machine mvs/xa development tool, never intended for product release). Endicott manages to save the vm370 product mission, but has to reconstitute a development group from scratch.

--
virtualization experience starting Jan1968, online at home since Mar1970

"Social Security Confronts IT Obsolescence"

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: "Social Security Confronts IT Obsolescence"
Newsgroups: bit.listserv.ibm-main
Date: 22 Mar 2011 05:04:39 -0700
R.Skorupka@BREMULTIBANK.COM.PL (R.S.) writes:
SSA? It's obsoleted by FC and SAS. Even IBM don't use SSA anymore. (sorry couldn't resist) ;-)

old post about 9333/Harrier (serial copper) turning into SSA. I had been working on getting them to turn it into FCS (fiber channel standard) compatibility (at 1/8th or 1/4th the 1gbit FCS standard, aka a switch with ports for serial fiber and serial copper) ... but instead they decided to do something that was non-interoperable with anything else.
https://www.garlic.com/~lynn/95.html#13 ssa, grump

of course some of the mainframe channel people then start showing up at FCS standards meetings and doing unnatural things to the standard to come up with FICON.

recent related posts in both (linkedin) "IBM Alumni" & "Greater IBM" groups ("IBM Watson's Ancestors: A Look at Supercomputers of the Past"):
https://www.garlic.com/~lynn/2011d.html#7
https://www.garlic.com/~lynn/2011d.html#24
https://www.garlic.com/~lynn/2011d.html#29
https://www.garlic.com/~lynn/2011d.html#40

for other drift ... recent thread in (linkedin) Mainframe group about Oracle on zLinux under zVM having bad thruput on "CKD" disks (before moving to non-CKD) ...
http://lnkd.in/ajGuA2

"CKD" should have been junked several decades ago. misc past posts on the subject:
https://www.garlic.com/~lynn/submain.html#dasd

(sorry couldn't resist) ;-)

--
virtualization experience starting Jan1968, online at home since Mar1970

SNA/VTAM Misinformation

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: SNA/VTAM Misinformation
Newsgroups: alt.folklore.computers
Date: Tue, 22 Mar 2011 08:31:52 -0400
SNA/VTAM misinformation rampant.

Early 1987, a communication group executive tripped across an analysis/comparison I did of 3725 performance the previous year ... and started sending me email that it was "invalid" (apparently because of dislike for the analysis) and that I had to stop distributing it.

I would respond with an ever growing copy list (including a large number in the communication group) that (at the time) I had shared the analysis with a large number of performance experts (in the communication group) and nobody had found any fault in the analysis.

After several such notes, I would just include the previous responses in a reply ... but further increase the distribution list.

past posts with pieces of that report/analysis
https://www.garlic.com/~lynn/99.html#67
https://www.garlic.com/~lynn/99.html#70

A unit of one of the baby bells had developed a SNA/NCP simulator that ran on Series/1 ... supported real networking within its infrastructure ... but spoofed NCP to host VTAM systems (claiming that the resources were cross-domain, owned by another VTAM host ... when they were actually "owned" by the network). The comparison showed that the Series/1 had enormous functionality, cost and thruput advantages compared to (real) 3725/NCP infrastructure.

HONE was the virtual machine based (originally cp67 but moved to vm370) online worldwide sales & marketing support system. "configurators" were implemented in APL from information provided by product groups ... providing salesmen the necessary information for creating a hardware order specifying all the necessary features (based on customer requirements, including thruput and performance). misc. past posts mentioning HONE (&/or APL)
https://www.garlic.com/~lynn/subtopic.html#hone

The Series/1 configuration was based on actual installations. The 3725/NCP configuration was based on always taking the best possible (configuration) numbers for 3725/NCP (although the large-scale internal CCDN infrastructure wasn't actually nearly as good as that).

Date: WED, 02/18/87 13:36:54 PST
From: wheeler

... attachment 08/03/86 11:10:18 from wheeler

re: hsdt022 3725 performance;

Initial 3725 configurations presented in HSDT022 were the minimum number of 3725s required to provide full interconnect of all terminals and all hosts while minimizing the number of points of failure in the 3725 configuration (especially since the hardware doesn't provide for any redundant/backup capability vis-a-vis the alternative). No information about internal 3725 performance was used.

Performance number comparisons were chosen using an apples-to-apples comparison taking into account only raw data transfer time over links in a configuration where the aggregate capacity of all the links in the two configurations was approximately equal. Again, no consideration was made about possible 3725 performance. Actual end-to-end response/performance in the 3725 system would include data transfer time between a 3278 head and a 3725, various internal 3725 queueing & processing delays, and host mainframe queueing & processing delays.

At the very end of HSDT022, a follow-up statement was made, that based on HONE configurator information, the 3725 internal performance wasn't adequate to actually support the average traffic required at 75% loading (or less ... allowing for normal interactive traffic peak-to-nominal load variability). Those last couple foils are the only ones where any 3725 internal performance characteristics are taken into account. The other configuration comparisons are made purely on the basis of availability, connectivity, and raw data transport information.

The last couple of foils on actual 3725 performance significantly distort the apples-to-apples comparison, which has the aggregate raw data transport in the 3725 and alternative configurations nearly the same. Increasing the number of 3725s by 47% also increases the number of inter-3725 lines by the same amount, which increases the aggregate inter-3725 link data transport capacity to nearly 44mbits.

Various bits & pieces of information in other HSDT files also look at other possible apples-to-oranges 3725 configuration variations. Given a week long tutorial, it would be possible to investigate a large number of possible 3725 configuration alternatives.

There are a couple of possible alternative directions which might possibly improve some of the end-to-end 3725 performance but usually also significantly increase the number of 3725s, failure points, and/or host mainframe availability dependencies. One possible direction for alternatives is adding 3725 inter-node concentrators for handling traffic between sites. Another possible direction for alternatives is for the local host to handle some of the intra-node traffic (since the local host SSCP is already a failure point anyway, could go ahead and route traffic thru the host and out over a 3088 to another host).

Looking at one of the inter-node 3725 concentrator variations (somewhat analogous to the function performed by HYPERchannel A715), based on CCDN performance information, it would appear to require four 3725s to perform the function of an A715 (i.e. managing all traffic over a T1 link, plus a multiplexor to split a T1 link into four channels). The number of 3725s increases significantly, as do internal 3725 processing delays, since inter-node traffic now requires handling by four 3725s (the one the 3724 is attached to, the local 3725 concentrator, the remote 3725 concentrator, and the remote host 3725). It also increases the points of failure both in terms of 3725s and required host mainframe SSCP availability.


... snip ... top of post, old email index, HSDT email

and ...

Date: THU, 02/19/87 08:53:34 PST
From: wheeler
To: original technical reviewer in communication group

re: fud; starting to pick-up, got a nastygram from xxxxxxx in yyyyyy saying everything in hsdt022 is wrong and that all distribution on the subject must be stopped. i sent him copies of interactions with yyyyyy people from last summer on the subject ... also told him that the hsdt documents are generally available to anybody that wanted a copy on the Raleigh NETWORK tools disk.

suggested that he might want to double check facts with his own people ... be interesting to see what the next step is.


... snip ... top of post, old email index, HSDT email

misc. past posts mentioning HSDT
https://www.garlic.com/~lynn/subnetwork.html#hsdt

recent references to sna/vtam misinformation
https://www.garlic.com/~lynn/2011.html#4 Is email dead? What do you think?
https://www.garlic.com/~lynn/2011b.html#65 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#12 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011d.html#4 Is email dead? What do you think?
https://www.garlic.com/~lynn/2011e.html#25 Multiple Virtual Memory

within a year or so of the above, a senior disk engineer got a talk scheduled at the annual, world-wide, internal communication group conference ... and opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The communication group had a strangle-hold on the datacenter and lots of data was starting to leak out to more distributed-computing-friendly platforms. The disk division had come up with a number of products to correct the situation ... but since the communication group "owned" strategy for everything that crossed datacenter walls ... they were always able to get them shut out.

misc. past posts mentioning communication group and terminal emulation
https://www.garlic.com/~lynn/subnetwork.html#terminal

--
virtualization experience starting Jan1968, online at home since Mar1970

The real cost of outsourcing (and offshoring)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 22 Mar, 2011
Subject: The real cost of outsourcing (and offshoring)
Blog: MainframeZone
re:
https://www.garlic.com/~lynn/2011c.html#56

... and:
http://lnkd.in/6Kefvg

The Pentagon Labyrinth: 10 Short Essays To Help You Through It (panel discussion on the book rebroadcast over the weekend on CSPAN)
http://www.booktv.org/Program/12319/The+Pentagon+Labyrinth+10+Short+Essays+to+Help+You+Through+It.aspx

Part of the theme is that the MICC seems to be working towards the economic collapse of the Pentagon.

An item pointed out was organizing defense programs so that every major voting district got a piece (to get as many congressmen locked into the programs as possible). One of the big downsides was bringing all the piecemeal parts together for assembly and they not fitting. Several programs were cited as especially bad ... but it was also highlighted as major problem with the Boeing 787 program.

Several of the Labyrinth authors are Boyd "acolytes" ... I had sponsored Boyd's briefings at IBM ... see references in "Boeing Plant 2 ... End of an Era"
http://lnkd.in/ku-thX

briefings on effective operation in competitive environment ... Boyd wiki:
https://en.wikipedia.org/wiki/John_Boyd_%28military_strategist%29

more detail background by one of the Labyrinth authors:

Why Boeing Is Imploding
http://chuckspinney.blogspot.com/2011/02/why-boeing-is-imploding.html

past posts mentioning "Boeing Plant 2":
https://www.garlic.com/~lynn/2011.html#73 Speed of Old Hard Disks - adcons
https://www.garlic.com/~lynn/2011b.html#7 Mainframe upgrade done with wire cutters?
https://www.garlic.com/~lynn/2011b.html#61 VM13025 ... zombie/hung users
https://www.garlic.com/~lynn/2011b.html#66 Boeing Plant 2 ... End of an Era
https://www.garlic.com/~lynn/2011b.html#69 Boeing Plant 2 ... End of an Era
https://www.garlic.com/~lynn/2011b.html#72 IBM Future System
https://www.garlic.com/~lynn/2011c.html#90 A History of VM Performance

--
virtualization experience starting Jan1968, online at home since Mar1970

SNA/VTAM Misinformation

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: SNA/VTAM Misinformation
Newsgroups: alt.folklore.computers
Date: Tue, 22 Mar 2011 12:25:36 -0400
Peter Flass <Peter_Flass@Yahoo.com> writes:
At one point in a previous life I was looking for something exactly like this. Obviously IBM wasn't interested in providing this functionality.

re:
https://www.garlic.com/~lynn/2011e.html#32 SNA/VTAM Misinformation

I had planned to release it as an IBM product ... with all the development and product release funding from a major customer (effectively free/gratis) ... who would totally recoup the cost in approx 9 months (assuming it was shipped/released/supported as a standard product) ... external funding as a work-around to internal politics ... followed by rapid transition from the Series/1 to an 801/RIOS platform.

As mentioned in previous posts ... the eventual internal politics can only be described as truth being more bizarre than any fiction (in part because I would be doing it w/o any corporate headcount or funding) ... much more bizarre than the referenced senior disk engineer's statement that the communication group would be responsible for the demise of the disk division (because of its stranglehold on the datacenter).

--
virtualization experience starting Jan1968, online at home since Mar1970

junking CKD; was "Social Security Confronts IT Obsolescence"

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: junking CKD; was "Social Security Confronts IT Obsolescence"
Newsgroups: bit.listserv.ibm-main
Date: 22 Mar 2011 10:19:35 -0700
BillF@MAINSTAR.COM (Bill Fairchild) writes:
It was. ECKD was announced in the early- to mid-80s, which was 25+ years ago, which is several decades ago. Not all users respond quickly.

re:
https://www.garlic.com/~lynn/2011e.html#31 "Social Security Confronts IT Obsolescence"

reference was to "CKD" disks didn't change ... controllers added some extra stuff (eckd) ... originally for "Calypso" ... the 3880 controller speed-matching buffer ... allowing 3380/3880 3mbyte connenction to (370/168 2880) 1.5mbyte channels ... which had enormous problems (that wouldn't/don't exist w/FBA).

old email discussing calypso (eckd) and how bad the problems were (several severity ones in the field):
https://www.garlic.com/~lynn/2007e.html#email820907b

above also mentions the dismal prognosis of ever getting MVS to support FBA (I've periodically mentioned in the past being told that even if I provided MVS with fully integrated & tested FBA support, I still needed a $26M business case to cover education and pubs ... and I couldn't use lifecycle savings ... only incremental new sales).

past posts with references to calypso/eckd:
https://www.garlic.com/~lynn/2004o.html#7 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2007e.html#40 FBA rant
https://www.garlic.com/~lynn/2007f.html#0 FBA rant
https://www.garlic.com/~lynn/2008q.html#40 TOPS-10
https://www.garlic.com/~lynn/2009k.html#44 Z/VM support for FBA devices was Re: z/OS support of HMC's 3270 emulation?
https://www.garlic.com/~lynn/2009p.html#11 Secret Service plans IT reboot
https://www.garlic.com/~lynn/2010e.html#36 What was old is new again (water chilled)
https://www.garlic.com/~lynn/2010h.html#30 45 years of Mainframe
https://www.garlic.com/~lynn/2010n.html#14 Mainframe Slang terms

--
virtualization experience starting Jan1968, online at home since Mar1970

On Protectionism

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: On Protectionism
Newsgroups: comp.arch, sci.econ
Date: Tue, 22 Mar 2011 14:34:58 -0400
"Ken Hagan" <K.Hagan@thermoteknix.com> writes:
Umm, that wasn't "just before" they imploded the system. That was "how".

on the regulatory side ... lots of regulations were eliminated and/or not being enforced; ... some Enron tidbits somewhat kicking it off:

"Mr" did bank modernization act ... which included repeal of Glass-Steagall. Then when the head of CFTC proposed regulating commodities, "Mrs" was appointed replacement. Then "Mr" did commodities modernization act prohibiting commodity regulation (billed as loophole/favor for Enron, but also played significant role all during the past decade) ... at which time, "Mrs" resigns and joins Enron board & member of the audit committee.

"Mr" bank & commodity modernization
http://content.time.com/time/specials/packages/article/0,28804,1877351_1877350_1877330,00.html
commodity regulation proposal and "Mrs" replaces chairperson
http://www.bloomberg.com/apps/news?pid=20601109&refer=home&sid=aYJZOB_gZi0I
Enron "loophole" and then "Mrs" resigns and joins Enron board
http://www.nytimes.com/2008/11/17/business/17grammside.html
"Mr" & "Mrs" Enron "favor"
https://web.archive.org/web/20080711114839/http://www.villagevoice.com/2002-01-15/news/phil-gramm-s-enron-favor/

GAO appears to believe that the SEC was not doing anything and starts doing reports on the uptick in fraudulent public company financial filings (even after SOX). part of what they reported (nominally under sox, the sec supposedly would be sending the executives to jail)
https://www.gao.gov/products/gao-06-1079sp

the motivation for fraudulent filings was to boost executive bonuses ... but apparently even in the case of later revised filings, bonuses weren't reclaimed/adjusted. recent quote from off the web: Enron was a dry run and it worked so well it has become institutionalized

SOX also supposedly has SEC doing something about the rating agencies, but nothing exists except a report
http://www.sec.gov/news/studies/credratingreport0103.pdf

Oct2008 Congressional hearings into the rating agencies had testimony (including from rating agency employees) that both the sellers and the rating agencies knew the toxic CDOs weren't worth triple-A ratings ... but the sellers (unregulated loan originators) were able to buy triple-A ratings on toxic CDOs (securitized loans & mortgages) anyway. Being able to unload every loan at triple-A eliminated any reason for unregulated loan originators to care about borrowers' qualifications or loan quality (triple-A also provided a nearly unlimited source of funds for these unregulated loan originators w/o the need to be a bank using deposits as a source of funds).

Real estate speculators were able to use these mortgages (no-down, no-documentation, 1% interest only payment) like the Brokers' Loans that were the root of '29 stock market crash (from 30s Congressional "Pecora" hearings) ... possibly 2000% ROI in areas of the country with 20-30% real estate inflation (with speculation further fueling the inflation).

On the backside of the estimated $27T in triple-A rated toxic CDO transactions done during the period
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
https://www.bloomberg.com/news/articles/2008-10-27/evil-wall-street-exports-boomed-with-fools-born-to-buy-debt

were all the people getting fees, commissions and bonuses off the transactions. a huge amount of the $27T was getting warehoused off-balance by the unregulated investment banking arms of too-big-to-fail (nominally regulated depository) institutions, courtesy of the repeal of Glass-Steagall. At the end of 2008, the estimate was that just the four largest too-big-to-fail institutions were carrying $5.2T in triple-A rated toxic CDOs off-balance.
Bank's Hidden Junk Menaces $1 Trillion Purge
http://www.bloomberg.com/apps/news?pid=newsarchive&sid=akv_p6LBNIdw&refer=home

eventually the institutions would have to account for the toxic assets, but until the bubble bursts, the individuals are raking in huge amounts; there are even reports of individuals selling & buying each others' toxic assets ... churning their off-balance portfolios to further boost their compensation. The NY state comptroller reports that wall street bonuses spiked over 400% during the bubble (aka a big spike from the $27T in triple-A rated toxic CDO transactions). Since then there have been lots of attempts to keep compensation from returning to pre-bubble levels.
http://www.businessweek.com/stories/2008-03-19/the-feds-too-easy-on-wall-streetbusinessweek-business-news-stock-market-and-financial-advice

There are reports that the industry tripled in size (as percent of GDP) during the bubble (easily explained by the $27T in triple-A rated toxic CDO transactions).

The situation has (at least) two parts ... the real-estate speculation bubble and crash ... analogous to the '29 stock market crash ... with the crash having all sorts of collateral damage around the country. The other part ... where lots of the remedial attention has been focused, is the too-big-to-fail institutions (not just in the US) that were carrying enormous amounts of the triple-A rated toxic CDOs off-balance.

--
virtualization experience starting Jan1968, online at home since Mar1970

junking CKD; was "Social Security Confronts IT Obsolescence"

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: junking CKD; was "Social Security Confronts IT Obsolescence"
Newsgroups: bit.listserv.ibm-main
Date: Tue, 22 Mar 2011 15:13:07 -0400
BillF@MAINSTAR.COM (Bill Fairchild) writes:
I signed in to LinkedIn and was unable to find the reference, so thanks for including the URL for the missing LinkedIn reference. I read through that reference and saw 16 comments, only one of which mentioned very bad throughput for CKD disks. No technical explanation was given for the bad throughput; i.e., was it hardware limitations in CKD, software limitations in Oracle, etc.? This comment was posted 23 days ago, ca. 28 years after IBM first announced ECKD and forward-thinking users began planning to junk their CKD by going to ECKD.

re:
https://www.garlic.com/~lynn/2011e.html#31 "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2011e.html#35 junking CKD; was "Social Security Confronts IT Obsolescence"

I'm still waiting for followup ... it says zVM, zLinux and Oracle ... so presumably it is relatively current hardware, processors, disks, software, etc. The reference is that the bad performance was rectified by moving off CKD to some flavor of FBA (presumably some recent flavor of ECKD, lots of users may qualify CKD/FBA ... but I can understand lots of current users not bothering to make the CKD/ECKD distinction; I can't imagine any existing "z" mainframe with "real" pre-ECKD disks)

They don't mention it as an age or legacy issue ... the zVM & zLinux aren't that old. There always is some possibility that zVM, zLinux, and/or current Oracle never bothered to optimize their (e)ckd support as well as they have FBA.

pure conjecture ... a possible motivation for not bothering with fine-tuning any (e)ckd support is that these days all (e)ckd devices are really some form of FBA device with an additional eckd simulation layer on top. Given native FBA device support ... going directly to the native FBA device eliminates an extraneous eckd simulation layer (aka for any eckd device, an equivalent native FBA device could be used w/o the additional, unnecessary eckd layer).

the ckd/eckd simulation layer continues to live on because MVS (& its descendants) have been unable to support the native devices.

misc. past posts mentioning fba, ckd, multi-track search, etc
https://www.garlic.com/~lynn/submain.html#dasd

--
virtualization experience starting Jan1968, online at home since Mar1970

On Protectionism

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: On Protectionism
Newsgroups: comp.arch, sci.econ
Date: Tue, 22 Mar 2011 17:46:49 -0400
EricP <ThatWouldBeTelling@thevillage.com> writes:
Also because derivatives were totally unmonitored, because Greenspan, and others, _directly_ kiboshed the attempt to do so in 1998 by Brooksley Born head of the Commodity Futures Trading Commission (see Frontline documentary 'The Warning') AIG was allowed to sell waaaaay more Credit Defaults Swaps than they could ever cover.

re:
https://www.garlic.com/~lynn/2011e.html#36 On Protectionism

see URLs in the previous Mr&Mrs scenario ... where Mrs replaces Born pending Mr passing legislation to preclude regulation (as a favor to Enron, but it also opens the way for AIG; he had previously passed bank modernization including repeal of Glass-Steagall) ... once the legislation is in place, Mrs resigns and joins the Enron board and Enron audit committee.

"Mr" bank & commodity modernization
http://content.time.com/time/specials/packages/article/0,28804,1877351_1877350_1877330,00.html
commodity regulation proposal and "Mrs" replaces chairperson
http://www.bloomberg.com/apps/news?pid=20601109&refer=home&sid=aYJZOB_gZi0I
Enron "loophole" and then "Mrs" resigns and joins Enron board
http://www.nytimes.com/2008/11/17/business/17grammside.html
"Mr" & "Mrs" Enron "favor"
https://web.archive.org/web/20080711114839/http://www.villagevoice.com/2002-01-15/news/phil-gramm-s-enron-favor/

--
virtualization experience starting Jan1968, online at home since Mar1970

Back to architecture: Analyzing NYSE data

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Back to architecture: Analyzing NYSE data
Newsgroups: comp.arch
Date: Tue, 22 Mar 2011 18:03:22 -0400
Jason Riedy <jason@lovesgoodfood.com> writes:
That reminds me of an architecture-related issue: Analyzing the mass of NYSE data both for controls and auditing.

I've seen reported that the NYSE has about 8PB of data warehoused and streams in about 1.5TB a day:
https://lwn.net/Articles/361508/
Their congressional testimony after the "flash crash" flat out admitted that they can't handle analyzing their own data volume. The trading triggers that didn't help that day all are based on simple thresholds.

A QPI or HT link pushes around 2PB/day to/from memory... Now even regulators need to be parallel computing experts. Oh, goody.

I think the NYSE has a CFP out for systems to handle their data. I'm not sure what is in that CFP, but some relevant vendors have appeared uninterested. They want to monitor their streaming data as it flows... *cough*


re:
https://www.garlic.com/~lynn/2011e.html#36 On Protectionism
https://www.garlic.com/~lynn/2011e.html#38 On Protectionism

I had done some work in the x9a10 financial working group on trusted payment transactions ... there was a member from NSCC. I was then asked in to do something similar for trading transactions at NSCC ... however after doing some amount of work, it was suspended with comments that a side-effect of the trade integrity work ... would significantly increase transparency and visibility (which apparently is anathema to trading culture)

this is not long before NSCC merged with DTC to form DTCC. DTCC wiki
https://en.wikipedia.org/wiki/Depository_Trust_%26_Clearing_Corporation

references attempts to access DTCC transaction records in order to show illegal naked short selling
https://en.wikipedia.org/wiki/Depository_Trust_%26_Clearing_Corporation#Controversy_over_naked_short_selling

old cramer interview where he references that illegal naked short selling is wide-spread ... but nobody worries because the SEC can't/won't do anything:
http://nypost.com/2007/03/20/cramer-reveals-a-bit-too-much/

--
virtualization experience starting Jan1968, online at home since Mar1970

On Protectionism

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: On Protectionism
Newsgroups: comp.arch, sci.econ
Date: Tue, 22 Mar 2011 18:27:09 -0400
EricP <ThatWouldBeTelling@thevillage.com> writes:
Mr & Mrs, but plenty of blame to go around.

FRONTLINE: THE WARNING - Part 1 of 4
http://www.youtube.com/watch?v=bXUQZP4mmwU&playnext=1&list=PL3657B19420D198C1

Eric


re:
https://www.garlic.com/~lynn/2011e.html#36 On Protectionism
https://www.garlic.com/~lynn/2011e.html#38 On Protectionism

the full Time article/reference lists 25; "Mr." (URL) is only no. 2
http://content.time.com/time/specials/packages/article/0,28804,1877351_1877350_1877330,00.html

no. 1 was the head of a loan origination company

there is also wharton business school article ... originally unrestricted, now requires registration ... but also at the wayback machine:
https://web.archive.org/web/20080606084328/http://knowledge.wharton.upenn.edu/article.cfm?articleid=1933

it estimates in aggregate something like 1000 executives ... in all the different areas/aspects that went on during the bubble ... and if the gov. could figure out some way to remove them ... it would go a long way to correcting the situation.

part of the issue is that the various financial operations spend an enormous amount of money to sway regulators and legislators.

--
virtualization experience starting Jan1968, online at home since Mar1970

On Protectionism

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: On Protectionism
Newsgroups: comp.arch, sci.econ
Date: Tue, 22 Mar 2011 19:45:48 -0400
re:
https://www.garlic.com/~lynn/2011e.html#36 On Protectionism
https://www.garlic.com/~lynn/2011e.html#38 On Protectionism
https://www.garlic.com/~lynn/2011e.html#40 On Protectionism

one of the players "acquires" citi ... gets special exemption to violate Glass-Steagall ... and then gets Glass-Steagall repealed

"the wall street fix"
http://www.pbs.org/wgbh/pages/frontline/shows/wallstreet/

bank modernization initially passes with a simple majority, but apparently on a rumor that the president is going to veto, a little more grease gets it veto-proof with 90 senators (with such a large majority, the president doesn't bother to veto)
https://en.wikipedia.org/wiki/Gramm%E2%80%93Leach%E2%80%93Bliley_Act

now the rhetoric on the floor was that the primary purpose of the bill was "if you are already a bank, you get to remain a bank, if you aren't already a bank, you don't get to become a bank" (referring to "bank" as a regulated depository institution ... and in the rhetoric, walmart and microsoft are mentioned as examples).

and then
http://news.muckety.com/2008/03/12/spitzer-falls-farther-and-faster-than-his-targets/1121

from above:
Sanford Weill, who had built Citigroup into a global financial titan, but whose final months as chief executive officer were overshadowed by Spitzer's probe into the relationships between equity research analysts and investment bankers during the internet boom years. Under a 2002 settlement with Wall Street banks, Citigroup paid a $400 million fine, and Weill was forbidden to communicate directly with his company's equity research analysts.

... snip ...

this is long-winded old post from early 99 (before GLBA):
https://www.garlic.com/~lynn/aepay3.htm#riskm

above mentions that in 1989, Citi realizes its ARM (adjustable rate mortgage) portfolio could take down the institution; it unloads the portfolio, gets out of the business and needs a (private) bailout to continue operating.

now roll forward to this century ... the triple-A rated toxic CDOs are fundamentally (mostly) an ARM portfolio (toxic CDOs had been used during the S&L crisis to obfuscate the underlying values ... but they hadn't yet learned about being able to "buy" triple-A ratings).

come the end of 2008, and citi is the too-big-to-fail institution (of the four largest) with the largest percentage of the $5.2T in triple-A rated toxic CDOs (being held off-balance) ... and requires another bailout to continue in business (apparently the institutional knowledge from 1989 had evaporated).

TARP is supposed to handle the bailout by buying toxic assets ... but the amount appropriated would barely dent the problem ... so they look for other means of using the funds ... while the federal reserve starts buying up the off-balance toxic assets at 98 cents on the dollar (which had been going for 22 cents on the dollar).

Problem is that some of the troubled institutions are investment banks and not eligible for federal reserve help ... they are then given (regulated) bank charters ... so they can also get federal reserve help. However, this should have been precluded by the GLBA legislation.

GLBA does a couple other things ... including "opt-out" consumer privacy (the consumer has to go on record as not wanting their personal information shared). At the time, Cal. was in the process of passing "opt-in" consumer privacy (information can only be shared when specifically authorized by the consumer; GLBA is federal "pre-emption"). Middle of last decade, there was an annual privacy conference in wash dc. One of the sessions was a panel discussion with the FTC commissioners. During the session, someone gets up and asks them if they are going to do anything about "opt-out" consumer privacy (the person says that he works on the major call-center operations and "knows" that none of the 1-800 "opt-out" lines are given any method for recording who is calling).

--
virtualization experience starting Jan1968, online at home since Mar1970

Multiple Virtual Memory

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple Virtual Memory
Newsgroups: alt.folklore.computers
Date: Tue, 22 Mar 2011 23:14:17 -0400
re:
https://www.garlic.com/~lynn/2011e.html#29 Multiple Virtual Memory

for some IPO RSCS "humor":

Date: 24 June 1987, 13:00:18 PDT
Subject: ipo timers

The IPO RSCS timer code uses fields in the task control block to control the timer, so you only get one per task. I guess you'll have to set up your own timer queue (simplified since it only has two elements) within the line driver to get more than one timer.

I was talking to xxxxx about the IPO timer code a while ago and he thought that the implementation violates a basic RSCS design principle. The VM development group decided to use a similar approach in 1975, in spite of attempts by xxxxx and yyyyy to persuade them otherwise. xxxxx and yyyyy felt strongly enough about it that they responded by abandoning the product version of RSCS as their VNET development base. Later, when VNET was released as the RSCS Networking program, the bad timer code in the SCP RSCS was stabilized out of existence. xxxxx suspects that the motivation for messing up the RSCS design by putting in the IPO timer code was the same as what makes kids decorate wet cement.


... snip ... top of post, old email index, HSDT email

The IPO RSCS FDX driver had a special y-connector cable to take 56kbit full-duplex into two separate ports/addresses ... one dedicated for read and one dedicated for write.

I needed to use the FDX driver for T1/1.5mbit/sec (and faster) full-duplex ... both terrestrial and satellite. To handle satellite delay ... I needed to do rate-based pacing (as opposed to the "window" pacing paradigm) ... using timer services in a way that was consistent with the original rscs/vnet implementors.

Later, I was on the XTP technical advisory board ... past posts mentioning XTP (we also took it to ANSI x3s3.3 trying for HSP standardization)
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

and I did the following write-up/specification for xtp rate-based pacing
https://www.garlic.com/~lynn/xtprate.html

XTP wiki
https://en.wikipedia.org/wiki/Xpress_Transport_Protocol

above claims that XTP does not employ congestion avoidance ... but I managed congestion avoidance with (adaptive) rate-based pacing.

I've periodically claimed that tcp/ip slow-start and sliding window for congestion control ... came about because many of the platforms from the period lacked timer facilities adequate for implementing rate-based operation. At approximately the same time slow-start was presented at an IETF meeting, ACM SIGCOMM had a paper on how slow-start & windowing were non-stable in a large multi-hop network. One "failure" mode was that returning ACKs (in the window paradigm) tended toward "batching" ... opening up a large number of windows, resulting in transmitting multiple back-to-back packets (aggravating congestion).

One of the characteristics is that XTP can do a reliable transmit in a minimum 3-packet exchange ... while TCP requires a minimum 7-packet exchange (VMTP was in-between at a minimum 5-packet exchange).
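
a minimal sketch (in C) of adaptive rate-based pacing, assuming a sender with a reasonably fine-grained interval timer ... illustrative only, not the actual rscs/vnet, HSDT, or XTP code; instead of opening windows as ACKs return, the sender spaces packets by a computed inter-packet interval and adjusts that interval from congestion feedback.

#include <stdio.h>

struct pacer {
    double interval_us;          /* current gap enforced between packets          */
    double min_us, max_us;       /* floor = link serialization, ceiling = give-up */
};

/* called once per feedback event (e.g. selective ack / explicit rate report) */
static void pacer_adjust(struct pacer *p, int congested)
{
    if (congested)
        p->interval_us *= 2.0;             /* back off: halve the send rate   */
    else
        p->interval_us -= p->min_us / 8;   /* probe: creep the rate back up   */
    if (p->interval_us < p->min_us) p->interval_us = p->min_us;
    if (p->interval_us > p->max_us) p->interval_us = p->max_us;
}

int main(void)
{
    /* e.g. 1000-byte packets on a T1 (~1.544mbit/sec) serialize in ~5180us,
       so that's the floor; the sender never clocks packets out faster       */
    struct pacer p = { 5180.0, 5180.0, 500000.0 };

    for (int event = 0; event < 5; event++) {
        pacer_adjust(&p, event == 2 /* pretend this feedback said congested */);
        printf("event %d: send one packet every %.0f us\n", event, p.interval_us);
    }
    return 0;
}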

--
virtualization experience starting Jan1968, online at home since Mar1970

junking CKD; was "Social Security Confronts IT Obsolescence"

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: junking CKD; was "Social Security Confronts IT Obsolescence"
Newsgroups: bit.listserv.ibm-main
Date: Wed, 23 Mar 2011 00:04:11 -0400
ps2os2@YAHOO.COM (Ed Gould) writes:
Lyn:

I bow to your expertize and have not read your paper on the 3725.

My sort of, well, let's say home grown experience with trial and error (sigh, a lot of error). Gut instinct said the limiting factor was the byte channel (which I understood was the vast majority of channel hook ups for the box).


re:
https://www.garlic.com/~lynn/2011e.html#31 "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2011e.html#35 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2011e.html#37 junking CKD; was "Social Security Confronts IT Obsolescence"

you must have strayed from the archived ckd/eckd posts into archived sna/vtam misinformation thread (in a.f.c. newsgroup):
https://www.garlic.com/~lynn/2011e.html#32 SNA/VTAM Misinformation
https://www.garlic.com/~lynn/2011e.html#34 SNA/VTAM Misinformation

the above also references more in (linkedin) "Greater IBM" group thread.

note that in the above ... the series/1 NCP emulation involved a channel interface board that attached to the mainframe on the same exact channel as a 3725 and appeared to the mainframe exactly as a 3725 (the sleight of hand was that it told all the mainframes that the resources were cross-domain ... "owned" by somebody else). Since the interface board and channel appeared identical ... any related limitation was identical for the 3725 and the series/1.

slight topic drift ... long ago and far away ... there was an internal effort to convince the communication group to use "peachtree" (the processor for the series/1) as being significantly more capable than the processor chosen for the 37x5.

additional topic drift ... even longer ago, as an undergraduate in the 60s ... i added tty/ascii terminal support to cp67. cp67 had 1052 & 2741 support with fancy automatic terminal recognition ... fancy use of the 2702 SAD command to re-associate a different line-scanner with a port. I then integrated TTY/ASCII ... supporting automatic terminal identification (and re-associating a different line-scanner with the 2702 SAD command). It worked fine for leased lines ... but i wanted to do a single dial-up phone number (& hunt group) for all dial-up terminals ... where it broke. While the 2702 allowed changing the line-scanner on a port ... the 2702 took a shortcut and hardwired the oscillator/line-speed on each port. This was somewhat the motivation for the univ. to start a clone controller project, reverse engineering the mainframe channel interface, building a mainframe channel board for an interdata/3 and programming the interdata/3 to simulate the 2702 ... but also doing automatic line-speed operation. four of us get written up as being responsible for (some part of) the clone controller business ... misc. past posts
https://www.garlic.com/~lynn/submain.html#360pcm

a decade or so ago, I was in a large datacenter with a many-generation descendent of that box handling a large percentage of the dial-up POS cardswipe terminals in the country ... the claim was that the channel interface board hadn't changed ... although it was a many-times descendent of the interdata/3 (including a name change when perkin/elmer bought interdata).

--
virtualization experience starting Jan1968, online at home since Mar1970

junking CKD; was "Social Security Confronts IT Obsolescence"

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: junking CKD; was "Social Security Confronts IT Obsolescence"
Newsgroups: bit.listserv.ibm-main
Date: 23 Mar 2011 06:50:37 -0700
John.McKown@HEALTHMARKETS.COM (McKown, John) writes:
Perhaps "not financially viable". "We" complain about how much z/OS costs right now. Imagine the howls of rage if IBM were to increase the cost of z/OS by 10% (to pick a round, random, number) and say that it was to allow z/OS to use FBA devices. PDSes are a integral part of z/OS (like it or not). Many people still dislike PDSEs. PDSs can't exist without CKD. So to go "pure" FBA (to remove the dependency on ECKD) would require a huge investment. Now, to add FBA support for access methods which are inherently FBA compatible (VSAM et al.) would likely be easier.

as I've mentioned a number of times before ... long ago and far away the group told me that even if I provided them fully integrated and tested FBA support ... I still needed a $26M business case to cover training and documentation ... and I could only use incremental new sales in the business case (say $200M-$300M additional disk sales) ... and wasn't able to use life-cycle cost savings (that were enormous ... both for the company as well as customers ... totally dwarfing everything else). misc. past posts mentioning ckd, fba, multi-track search, etc
https://www.garlic.com/~lynn/submain.html#dasd

recent "MVM" thread ... from the definition in IBM Jargon as the original name for MVS, there was an enormous simulation layer added going from MVT to OS/VS2 ... basically, initially CCWTRANS was imported from CP67 (virtual machine vm370 percusor on 360/67) into EXCP processing which had to scan the passed channel program and build a duplicate with real addresses for execution.
https://www.garlic.com/~lynn/2011d.html#71 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#72 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory
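
a conceptual sketch (in C) of what that CCWTRANS-style translation does ... the structures and the translate routine are simplified stand-ins (real CP67/EXCP handling of data chaining, IDALs, TICs, and page fixing is far more involved):

#include <stdio.h>
#include <stdint.h>

struct ccw {                         /* simplified CCW for illustration       */
    uint8_t  op;
    uint32_t dataaddr;               /* virtual address in the passed program */
    uint8_t  flags;
    uint16_t count;
};
#define CCW_CC 0x40                  /* command-chaining flag                 */

/* stand-in for the real virtual->real translation plus page fixing */
static uint32_t translate_and_fix(uint32_t vaddr)
{
    return vaddr + 0x100000;         /* pretend guest storage starts at 1MB   */
}

/* scan the passed (virtual) channel program and build a shadow copy with
   real addresses for the channel to execute; returns number of CCWs copied */
static int ccwtrans(const struct ccw *vprog, struct ccw *shadow, int max)
{
    int n = 0;
    do {
        shadow[n] = vprog[n];
        shadow[n].dataaddr = translate_and_fix(vprog[n].dataaddr);
        n++;
    } while (n < max && (vprog[n-1].flags & CCW_CC));  /* follow chaining     */
    return n;
}

int main(void)
{
    struct ccw prog[2] = {
        { 0x02, 0x001000, CCW_CC, 80 },   /* read 80 bytes, chained           */
        { 0x02, 0x002000, 0,      80 },   /* read 80 bytes, last in program   */
    };
    struct ccw shadow[2];
    int n = ccwtrans(prog, shadow, 2);
    printf("translated %d ccws; first real addr %06x\n", n, (unsigned)shadow[0].dataaddr);
    return 0;
}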

Now since all the "real" CKD disks have been FBA for a long time, then for decades, there has been a fairly large simulation layer (in the controller) that takes channel programs and perform emulated CKD function. There is roughly equivalent in various of the 370 simulators that run on intel & other platforms, with their own software layer simulating CKD function on FBA devices.

So a possible transition phase (decades ago) would have been to enhance the official access methods to support native FBA ... and then include a multi-track search emulation layer in the EXCP channel program translation ... doing the same exact function currently performed in the lower layers, since *ALL* disks have been native *FBA* for some time (somebody has to be doing all that simulation). That would go a long way toward weaning the dependency off multi-track search.
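
a sketch (in C) of the sort of thing such an emulation layer has to do anyway (today, down in the controller) and that an EXCP-level layer could have done instead ... map cyl/head to a fixed-size emulated "track image" of fba blocks, and turn a multi-track SEARCH KEY EQUAL into a software scan of record headers; the geometry, sizes, and header layout below are illustrative only, not any actual 3380/controller internals.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define TRACKS_PER_CYL   15          /* illustrative geometry, not a real device */
#define TRACK_IMAGE_SIZE 65536       /* fixed-size emulated track image          */
#define FBA_BLKSIZE      512
#define BLOCKS_PER_TRACK (TRACK_IMAGE_SIZE / FBA_BLKSIZE)

/* first fba block number of the emulated track at cylinder cc, head hh */
static uint32_t track_to_fba(uint32_t cc, uint32_t hh)
{
    return (cc * TRACKS_PER_CYL + hh) * BLOCKS_PER_TRACK;
}

struct ckd_count {                   /* simplified record "count" header      */
    uint8_t  keylen;                 /* 0 == no key                           */
    uint16_t datalen;                /* 0 == end-of-track marker              */
};

/* scan one in-memory track image for a record whose key matches; a
   multi-track search just repeats this over successive track images */
static int search_key_equal(const uint8_t *trk, const uint8_t *key, int keylen)
{
    size_t off = 0;
    while (off + sizeof(struct ckd_count) <= TRACK_IMAGE_SIZE) {
        struct ckd_count c;
        memcpy(&c, trk + off, sizeof c);
        if (c.datalen == 0)
            break;                   /* end of track                          */
        if (c.keylen == keylen && memcmp(trk + off + sizeof c, key, keylen) == 0)
            return (int)off;         /* found: offset of the record           */
        off += sizeof c + c.keylen + c.datalen;
    }
    return -1;                       /* not found on this track               */
}

int main(void)
{
    static uint8_t trk[TRACK_IMAGE_SIZE];   /* all zero == empty track        */
    const uint8_t key[4] = { 'D', 'A', 'T', 'A' };
    printf("track (0,1) starts at fba block %u\n", (unsigned)track_to_fba(0, 1));
    printf("search on empty track: %d\n", search_key_equal(trk, key, 4));
    return 0;
}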

--
virtualization experience starting Jan1968, online at home since Mar1970

junking CKD; was "Social Security Confronts IT Obsolescence"

Refed: **, - **, - **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: junking CKD; was "Social Security Confronts IT Obsolescence"
Newsgroups: bit.listserv.ibm-main
Date: 23 Mar 2011 10:26:04 -0700
re:
https://www.garlic.com/~lynn/2011e.html#31 "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2011e.html#35 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2011e.html#37 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2011e.html#43 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2011e.html#44 junking CKD; was "Social Security Confronts IT Obsolescence"

there is also some possibility that the opposition to my providing FBA support was simply the POK favorite son operating system campaigning against me.

after transferring to the west coast, they let me wander around and get into trouble. In the disk development and test labs (bldgs. 14 & 15 on the san jose plant site) ... they were doing stand-alone, dedicated-time, around-the-clock, 7x24 scheduled testing. They mentioned that they had tried MVS ... hoping to do multiple concurrent testing in an operating system environment ... but even with just a single "testcell" (development device), MVS had a 15min MTBF.

I offered to redo IOS to make it absolutely bullet proof & never fail ... providing them with multiple concurrent, "on-demand" testing (significantly increasing development productivity since they could now test anytime they needed w/o having to wait for scheduled, dedicated time). I did an internal report on many of the items, which happened to make passing reference to the MVS 15min MTBF.

I was then called by somebody from the POK favorite son operating system ... and foolish me, I thot it was going to be about getting all the enhancements incorporated, but they were bringing down their forces on my head ... wanting to know who my manager was ... and trying to make sure I never mentioned anything about them again (preferably even no longer being an employee).

Ferguson & Morris 1993 book describes that in the wake of FS failure, the corporate culture had been replaced with sycophancy and make no waves ... or in Boyd terms having to make a career choice between To Be or To Do ... from dedication of Boyd Hall at Air Force Weapons School, 17Sep1999 ... reference
https://www.garlic.com/~lynn/2000e.html#35

This is discussed recently in (linkedin) former/current IBM group ... in IBM Jargon definition of "fast track" (sub)thread:
https://www.garlic.com/~lynn/2011d.html#12 I actually miss working at IBM
https://www.garlic.com/~lynn/2011d.html#13 I actually miss working at IBM
https://www.garlic.com/~lynn/2011d.html#15 I actually miss working at IBM
https://www.garlic.com/~lynn/2011d.html#16 I actually miss working at IBM
https://www.garlic.com/~lynn/2011d.html#78 I actually miss working at IBM
https://www.garlic.com/~lynn/2011e.html#7 I actually miss working at IBM
https://www.garlic.com/~lynn/2011e.html#9 I actually miss working at IBM

since they were already doing their worst ... it didn't matter; later, related to 3380 ship (had been announced Jun80) ... I sent email about a standard collection of error tests (to be expected at customers) ... MVS was failing in all cases & in 2/3rds of the cases, there was no indication of what forced the re-ipl ... old email:
https://www.garlic.com/~lynn/2007.html#email801015

misc. past posts getting to play disk engineer in bldgs 14&15 (which still exist at the plant site, although many others have been plowed under)
https://www.garlic.com/~lynn/subtopic.html#disk

... footnote ... I had sponsored Boyd's briefings at IBM ... misc. past posts
https://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

junking CKD; was "Social Security Confronts IT Obsolescence"

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: junking CKD; was "Social Security Confronts IT Obsolescence"
Newsgroups: bit.listserv.ibm-main
Date: 23 Mar 2011 12:08:47 -0700
BillF@MAINSTAR.COM (Bill Fairchild) writes:
It was. ECKD was announced in the early- to mid-80s, which was 25+ years ago, which is several decades ago. Not all users respond quickly.

as expected ... they were using "eckd" ... and just not bothering to fully qualify ... since the transition occurred so many decades ago ... it possibly appeared superfluous at this point to make the distinction (unless in a legacy discussion specifically about the differences).

past posts in thread:
https://www.garlic.com/~lynn/2011e.html#31 "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2011e.html#35 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2011e.html#37 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2011e.html#43 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2011e.html#44 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2011e.html#45 junking CKD; was "Social Security Confronts IT Obsolescence"

--
virtualization experience starting Jan1968, online at home since Mar1970

junking CKD; was "Social Security Confronts IT Obsolescence"

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: junking CKD; was "Social Security Confronts IT Obsolescence"
Newsgroups: bit.listserv.ibm-main
Date: 23 Mar 2011 19:36:13 -0700
John Chase wrote:
Actually, it's "favorite son" operating system; as in "most favored" or "takes precedence over all others" or "gets all the attention". Might also be an oblique reference to the dead "Future System" that was to be the "be all and end all" of operating systems.

re:
https://www.garlic.com/~lynn/2011e.html#45 junking CKD; was "Social Security Confronts IT Obsolescence"

in the wake of the FS demise ... there was a mad rush to get stuff back into the 370 hardware & software product pipelines (there are claims that the distraction of FS allowed clone processors to gain a market foothold) ... doing 303x (3031 was a 158, 3032 was a 168, 3033 started out as 168 wiring/layout with faster chips) in parallel with 370/xa ... some discussion of FS, 303x, and 3081
http://www.jfsowa.com/computer/memo125.htm

POK managed to convince corporate to kill vm370, shut down the vm370 development group and move all the people to POK to support mvs/xa development (otherwise mvs/xa supposedly wouldn't be able to meet its ship schedule). Endicott managed to save the vm370 product mission ... but had to reconstitute a development group from scratch.

The shutdown strategy for the vm370 product group was to not inform them until the very last possible minute ... minimizing the number of people that might find something else. The information was leaked ... resulting in a witch hunt to find the person responsible (extremely paranoid atmosphere in the bldg. during that period). There was a joke that the head of POK was a major contributor to vax/vms ... because so many of the development group went to work on vms.

the MVM upthread historical reference:
https://www.garlic.com/~lynn/2011d.html#73

has os/vs2 release 1 (SVS) plus a delta for os/vs2 release 2 (MVS) on a "glide path" to os/vs2 release 3 (FS). Also mentioned: Simpson (from hasp, aka jes2) did RASP ... basically paged-mapped MFT. He then left and redid RASP from scratch (in a clean room) at Amdahl.

as an aside ... one of the "nails" in the FS coffin was that if ACP (TPF) were run on an FS machine built out of the fastest circuits then available (370/195 technology), it would have the throughput of a 370/145 ... 1/20 to 1/30 the thruput of eastern's acp/SystemOne (on 370/195).

misc. past posts mentioning ckd, fba, multi-track search
https://www.garlic.com/~lynn/submain.html#dasd

misc. past posts mentioning FS
https://www.garlic.com/~lynn/submain.html#futuresys

another FS reference:
https://people.computing.clemson.edu/~mark/fs.html

--
virtualization experience starting Jan1968, online at home since Mar1970

On Protectionism

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: On Protectionism
Newsgroups: comp.arch
Date: Thu, 24 Mar 2011 10:13:17 -0400
kenney writes:
I am not sure what this has to do with computer architecture. However there was and is nothing stopping institutions applying for a retail banking licence in the UK. The whole crisis is more complicated than just focussing on the US indicates. Northern Rock had no exposure to US sub prime but failed due to the inter bank market seizing up, Royal Bank of Scotland was mainly brought down by the cost of the ABM Ambro takeover and Lloyds by taking over HBOS which did have US exposure though commercial property had the most losses.

re:
https://www.garlic.com/~lynn/2011e.html#41 On Protectionism

there are all sorts of collateral damage from a bubble & crash. Another freeze ... in the US there was a point where the bond market seized up ... when a large percent of general investors realized that the rating agencies were "selling" (triple-A) ratings (on toxic CDOs) ... creating doubt whether any ratings could be trusted. Buffett eventually stepped in and started offering insurance to unfreeze the market.

Warren Buffett to the Rescue, Credit Crisis Creates Opportunities
http://www.marketoracle.co.uk/Article3723.html

also, as mentioned in the previously referenced estimate of $27T in triple-A toxic CDO transactions during the bubble, they weren't just being bought up by US institutions:
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
https://www.bloomberg.com/news/articles/2008-10-27/evil-wall-street-exports-boomed-with-fools-born-to-buy-debt

from above:
The bundling of consumer loans and home mortgages into packages of securities -- a process known as securitization -- was the biggest U.S. export business of the 21st century. More than $27 trillion of these securities have been sold since 2001,

... snip ...

In the late 90s, we were asked to look at the integrity of the information in securitized loan documentation (since toxic CDOs with compromised information had been used during the S&L crisis for fraud). Part of this was the possibility of leveraging a security chip to help with the information integrity issue ... akin to the reference about being asked to improve the integrity of trading transactions (by NSCC) ... mentioned here
https://www.garlic.com/~lynn/2011e.html#39 Back to architecture: Analysing NYSE data

All this went for naught when unregulated loan originators could just eliminate documentation and instead pay the rating agencies for triple-A ratings ... the integrity of supporting documentation is no longer an issue once a way is found to eliminate the supporting documentation altogether ... by just paying the rating agencies for the rating they wanted ... part of the whole no-down, no-documentation mortgage phenomena ... documentation being one of the things that just slowed down how fast they could manufacture the toxic CDOs (and lots of vested interests with a huge appetite for churning triple-A rated toxic CDO transactions at the highest rate possible).

a decade ago, I gave presentations on attack/exploit & countermeasures for payments as part of a one-week program for the Lloyd's principals that dealt in retail store fraud insurance. part of it was how crooks could attack/exploit all the anti-fraud/anti-theft measures.

The chip (I had done) could be used in payments ... so I also presented the countermeasures I had used to address various kinds of vulnerabilities that could be found in existing/other solutions. I got to have lunch with the outgoing lloyd's chairman (a position that rotates between the lloyd's syndicates). post in comp.arch from a month ago mentioning the chip
https://www.garlic.com/~lynn/2011c.html#58 RISCversus CISC

more recent post about doing walkthru of certified security chip fab in dresden ... checking process:
https://www.garlic.com/~lynn/2011e.html#24 The first personal computer (PC)

As mentioned, leading up to GLBA (and supposedly the main purpose of GLBA), both Walmart and M'soft had been making noises about getting into financial services ... and there were lots of special interests that were doing everything they could to oppose that (there is enormous profit margin in the current infrastructure and there was real concern that Walmart/m'soft competition might significantly lower that profit margin).

In the past decade, Walmart tried to do something of an end-run by buying an existing ILC charter (which didn't come under federal regulation but allowed national operation). Walmart does something like 25-30% of retail POS transactions in the US. Their (one of the four large too-big-to-fail) "merchant acquirer" gets the interchange fee from those POS electronic payment transactions. Walmart's stated purpose was that the ILC charter was solely to become its own "merchant acquirer" (effectively eliminating that part of its cost of doing business). However, the large-bank operations rallied the community banking infrastructure to lobby congress to block the loophole (with FUD that somehow Walmart becoming its own "merchant acquirer", eliminating the interchange fee going to one of the largest too-big-to-fail institutions, would result in putting all the community banks in the country out of business).
http://www.forbes.com/facesinthenews/2006/04/10/walmart-banking-utah-cx_po_0410autofacescan03.html
http://www.consumeraffairs.com/news04/2006/07/congress_walmart_bank.html

I visited Bentonville a couple of times to discuss how to reduce fraud/theft/costs at POS & the cost of doing business (using the chip I had designed).

A decade ago, there were a variety of products being marketed for "secure internet payment transactions" (including mine). The business was quite skewed, so not a lot of merchants had to be contacted to reach 80% of all transactions ... and there was high acceptance for the products. Merchants had been conditioned for decades that a big component of the "interchange fee" was related to fraud statistics ... and the merchants were expecting a big decrease in their interchange fees (internet/online was modeled after MOTO, which has the highest fraud rate & interchange fees) from high-integrity transactions reducing fraud. Then came a cognitive dissonance moment when the merchants were told that the banks had decided the "secure internet" interchange fees would effectively be a surcharge on top of the highest fee ... and the whole thing collapsed (after decades of indoctrination that the interchange fee is proportional to fraud, they rebelled at being told the rules had just been reversed). Part of the issue is that for European institutions, less than 10% of their bottom line came from payment transactions ... while for large US institutions it was more like 40% to 60%. An order of magnitude reduction in those fees (from highest fraud rate to lowest fraud rate) would have a big bottom line hit (in the US) ... there are lots of past discussions of how to leverage fraud rates for significant profits (secure transactions eliminate the majority of the fraud, effectively commoditizing the payment infrastructure and reducing the barrier to entry for lots of competition).

--
virtualization experience starting Jan1968, online at home since Mar1970

On Protectionism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: On Protectionism
Newsgroups: comp.arch
Date: Thu, 24 Mar 2011 10:47:29 -0400
re:
https://www.garlic.com/~lynn/2011e.html#48 On Protectionism

one of m'soft's financial services forays was bill payment (which didn't need a bank charter). I had done the design & scaling for a national bill consolidation & payment platform. m'soft bought into the project and "moved" the effort to the windows NT platform. At some point their group agreed that windows NT wouldn't scale for the load, and I got tagged to give the presentation to the m'soft ceo (that a different platform needed to be used; they didn't even want to be in the room when I did the presentation). Shortly before I was scheduled to give the presentation, one of their executives made the strategic decision that the rollout of the service would be staged to just what their server platform could support (with the load increased as the scaling of their server platform improved).

--
virtualization experience starting Jan1968, online at home since Mar1970

junking CKD

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: junking CKD
Newsgroups: alt.folklore.computers
Date: Thu, 24 Mar 2011 13:55:29 -0400
historical/hysterical folklore discussion in bit.listserv.ibm-main about junking CKD:
http://groups.google.com/group/bit.listserv.ibm-main/browse_thread/thread/c5fb88afc2b05a63#

it had started from comment I had made in this thread:
https://www.garlic.com/~lynn/2011e.html#31 Social Security Confronts IT Obsolescence

part of the reply was to somebody's comment that SSA also stood for an IBM serial disk architecture ... much earlier reference
https://www.garlic.com/~lynn/95.html#13 ssa, grump

other of my (archived) posts in the above thread:
https://www.garlic.com/~lynn/2011e.html#35
https://www.garlic.com/~lynn/2011e.html#37
https://www.garlic.com/~lynn/2011e.html#43
https://www.garlic.com/~lynn/2011e.html#44
https://www.garlic.com/~lynn/2011e.html#45
https://www.garlic.com/~lynn/2011e.html#46
https://www.garlic.com/~lynn/2011e.html#47

--
virtualization experience starting Jan1968, online at home since Mar1970

On Protectionism

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: On Protectionism
Newsgroups: comp.arch, sci.econ
Date: Thu, 24 Mar 2011 14:37:10 -0400
Robert Myers <rbmyersusa@gmail.com> writes:
I expect that the history of World War II reads rather differently in Russian history texts, and I suspect that they are closer to having it right.

I was at a conference a couple months ago that had a presentation from a group that got copies of Russian archives (military, state, political) from their Afghan war period and has been poring through the records (the group includes some number of former Russians) ... a whole lot of stuff that was eventually learned ... that we apparently have yet to learn.

my wife's father commanded an engineering combat group in ww2 ... and towards the end was out in front of other units (repairing damage to roads, bridges, etc) ... I've been to the national archives and made copies of his status reports from the period ... recent post w/extract from one
https://www.garlic.com/~lynn/2011d.html#37 The first personal computer

declassification tag that had to be on all copies (nara declassification):
https://www.garlic.com/~lynn/dectag.jpg

at the end, frequently being the ranking officer ... he acquired a collection of officer daggers as part of surrenders ... piece of an old snapshot of the german dagger board (nearly all of the ww2 stuff was stolen a few yrs ago):
https://www.garlic.com/~lynn/daggers2.jpg

he was also involved in liberating some camps ... some speculation that this contributed to his not wanting a field command in germany after hostilities. he was then posted to nanking as an adviser (MAGIC) (and got to take his family).

there is all sorts of stuff in current US history texts that reads differently. in fact, my wife's father was awarded a set of history books at west point for some distinction. they are from a lecture series in the 1880s. They also read differently than current texts.

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM100 - Rise of the Internet

Refed: **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 25 Mar, 2011
Subject: IBM100 - Rise of the Internet
Blog: Greater IBM
re:
https://www.garlic.com/~lynn/2011d.html#65 IBM100 - Rise of the Internet
https://www.garlic.com/~lynn/2011d.html#66 IBM100 - Rise of the Internet
https://www.garlic.com/~lynn/2011e.html#6 IBM100 - Rise of the Internet

Other internet trivia ... old 9-net email
https://www.garlic.com/~lynn/2006j.html#email881216

for other topic drift ... past posts mentioning interop '88
https://www.garlic.com/~lynn/subnetwork.html#interop88

Note with regard to the 9-net justification ... the internal network had been larger than the arpanet/internet from just about the beginning until late 85 or early 86. One of the big changes in the arpanet/internet after the great cut-over to tcp/ip on 1jan83 was starting to see workstations and PCs as network nodes ... while the internal network maintained the severe "terminal emulation" restriction (i.e. only hosts were network nodes, and PCs and workstations would interact via terminal emulation ... as opposed to as full peer nodes).

This restriction (both inside and at customers) contributed to a senior disk engineer in the late 80s getting a talk scheduled at the internal, world-wide, annual communication group conference; he opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division (because of the stranglehold that the communication group had on datacenters).
https://www.garlic.com/~lynn/subnetwork.html#emulation

--
virtualization experience starting Jan1968, online at home since Mar1970

You almost NEVER see these for sale, own a 360 console

Refed: **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 25 Mar, 2011
Subject: You almost NEVER see these for sale, own a 360 console
Blog: IBM Historic Computing
a fairly detailed 360/30 site:
http://www.ljw.me.uk/ibm360/

some time ago, I was contacted by somebody who was collecting the different varieties/kinds of 360 logo bars that were at the top of machine consoles ... I'm trying to track down that reference.

the 360/30 was my 2nd machine to program on. I got a student job to re-implement 1401 MPIO. The univ. had a 709 with a 1401 front-end that did tape->printer/punch and card reader->tape (the tapes were carried between the 709 & 1401, with the 709 doing tape->tape).

On the path to replacing the 709/1401 with a 360/67, the 1401 was temporarily replaced with a 360/30. The 360/30 had 1401 hardware emulation ... so there wasn't really any need to rewrite MPIO for the 360 ... but it was part of the transition learning. I got to design and implement my own multi-tasking monitor, device drivers, interrupt handlers, error recovery, storage management, etc. It eventually was a tray of cards (slightly over 2000; it didn't quite fit in a box anymore).

The univ. shut down the datacenter from 8am sat. to 8am mon ... so I could have the machine room for 48hrs straight with the 360/30 as my personal computer (it was sometimes hard to make monday morning classes after having not slept for 48hrs).

...

ah ... found it:
http://www.ibm-collectables.com/

having to do with style of IBM logo
http://www.ibm-collectables.com/IBMlogo13.html

and the different kinds:
http://ibmcollectibles.com/IBMlogo.html

I vaguely remember some folklore regarding a dept. in armonk that was the official keeper of the logo and something about there being a slight slant.

--
virtualization experience starting Jan1968, online at home since Mar1970

Downloading PoOps?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Downloading PoOps?
Newsgroups: bit.listserv.ibm-main
Date: Fri, 25 Mar 2011 10:32:47 -0400
tuco@CIO.SC.GOV (Bonno, Tuco) writes:
Graduate, College of Conflict Management; University of SouthEast Asia; "I partied on the Ho Chi Minh Trail - tiến lên !! "

Friday PoOps trivia ... it was one of the first mainstream IBM pubs to move to cp67/cms script. The motivation was that PoOps was a subset of the internal architecture "redbook" (named for the red 3-ring binder it was distributed in). With a cms script command line option, you could print either the full architecture "redbook" ... or just the subset PoOps sections.
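
purely as an illustration of that single-source idea, a hedged sketch in C (the ".arch" tag, the "--full" flag, and the filenames in the comment are invented here; the real mechanism was a cms script command line option selecting conditional sections, not a C filter): lines in a master file marked as architecture-only get emitted only when the "full" switch is given, everything else is the PoOps subset.

#include <stdio.h>
#include <string.h>

/* hypothetical usage:  ./select --full < master.txt > redbook.txt
 *                      ./select        < master.txt > poops.txt   */
int main(int argc, char **argv)
{
    int full = (argc > 1 && strcmp(argv[1], "--full") == 0);
    char line[1024];

    while (fgets(line, sizeof line, stdin)) {
        if (strncmp(line, ".arch ", 6) == 0) {
            if (full)
                fputs(line + 6, stdout);   /* architecture-redbook-only text */
        } else {
            fputs(line, stdout);           /* PoOps subset text: always emitted */
        }
    }
    return 0;
}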

other Friday trivia
https://www.garlic.com/~lynn/2011b.html#7

$2.5B "windfall" for IBM (something over $17B in today's dollars) ... would have significantly helped to cover the reported $1b spent on the (failed) Future system effort.
https://www.garlic.com/~lynn/submain.html#futuresys

I had sponsored Boyd's briefings at IBM ... and some of his biographies mention him doing a stint in command of "spook base" and IBM's $2.5B windfall. Longer item on "Boeing Plant 2" referencing helping with BCS and IBM mainframes (only a couple hundred million in the renton datacenter) about the time Boyd was commanding "spook base"
https://www.garlic.com/~lynn/2010q.html#59

old item with lots of detail about spook base (including the operation having the largest bldg in the region) ... gone 404 ... but lives on at the wayback machine:
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html

above ("Other High Technology Assets") mentions 1130/2250s, 360/50, 360/65s & 2305s and cost(?) $1billion a year to operate.

2250M1s were direct mainframe channel attach ... as an undergraduate at the univ., I had written a driver to interface the cp67/cms editor to the 2250M1 on the 360/67. The "2250M4" was the 1130/2250 combination (2250M1 & 2250M4 were approx. the same price).

The 2301 was a fixed-head drum. It was similar to the 2303 fixed-head drum ... but transferred data over four heads in parallel ... getting over a mbyte/sec transfer rate ... and was frequently found as a paging device on 360/67s (but had only 4mbyte capacity). Later 2305s (fixed-head disk) with 12mbyte capacity were common on 370s. If NKP had 2305s, they would have been some of the earliest.

Boyd would relate how he frequently told everybody that it wouldn't work (in part because other things had similar signatures).

other refs:
https://en.wikipedia.org/wiki/Nakhon_Phanom_Royal_Thai_Navy_Base
http://aircommandoman.tripod.com/

other refs to Boyd
https://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

junking CKD; was "Social Security Confronts IT Obsolescence"

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: junking CKD; was "Social Security Confronts IT Obsolescence"
Newsgroups: bit.listserv.ibm-main
Date: Fri, 25 Mar 2011 16:08:30 -0400
shmuel+ibm-main@PATRIOT.NET (Shmuel Metz , Seymour J.) writes:
Parenthetical note. The Hebrew word 'efes means "nothing"; when I first heard for Future Systems, I thought the acronym FS to be extremely humorous; I didn't realize that it was also prophetic.

from long ago and far away, IBM Jargon:
FS - n. Future System. A synonym for dreams that didn't come true. That project will be another FS. Note that FS is also the abbreviation for functionally stabilized, and, in Hebrew, means zero, or nothing. Also known as False Start, etc.

... snip ...

I have a random signature setting that I periodically turn on ... it randomly selects an entry from one of three randomly selected files ... IBMJARGON, 6670 sayings (a file of quotations; we had modified the 6670 print driver to include a random selection for output on the separator page), and zippy the pinhead.
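
for what it's worth, a minimal sketch (in C, purely illustrative; the filenames are just stand-ins for the three files mentioned above, and this is not the actual mechanism) of "pick one of the files at random, then pick a random entry from it" using a single-pass reservoir selection:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void)
{
    const char *files[3] = { "IBMJARGON", "SAYINGS6670", "ZIPPY" };
    char line[512], pick[512] = "";
    long seen = 0;

    srand((unsigned)time(NULL));
    const char *chosen = files[rand() % 3];        /* random file */

    FILE *f = fopen(chosen, "r");
    if (!f) { perror(chosen); return 1; }

    /* reservoir selection: after n lines, each line has roughly a 1/n
     * chance of being the one kept */
    while (fgets(line, sizeof line, f))
        if (rand() % ++seen == 0)
            strcpy(pick, line);
    fclose(f);

    printf("-- random signature (from %s):\n%s", chosen, pick);
    return 0;
}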

--
virtualization experience starting Jan1968, online at home since Mar1970

In your opinon, what is the highest risk of financial fraud for a corporation ?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 25 Mar, 2011
Subject: In your opinon, what is the highest risk of financial fraud for a corporation ?
Blog: Financial Crime Risk, Fraud and Security
posted recently by somebody: "Enron was a dry run and it worked so well it has become institutionalized" ... of course MCI/WORLDCOM was right up there also. Supposedly SOX was passed to prevent similar events in the future. However, apparently because GAO didn't think SEC was doing anything, it started doing its own reviews of public company financial filings and reported a significant uptick in filings that were fraudulent (or possibly just major audit errors) ... things that SOX was billed as preventing, even to the point of executives going to jail. The motivation was boosting executive compensation, and even if correct financials were later refiled, executive compensation wasn't corrected.

The person who tried for a decade to get SEC to do something about Madoff testified in congressional hearings that tips turn up 13 times more fraud than audits, and that SEC didn't have a "TIP" hotline ... but did have a 1-800 number for corporations to complain about audits. There have been comments that the only really effective part of SOX was the part about informants (but again, that does require an organization that is willing to investigate and prosecute).

There have been a trivial number of large corporate fines for the fraudulent public company financial filings ... but without the executive compensation (in some cases hundreds of millions) being touched ... or anybody going to jail ... i.e. still not touching the motivation behind the fraudulent filings (aka there was lots of publicity about how SOX was going to fix all this stuff and send executives to jail ... and nothing appears to have happened).

--
virtualization experience starting Jan1968, online at home since Mar1970

SNA/VTAM Misinformation

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: SNA/VTAM Misinformation
Newsgroups: alt.folklore.computers
Date: Sat, 26 Mar 2011 00:05:24 -0400
re:
https://www.garlic.com/~lynn/2011e.html#32 SNA/VTAM Misinformation
https://www.garlic.com/~lynn/2011e.html#34 SNA/VTAM Misinformation

from IBM Jargon
PROFS - profs n. Professional Office System. A menu-based system that provides support for office personnel such as White House staff, using IBM mainframes. Acclaimed for its diary mechanisms, and accepted as one way to introduce computers to those who don't know any better. Not acclaimed for its flexibility. PROFS featured in the international news in 1987, and revealed a subtle class distinction within the ranks of the Republican Administration in the USA. It seems that Hall, the secretary interviewed at length during the Iran-Contra hearings, called certain shredded documents PROFS notes as do IBMers who use the system. However, North, MacFarlane, and other professional staff used the term PROF notes. v. To send a piece of electronic mail, using PROFS. PROFS me a one-liner on that. A PROFS one-liner has up to one line of content, and from seven to seventeen lines of boiler plate. VNET

... snip ...

part of justification to convert the internal network to SNA
https://www.garlic.com/~lynn/2011.html#email870306

was telling executives that PROFS was a VTAM application ... old email reference:
https://www.garlic.com/~lynn/2006x.html#email870302

misc. past posts mentioning the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

Now the PROFS group wasn't w/o their own shortcomings. For the email client code ... they used a very early version of VMSG. Later, when the VMSG author offered the PROFS group a much enhanced version, the group denied that they were using VMSG and tried to get the author fired. Things subsided a little when the VMSG author showed that his initials were carried in a non-displayed field in every PROFS note.

past posts mentioning VMSG & PROFS:
https://www.garlic.com/~lynn/2000c.html#46 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2001k.html#35 Newbie TOPS-10 7.03 question
https://www.garlic.com/~lynn/2001k.html#39 Newbie TOPS-10 7.03 question
https://www.garlic.com/~lynn/2001k.html#40 Newbie TOPS-10 7.03 question
https://www.garlic.com/~lynn/2002h.html#58 history of CMS
https://www.garlic.com/~lynn/2002p.html#34 VSE (Was: Re: Refusal to change was Re: LE and COBOL)
https://www.garlic.com/~lynn/2004p.html#13 Mainframe Virus ????
https://www.garlic.com/~lynn/2005t.html#43 FULIST
https://www.garlic.com/~lynn/2006n.html#23 sorting was: The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2007f.html#13 Why is switch to DSL so traumatic?
https://www.garlic.com/~lynn/2007v.html#54 An old fashioned Christmas
https://www.garlic.com/~lynn/2007v.html#55 An old fashioned Christmas
https://www.garlic.com/~lynn/2008k.html#59 Happy 20th Birthday, AS/400
https://www.garlic.com/~lynn/2009q.html#64 spool file tag data
https://www.garlic.com/~lynn/2010.html#1 DEC-10 SOS Editor Intra-Line Editing
https://www.garlic.com/~lynn/2010b.html#44 sysout using machine control instead of ANSI control
https://www.garlic.com/~lynn/2010d.html#61 LPARs: More or Less?
https://www.garlic.com/~lynn/2011b.html#67 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011b.html#83 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#81 A History of VM Performance
https://www.garlic.com/~lynn/2011c.html#82 A History of VM Performance

--
virtualization experience starting Jan1968, online at home since Mar1970

Collection of APL documents

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 26 Mar, 2011
Subject: Collection of APL documents
Blog: IBM Historic Computing
I have a copy of the Aug 1968 APL\360: User's Manual ... part of the 1st page is reproduced here
https://www.garlic.com/~lynn/2001.html#2

possibly the largest APL service ever was the internal, world-wide sales&marketing online HONE system (HONE was also the largest "cloud" service in the 70s & 80s).

HONE had started out after the 23jun69 unbundling announcement, which included starting to charge for application software & SE time. Several cp/67 datacenters were put in with branch office access, to give SEs hands-on experience with operating systems in virtual machines.

The cambridge science center had also done a port of APL\360 to CMS for cms\apl. As part of the port, the APL\360 storage management had to be redone to operate in a "large" virtual memory (the APL\360 storage management for "small" 16kbyte to 32kbyte swapped workspaces resulted in severe thrashing with large, demand-paged virtual memory workspaces).

There was also an APL API added that allowed access to CMS system services. Cambridge allowed external access to their cp/67 system, and besides students & staff of various Cambridge-area univs, Armonk business development people loaded the most valuable corporate asset (detailed customer information) on the Cambridge system and developed business models in APL. Cambridge took a lot of heat for the system services API because it "violated" APL purity (the API, along with the large virtual memory workspaces, enabled lots of "real-world" applications ... later the system services API was replaced with "shared variables").

There started to be a lot of APL-based sales&marketing support applications being deployed on HONE. Eventually the sales&marketing support applications came to dominate (and the SE virtual machine activity withered away). In the early 70s, HONE clones were starting to be installed at several locations around the world. In the mid-70s, the US HONE datacenters (by this time moved to the vm370 platform) were consolidated in Silicon Valley (in a bldg. next door to the current Facebook bldg, although it has a different occupant now). In the late 70s, HONE VM370 was extended with single-system-image cluster support ... multiple large multiprocessors in a large shared disk farm, a loosely-coupled configuration supporting load-balancing and fall-over. After a cal. earthquake in the early 80s, the US HONE operation was replicated first in Dallas and then at a 3rd site in Boulder ... with load-balancing and fall-over between the three locations.

Along the way, the palo alto science center (across the back parking lot from consolidated HONE) morphed cms\apl into apl\cms for vm370. The palo alto science center also did the 370/145 APL microcode assist (APL apps on a 145 with the microcode frequently ran as fast as APL on a 370/168 w/o the assist). They also did the IBM 5100, which ran a subset of apl\360 with some 360 emulation.

misc. past posts mentioning HONE &/or APL
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

The first personal computer (PC)

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The first personal computer (PC)
Newsgroups: alt.folklore.computers, alt.usage.english
Date: Sat, 26 Mar 2011 10:06:48 -0400
Leslie Danks <leslie.danks@aon.at> writes:
I remember Caldera as the first Linux distro I ever acquired - on a CD-ROM provided with the book. I think I bought it in a small bookshop on the way to Damascus, but my memory might be playing tricks on me.

part of the whole SCO stuff ... with the name confusion you needed a score card to tell who was who
https://en.wikipedia.org/wiki/SCO_Group#Caldera_Systems
and
https://en.wikipedia.org/wiki/SCO_Group
old sco:
https://en.wikipedia.org/wiki/Santa_Cruz_Operation

part of the legal stuff
https://en.wikipedia.org/wiki/SCO_v._Novell
and
https://en.wikipedia.org/wiki/Novell

note that in the early time-frame of the above ... the san jose disk division had a fileserver project called "DataHub". Part of the implementation was being done by a small group in Provo under a work-for-hire contract (one of the people in san jose was commuting to Provo nearly every week). At some point the company decided to kill the project ... and the operation in Provo was allowed to retain rights to all the work they had done.

misc. past posts mentioning datahub
https://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party
https://www.garlic.com/~lynn/2000g.html#40 No more innovation? Get serious
https://www.garlic.com/~lynn/2002f.html#19 When will IBM buy Sun?
https://www.garlic.com/~lynn/2002g.html#79 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002o.html#33 Over-the-shoulder effect
https://www.garlic.com/~lynn/2003e.html#26 MP cost effectiveness
https://www.garlic.com/~lynn/2003f.html#13 Alpha performance, why?
https://www.garlic.com/~lynn/2004f.html#16 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2005p.html#23 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005q.html#9 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005q.html#36 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2006l.html#39 Token-ring vs Ethernet - 10 years later
https://www.garlic.com/~lynn/2006y.html#31 "The Elements of Programming Style"
https://www.garlic.com/~lynn/2007f.html#17 Is computer history taught now?
https://www.garlic.com/~lynn/2007j.html#49 How difficult would it be for a SYSPROG ?
https://www.garlic.com/~lynn/2007n.html#21 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM
https://www.garlic.com/~lynn/2007n.html#86 The Unexpected Fact about the First Computer Programmer
https://www.garlic.com/~lynn/2007p.html#35 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/2007v.html#53 folklore indeed
https://www.garlic.com/~lynn/2008e.html#8 MAINFRAME Training with IBM Certification and JOB GUARANTEE
https://www.garlic.com/~lynn/2008p.html#36 Making tea
https://www.garlic.com/~lynn/2008r.html#68 New machine code
https://www.garlic.com/~lynn/2009e.html#58 When did "client server" become part of the language?
https://www.garlic.com/~lynn/2010.html#15 Happy DEC-10 Day
https://www.garlic.com/~lynn/2011b.html#3 Rare Apple I computer sells for $216,000 in London

--
virtualization experience starting Jan1968, online at home since Mar1970

In your opinon, what is the highest risk of financial fraud for a corporation ?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 26 Mar, 2011
Subject: In your opinon, what is the highest risk of financial fraud for a corporation ?
Blog: Financial Crime Risk, Fraud and Security
re:
https://www.garlic.com/~lynn/2011e.html#56 In your opinon, what is the highest risk of financial fraud for a corporation ?

SOX also supposedly had SEC doing something about the rating agencies ... which played a pivotal role in the current economic mess.

oct2008 congressional hearings had testimony that unregulated loan originators were packaging up (securitizing into CDOs) loans and mortgages and paying the rating agencies for triple-A ratings (when both the sellers and the rating agencies knew they weren't worth triple-A).

securitized mortgages had been used during the S&L crisis, with doctored documents to obfuscate underlying value, for fraud. in the late 90s, we were asked to look at improving the integrity of the supporting documentation (trusted timestamps, trusted signatures, etc). however, with being able to pay for triple-A ratings, supporting documentation was no longer needed (eliminating the issue of supporting documentation integrity), contributing to the rise in the "no-down, no-documentation" adjustable rate mortgages done during the period.

reference to $27T
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
https://www.bloomberg.com/news/articles/2008-10-27/evil-wall-street-exports-boomed-with-fools-born-to-buy-debt

from above:
The bundling of consumer loans and home mortgages into packages of securities -- a process known as securitization -- was the biggest U.S. export business of the 21st century. More than $27 trillion of these securities have been sold since 2001,

... snip ...

There was a point when a growing number of investors were becoming aware that the rating agencies were selling triple-A ratings on toxic CDOs ... which called into question whether any ratings could be trusted ... and the bond market froze up. Buffett then stepped in to provide "insurance" on muni-bonds to unfreeze the muni-bond market:
http://www.marketoracle.co.uk/Article3723.html

--
virtualization experience starting Jan1968, online at home since Mar1970

End of an era

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: End of an era
Newsgroups: alt.folklore.computers
Date: Sat, 26 Mar 2011 11:28:33 -0400
jmfbahciv <See.above@aol.com> writes:
the one I'm reading about the Shah is written by a guy who is making him appear pathetic; this makes no sense.

Scheuer (former head of the cia bin laden desk)
https://www.amazon.com/Michael-Scheuer/e/B001IGLXVK

recent book on bin laden
https://www.amazon.com/Osama-Bin-Laden-ebook/dp/B004JU1WJK

goes into some amount of detail about lots of disparaging bin laden ... various dirty tricks to discredit him (from parties with all sorts of vested interest). Scheuer's point was that tends to result in seriously underestimating the opposition.

the kindle version has the book itself as only 49% of the content ... much of the rest is supporting references.

another very recent publication that shows a totally different aspect ... but is consistent with scheuer's theme ... is "The Wrong War" (by a former Marine who served in Vietnam):
https://www.amazon.com/Wrong-War-Enhanced-ebook/dp/B003YL4M6A

note that all the audio/video in the Kindle edition doesn't actually work on a real kindle.

some of "The Wrong War" issues ... are also in Labyrinth (pdf file is free to download)
https://www.garlic.com/~lynn/2011e.html#18 End of an era

several of the Labyrinth authors are Boyd "acolytes"
https://www.garlic.com/~lynn/2011e.html#33 The real cost of outsourcing (and offshoring)

slightly different (vietnam/boyd) reference (from ibm mainframe thread)
https://www.garlic.com/~lynn/2011e.html#54 Downloading PoOps?

misc. past Boyd references
https://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

3090 ... announce 12Feb85

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 16 Mar, 2011
Subject: 3090 ... announce 12Feb85
Blog: IBM Historic Computing
3090 ... announce 12Feb85
http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html

3090 trivia ... the referenced 3090 announce refers to two (real) 3370 A2s ... these were for the two 4361s running a specially modified version of vm370 release 6 as the "service processor" (so all 3090 MVS systems actually required FBA to operate ... since it was required for all 3090 service processors). a few past posts mentioning the "3092" service processor:
https://www.garlic.com/~lynn/2008i.html#10
https://www.garlic.com/~lynn/2009b.html#22
https://www.garlic.com/~lynn/2009e.html#50
https://www.garlic.com/~lynn/2010e.html#32
https://www.garlic.com/~lynn/2010e.html#34
https://www.garlic.com/~lynn/2010e.html#38
https://www.garlic.com/~lynn/2011c.html#71

related recent thread in ibm-main mailing list about "junking CKD"
http://groups.google.com/group/bit.listserv.ibm-main/browse_thread/thread/c5fb88afc2b05a63#

an earlier generation of electronic storage ... the intel 3805 electronic disk ... looks like a fixed-head FBA. This particular issue involves an upgrade to a newer release, which had split a vm370 kernel routine into multiple different routines ... corrupting some logic involving an FBA defined with multiple paths (and one of the paths returning CC=3).

The 3805 predates 3090 expanded storage ... the issue between electronic storage used for "fixed block" synchronous transfer (3090 expanded storage) vis-a-vis "fixed block" asynchronous transfer (i/o transfer) is the elapsed time for the operation ... a major part of which is proportional to distance.

Date: 02/10/83 15:33:02
From: wheeler

re: 3805 (intel native mode); almost got the boxes operational ... having a little hardware problems which forced two bugs in (base) DMKCPW. 3805 is defined as having alternate channels on channel one and two. For some reason the channel one path is inoperative (i.e. TIO CC=3). This causes a PRG9 (divide by zero) in DMKCPW.

DMKCPW is doing the device characteristic read for a FBA device (3805 simulates a fixed head FBA). The first bug is that coming back from the read, DMKCPW fails to check for nonzero condition code on the I/O operation. This causes it to assume the device characteristics information to be in storage & it proceeds to perform misc. divide operations to fill out the RDC table. I fixed this with a LSI (I/O reliability) since I already had some LSI error checking code at this point.

Turns out that isn't the real problem. DMKCPW first does a TIO to the device and will only proceed if it gets a cc=0 ... which shouldn't result in a cc=3 being given on the SIO. Problem appears to have been generated when DMKCPT was split. Sequence of events are that DMKCPT calls DMKCPW with a pointer to device '120' (first path). DMKCPW gets a cc=3 on the TIO to 120 and returns to DMKCPT. DMKCPT then calls DMKCPW with a pointer to device '220' (second path). The TIO successfully completes with cc=0. DMKCPW then builds the IOBLOK to read the FBA device characteristics. Unfortunately, DMKCPW messes around in DMKCPT savewrk to get the device address ... & the field CPW is accessing contains the primary path device address, i.e. '120'. Second fix is a FE level update to DMKCPW to obtain the current path device address, rather than the primary, for performing the device characteristic read.


... snip ... top of post, old email index

"LSI" reference in the above ... refers to the source updates that I did for the disk engineering & product engineering labs to make IOS bullet proof & never fail (of course I can't protect against every programming error introduced by new releases). part of getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk
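
as a side note, a rough sketch (in C, purely illustrative ... the real code was VM/370 assembler, and none of the names or structures below are the actual DMKCPW/DMKCPT ones) of the two bugs described in the old email above: fetching the stale primary-path device address from the caller's save area instead of the current path, and then not checking the condition code before using the (never filled in) device characteristics data.

#include <stdio.h>

#define CC0 0                       /* condition code 0: operation ok           */
#define CC3 3                       /* condition code 3: device not operational */

struct path { int devaddr; int cc; };

/* stand-in for the DMKCPT save area field that still held the PRIMARY path */
static int savework_devaddr = 0x120;

/* read FBA device characteristics over the given path; the fix for "bug 1"
 * is simply checking the condition code instead of assuming the data arrived */
static int read_characteristics(int devaddr, struct path *p, int n, unsigned *blk)
{
    for (int i = 0; i < n; i++) {
        if (p[i].devaddr != devaddr) continue;
        if (p[i].cc != CC0)
            return p[i].cc;          /* don't touch the (zeroed) RDC data */
        *blk = 32;                   /* pretend the RDC data came back     */
        return CC0;
    }
    return CC3;
}

int main(void)
{
    struct path paths[2] = { { 0x120, CC3 },    /* first path: inoperative */
                             { 0x220, CC0 } };  /* second path: works      */
    unsigned blk = 0;

    /* "bug 2": the original code fetched the device address from the save
     * area (primary path 0x120) rather than the current path (0x220), so the
     * read went down the dead path; combined with "bug 1" it then divided by
     * zero filling out the RDC table */
    int cc = read_characteristics(savework_devaddr, paths, 2, &blk);
    if (cc != CC0)
        cc = read_characteristics(0x220, paths, 2, &blk);  /* use current path */

    printf("cc=%d blocks=%u\n", cc, blk);
    return 0;
}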

misc. past posts mentioning ckd, fba, multi-track search, etc
https://www.garlic.com/~lynn/submain.html#dasd

--
virtualization experience starting Jan1968, online at home since Mar1970

Collection of APL documents

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 26 Mar, 2011
Subject: Collection of APL documents
Blog: IBM Historic Computing
re:
https://www.garlic.com/~lynn/2011e.html#58 Collection of APL documents

HONE had a rather enormous APL application (a couple hundred kbytes) called SEQUOIA that provided the "user friendly" online environment with a large number of bells & whistles for the sales&marketing folks. One of the features I had done, when I morphed a bunch of the shared memory & paged-mapped filesystem support from cp67/cms to vm370/cms, was extended support for shared pages. HONE defined the APL interpreter in such a shared environment ... and then, in a hack done with the Palo Alto Science Center, added most of the SEQUOIA "APL code" to the shared environment (significantly cutting down on the total real storage footprint needed to run a large number of concurrent users).
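
back-of-the-envelope arithmetic of why that mattered (a hedged sketch; the user count and sizes below are invented for illustration ... the above only says SEQUOIA was a couple hundred kbytes):

#include <stdio.h>

int main(void)
{
    unsigned users      = 100;   /* hypothetical concurrent HONE users        */
    unsigned shared_kb  = 500;   /* APL interpreter + SEQUOIA code, shareable */
    unsigned private_kb = 64;    /* per-user workspace data, not shareable    */

    unsigned no_sharing   = users * (shared_kb + private_kb);  /* 56,400 KB */
    unsigned with_sharing = shared_kb + users * private_kb;    /*  6,900 KB */

    printf("no sharing:   %u KB\n", no_sharing);
    printf("with sharing: %u KB\n", with_sharing);
    return 0;
}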

Then there were a large variety of AIDs and "configurators". Every hardware product had a configurator: sales/marketing would specify customer requirements and it would figure out the actual hardware order and all necessary features (by the mid-70s, mainframe orders had to first be run through HONE).

The cambridge science center had also done a large amount of work in performance modeling and simulation. One such analytical model implemented in APL was eventually packaged and provided on HONE as the PERFORMANCE PREDICTOR. A salesman could input detailed information about a customer's workload and configuration and ask "what-if" questions about changes in workload and/or hardware configuration (i.e. what is the benefit of adding one mbyte of real memory). A modified version of the performance predictor was also used by the HONE "single-system-image" implementation to calculate load-balancing.

A decade ago I was doing some performance tuning on a 450k-statement cobol application that ran overnight on 40-some large mainframes (bloated configurations that were @$30M each). Somebody in Europe had acquired the rights to a descendent of the PERFORMANCE PREDICTOR in the early 90s, ran it through an APL->C converter, and was using it for a consulting business. He identified some issues that got a 10% improvement and I used some other methodologies to identify a further 14% improvement.

Misc. past posts mentioning SEQUOIA:
https://www.garlic.com/~lynn/2002i.html#76
https://www.garlic.com/~lynn/2002j.html#0
https://www.garlic.com/~lynn/2002j.html#3
https://www.garlic.com/~lynn/2002j.html#5
https://www.garlic.com/~lynn/2003f.html#21
https://www.garlic.com/~lynn/2005g.html#27
https://www.garlic.com/~lynn/2005g.html#30
https://www.garlic.com/~lynn/2006m.html#53
https://www.garlic.com/~lynn/2006o.html#52
https://www.garlic.com/~lynn/2006o.html#53
https://www.garlic.com/~lynn/2007h.html#62
https://www.garlic.com/~lynn/2009j.html#77
https://www.garlic.com/~lynn/2010i.html#13
https://www.garlic.com/~lynn/2011.html#28

--
virtualization experience starting Jan1968, online at home since Mar1970

End of an era

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: End of an era
Newsgroups: alt.folklore.computers
Date: Sat, 26 Mar 2011 19:01:20 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
goes into some amount of detail about lots of disparaging bin laden ... various dirty tricks to discredit him (from parties with all sorts of vested interest). Scheuer's point was that tends to result in seriously underestimating the opposition.

re:
https://www.garlic.com/~lynn/2011e.html#61 End of an era

from today on what do we really know?
http://www.phibetaiota.net/2011/03/nightwatch-on-bin-laden-sightings-many/

--
virtualization experience starting Jan1968, online at home since Mar1970

End of an era

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: End of an era
Newsgroups: alt.folklore.computers
Date: Sun, 27 Mar 2011 10:41:32 -0400
jmfbahciv <See.above@aol.com> writes:
OK. Thanks. What is Labyrinth?

re:
https://www.garlic.com/~lynn/2011e.html#61 End of an era

referenced
https://www.garlic.com/~lynn/2011e.html#18 End of an era
and
https://www.garlic.com/~lynn/2011e.html#33 The real cost of outsourcing (and offshoring)

reference in the above
http://www.phibetaiota.net/2011/03/event-19-20-mar-c-span-pentagon-labyrinth/
and
http://dnipogo.org/labyrinth/

above has free pdf

but paperback also available at amazon
https://www.amazon.com/exec/obidos/ASIN/0615446248/ossnet-20

from amazon review (some number authors, Boyd "acolytes"):
The Pentagon Labyrinth aims to help both newcomers and seasoned observers learn how to grapple with the problems of national defense. Intended for readers who are frustrated with the superficial nature of the debate on national security, this handbook takes advantage of the insights of ten unique professionals, each with decades of experience in the armed services, the Pentagon bureaucracy, Congress, the intelligence community, military history, journalism and other disciplines

... snip ...

as per the other reference ... in the CSPAN broadcast (with several of the authors) ... there was a strong theme that the MICC (military-industrial-congress complex) is working towards the economic collapse of the Pentagon (which is similar to bin Ladin's objective).

I've made references that Scheuer's bin Ladin shares many of the qualities of Coram's Boyd (a Boyd biography) ... but with polar opposite objectives.

There have been references that the venality of the MICC is dwarfed by that of wall street ... referenced here (it also has a number of Boyd "acolytes" ... and also references an extra $2T "surge" in pentagon spending 1998-2010; $1T can be accounted for by the wars, the other $1T appears to be "who knows?"):
https://www.garlic.com/~lynn/2011d.html#83 End of an era

one of the points made in the CSPAN labyrinth interviews was that projects/products get designed so that every major voting district has some piece (locking in the congressional votes and making it almost impossible to terminate bad programs) ... which enormously inflates costs and has a significant downside on quality (numerous weapons projects have found horrible problems collecting all the pieces from all over and trying to fit them together; the effectiveness of numerous weapons is drastically reduced because of the jerry-rigging that occurs). During the CSPAN program there was also an off-hand comment that the Boeing 787 program was afflicted with a similar mentality and that re-engineering things to get them to fit resulted in (at least) a 2yr delay/setback.

and earlier (Pentagon & wallstreet venality) references
https://www.garlic.com/~lynn/2011.html#55 America's Defense Meltdown
https://www.garlic.com/~lynn/2011.html#75 America's Defense Meltdown
https://www.garlic.com/~lynn/2011b.html#0 America's Defense Meltdown
https://www.garlic.com/~lynn/2011b.html#59 Productivity And Bubbles
https://www.garlic.com/~lynn/2011c.html#45 If IBM Hadn't Bet the Company

misc. Boyd references and past posts:
https://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

|What is the maximum clock rate given the state of today's technology?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: |What is the maximum clock rate given the state of today's technology?
Newsgroups: comp.arch
Date: Sun, 27 Mar 2011 10:55:57 -0400
"Paul A. Clayton" <paaronclayton@gmail.com> writes:
As Terje Mathisen mentions occasionally here, disk is the new tape, memory is the new disk, cache (last level?--I guess it depends on what year's memory one compares it to) is the new memory.

processor caches are now larger than memories from the 70s (sub-mbyte to a few mbytes). I recently gave a (repeat of an old) talk on 60s & 70s demand-paging technology for managing memories.

The early 80s saw a big explosion in memory sizes. I had a presentation from the period showing that disk relative system throughput had declined by an order of magnitude between the late 60s and the early 80s (disks got faster, but other system components got faster by an order of magnitude more). The early 80s also saw increased use of memory for "disk caching" to compensate for the declining relative system thruput of disk (in part because of the large increase in the amount of memory on systems).

As an undergraduate in the late 60s, I had done dynamic adaptive resource management (sometimes also called "fairshare" scheduling, since the default policy was "fairshare") and something I called "scheduling to the bottleneck". Constantly trying to identify system throughput bottlenecks made me somewhat more sensitive to the changes that were occurring.
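
a minimal sketch of the generic fairshare idea (in C, purely illustrative; this isn't the actual cp67 scheduler, and the share values and consumption window below are invented): dispatch the user whose recent consumption relative to their entitled share is lowest.

#include <stdio.h>

struct user {
    const char *name;
    double share;        /* entitled fraction of the machine              */
    double recent_cpu;   /* fraction of CPU consumed over a recent window */
};

/* pick the user furthest behind their fair share */
static struct user *pick_next(struct user *u, int n)
{
    struct user *best = &u[0];
    for (int i = 1; i < n; i++)
        if (u[i].recent_cpu / u[i].share < best->recent_cpu / best->share)
            best = &u[i];
    return best;
}

int main(void)
{
    struct user users[] = {
        { "interactive", 0.25, 0.05 },
        { "batch",       0.25, 0.40 },
        { "heavy",       0.50, 0.70 },
    };
    printf("dispatch next: %s\n", pick_next(users, 3)->name);  /* "interactive" */
    return 0;
}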

--
virtualization experience starting Jan1968, online at home since Mar1970

Other early NSFNET backbone

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Other early NSFNET backbone
Newsgroups: alt.folklore.computers
Date: Sun, 27 Mar 2011 11:15:29 -0400
re:
https://www.garlic.com/~lynn/2011b.html#58 Other early NSFNET backbone
https://www.garlic.com/~lynn/2011c.html#2 Other early NSFNET backbone
https://www.garlic.com/~lynn/2011c.html#6 Other early NSFNET backbone
https://www.garlic.com/~lynn/2011c.html#16 Other early NSFNET backbone
https://www.garlic.com/~lynn/2011c.html#40 Other early NSFNET backbone
https://www.garlic.com/~lynn/2011c.html#76 Other early NSFNET backbone

just now on TV there was a newsbite that by the end of next year, every S. Korean home will have broadband that is 200 times faster than the avg. US broadband.

old NSFNET backbone email
https://www.garlic.com/~lynn/lhwemail.html#nsfnet

and an old email/article from the early days of the NSFNET backbone period ... quote: "When it was created in 1985, Congress allocated more money than the foundation had requested. In fiscal 1986 Congress instructed the foundation to give the program exactly what it had requested" ... Congress was spurring funding for national competitiveness and then cut back; this impacted a proposal for me doing possibly $20M of various related NSFNET backbone activity. What was supposed to be the "NSFNET backbone" went thru some number of metamorphoses.

Date: 8 December 1986, 09:43:31 EST
To: wheeler
Subject: (copy) Article from Chronicle of Higher Education - 12/3/1986

Title: $6-Million Shortfall for NSF Supercomputers Could Hamper Some University Operations

The National Science Foundation's supercomputer division has received $6-Million less than it sought for the current fiscal year, foundation officials have announced.

While that still gives the program $3.4 million more than the $45.2 million it received last year, the shortfall could lead to financial crisis for university centers that have depended on selling supercomputer time to the foundation.

In addition, two supercomputer centers supported by the foundation -- at Cornell and Princeton Universities -- are being pressed to delay computer purchases and cut back on training and assistance to preserve the allocations for the other three centers.

The shortfall is said to be evidence of the program's waning popularity at the foundation. In recent months the program was moved out of the director's office and made part of the directorate for computer and information science and engineering. In addition, the program's network section, with its $10-million budget, was split off as a separate division. The new division is developing a communications network to allow researchers anywhere in the country to gain access to NSF supported supercomputers.

Observers say the program lost its favored position because the invention of new kinds of computers cut into the potential pool of researchers who might use a supercomputer, and because internal political strife in some of the centers has affected the delivery of services.

Even so, those involved in the supercomputing program never expected to get so much less than they asked for. The program, now run by the foundation's division of advanced scientific computing, had been extremely popular: When it was created in 1985, Congress allocated more money than the foundation had requested. In fiscal 1986 Congress instructed the foundation to give the program exactly what it had requested.

Members of Congress said they liked the program because it could help the United States retain its lead in high technology. Supercomputers, the fastest, most powerful computers commercially available, allow researchers to do computations in a few minutes that would take weeks or months on mainframe computers. There are fewer than 200 supercomputers in the world, and until the NSF program, very few were available to university researchers.

This year, too, things looked good in supercomputing, in spite of the huge federal deficit. Both the House and Senate appropriations committee again instructed the NSF to give the supercomputer program the $53.6-million it asked for. However, when a House-Senate conference committee met to work out a compromise, it dropped any special protections for the program. Instead, Congress urged the science foundation to finance the program to "the maximum extent possible" -- a recommendation, not a requirement with the force of law.

According to John W.D. Connelly [sic], director of the division of advanced scientific computing, $3.9-million of the shortfall will come from the $37.8-million budgeted for providing supercomputer time to foundation-supported researchers, $1.1-million from the $10.9-million budgeted for networks, and $1 million from the $8.8-million budgeted for "new technology," which includes financing for the Cornell University center.

For the past two years the NSF has bought time on supercomputers at the Universities of Colorado and Minnesota, and at Purdue University. That program, budgeted at $2.3 million this year, "is now over," Mr. Connelly [sic] said. Foundation-supported researchers who used those machines must now use machines at the five NSF supercomputer centers, he said. Some researchers have complained to the foundation that the change will make their work more difficult. Minnesota has the only CRAY 1 supercomputer on a campus, and all three universities have specialized software for their machines that is not yet available at the five foundation-supported supercomputer centers.

In addition, the three universities depend on NSF money to help support facilities that are very expensive to operate. Without that support, says John M. Sell, president of the Minnesota Supercomputer Center, there could be "severe implications." The Minnesota center's $15-million budget barely covers costs and Mr. Sell questions whether he can sell the time reserved for the NSF to anyone else.

"There is a growing demand, but I don't know if we can make up the shortfall," he says.

Next in line to be trimmed are the NSF-supported supercomputer centers at Cornell and Princeton. Officials at both universities say their allotments -- which were set by the original contracts with the NSF -- might be cut as much as $2 million each.

If its budget is reduced, Cornell might have to curtail training and service and delay carrying out plans to add power to its supercomputer. Training and service are particularly important at Cornell, say officials of the center there, because it is an experimental machine, and researchers are still trying to figure out what to do with it.

The John von Neumann Center at Princeton, run by a consortium of 12 universities, is said to be considering a plan that would delay delivery, and therefore payment, on its ETA-10 supercomputer until fiscal 1988. The $20-million ETA-10 -- still under development by ETA Systems, Inc., an offshoot of Control Data Corporation -- would remain at company headquarters in Minneapolis, and researchers would gain access to it over data-communication lines. Researchers assigned to the Princeton center are now using a rented Cyber 205 supercomputer.

Foundation-supported supercomputer centers at the University of California at San Diego and the University of Illinois are expected to be untouched by any changes in financing. Computers in both centers have been used by researchers for nearly a year, and both were highly rated by peer-review committees.

The fifth NSF-backed center, run jointly by Carnegie-Mellon University and the University of Pittsburgh, is supposed to receive only $4-million from the science foundation this year. According to NSF officials, the Pittsburgh center is doing well.

Foundation officials predict that this is not the last time the supercomputer division will face cuts in its budget request -- some of them possibly more serious than $6-million. In preparation, C. Gordon Bell, assistant director of the foundation's directorate for computer and information science and engineering, has asked foundation and supercomputer-center officials to explore the role of the centers in the face of expected budget limitations and advances in the computer industry that may produce less-expensive, smaller computers with speed and power close to that of today's supercomputer.


... snip ... top of post, old email index, NSFNET email

--
virtualization experience starting Jan1968, online at home since Mar1970

Other early NSFNET backbone

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Other early NSFNET backbone
Newsgroups: alt.folklore.computers
Date: Sun, 27 Mar 2011 12:12:13 -0400
Al Kossow <aek@bitsavers.org> writes:
Sounds like the majority of the money went to funding supercomputer centers and not to communications infrastructure. In the long run, Moore's Law took care of the problem getting enough compute cycles. Could we have saved five years if that money would have been thrown at building faster networks instead of a couple of very high priced supercomputer centers?

On the other hand, networking routing needs compute power too, just in a different form than that found in supers, and NCSA did produce several useful tools (NCSA Telnet, Image, and Mosaic)


re:
https://www.garlic.com/~lynn/2011e.html#67 Other early NSFNET backbone

earlier in the fall before the budget cut (NSF might do $17M of the $20m and get DOE to make up the remainder):

Date: 09/15/86 11:39:27
From: wheeler

talking to aaaaa, xxxxx said that no funding available out of ATSL for hsdt, ... xxxxx says he is only getting 5 incremental h.c. next year (although we've been told he already has 20 atsl h.c.).

aaaaa is going ahead to C-group ATSL funding commitee to try and get direct HSDT funding ... since nobody directly associated with ATSL is supporting this.
....
Currently scheduled to see zzzzz on the 29th ... and then leaving for Europe on the 1st ... back on the 14th. *system* stuff is coming to head and it would fit well into HSDT project ... either as IBU or some other strengthen position (aaaaa says bbbbbb went to NSF on HSDT for $20m ... they implied that it was about $3m too high for NSFs blood, but not bad & something could be worked with DOE).

Giving HSDT presentation to BAYBUNCH on Oct. 21st. aaaaa is talking about Livermore presentation around that time frame also.


... snip ... top of post, old email index, NSFNET email

another $20m email reference later the same day
https://www.garlic.com/~lynn/2011e.html#email860915

referenced trip to europe included giving presentation on VM performance history, recent references:
https://www.garlic.com/~lynn/2011c.html#72 A History of VM Performance
https://www.garlic.com/~lynn/2011c.html#74 A History of VM Performance
https://www.garlic.com/~lynn/2011c.html#79 A History of VM Performance
https://www.garlic.com/~lynn/2011c.html#81 A History of VM Performance
https://www.garlic.com/~lynn/2011c.html#82 A History of VM Performance
https://www.garlic.com/~lynn/2011c.html#87 A History of VM Performance
https://www.garlic.com/~lynn/2011c.html#88 Hillgang -- VM Performance
https://www.garlic.com/~lynn/2011c.html#90 A History of VM Performance
https://www.garlic.com/~lynn/2011c.html#92 A History of VM Performance

misc. posts mentioning nsfnet backbone
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

misc. posts mentioning hsdt
https://www.garlic.com/~lynn/subnetwork.html#hsdt

--
virtualization experience starting Jan1968, online at home since Mar1970

The first personal computer (PC)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The first personal computer (PC)
Newsgroups: alt.folklore.computers
Date: Sun, 27 Mar 2011 12:50:36 -0400
jmfbahciv <See.above@aol.com> writes:
In that time frame, a lot of code was not copyrighted correctly, if at all. Shipping any code without a legal copyright was considered putting that code in the public domain. Thus, if the code was not copyrighted or if it had an "illegal" copyright statement, then someone picking up and using the code is not stealing.

By the early 80s, we got an edict from legal that we had to put ASCII-readable copyright statement in the EXEs, RELs, UNVs, LIBs, all sources, and any other file we shipped. That was a big editing project.


I saw a lot of copyright notices appear after the 23jun69 unbundling announcement and the start of charging for application software. misc. past posts mentioning unbundling
https://www.garlic.com/~lynn/submain.html#unbundle

every program source file got a boilerplate notice added. there was quite a bit of early discussion about whether every kernel routine also required a "readable" copyright notice (in the core image) ... or whether it was sufficient that the "readable" copyright notice just appeared a single time somewhere in the kernel (core image).

--
virtualization experience starting Jan1968, online at home since Mar1970

Other early NSFNET backbone

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Other early NSFNET backbone
Newsgroups: alt.folklore.computers
Date: Sun, 27 Mar 2011 14:46:15 -0400
Morten Reistad <first@last.name> writes:
I work with some broadband ISPs that have phased out TV as a separate technology. They run all IP. Ca 130 channels, 7 in very high definition, run on their backbone. All in IP. The fiber customers can use all of them, DSL customers cannot choose the highest definitions of the 7 highest def channels.

They support multiplexed HDMI feeds into the TV world in the homes. Just the TV load peaks at around 2.5 gigabits.

The Internet is wide open for as much as the customers want to use. Compared to the 100+ tv channels, that bandwidth is small fry anyway.

But only very few residential customers manage to use more than 10 megabits.


re:
https://www.garlic.com/~lynn/2011e.html#67 Other early NSFNET backbone
https://www.garlic.com/~lynn/2011e.html#68 Other early NSFNET backbone

saw an article yesterday that US internet bandwidth was starting to be swamped with baby/web cams ... people were turning webcams on various things (babies, kids, puppies, kittens), then opening a viewing window at work and leaving it open all day ... and suggesting that their neighbors, friends, & relatives also do the same.

--
virtualization experience starting Jan1968, online at home since Mar1970

Fraudulent certificates issued for major websites

From: lynn@garlic.com (Lynn Wheeler)
Date: 27 Mar, 2011
Subject: Fraudulent certificates issued for major websites
Blog: LinkedIn
Fraudulent certificates issued for major websites
http://www.networkworld.com/news/2011/082511-banks-business-fraud-250120.html

from above:
When going to Google, Microsoft, Yahoo and other sites, beware. Attackers have managed to get valid certificates made for each site.

... snip ...

also ..

Iranian' attackers forge Google's Gmail credentials
http://www.theregister.co.uk/2011/03/23/gmail_microsoft_web_credential_forgeries/
Firm points finger at Iran for SSL certificate theft
http://www.computerworld.com/s/article/9214998/Firm_points_finger_at_Iran_for_SSL_certificate_theft
Comodo warns of serious SSL certificate breach
http://searchsecurity.techtarget.com/news/article/0,289142,sid14_gci1529110,00.html
Google, Skype, Yahoo Targeted by Rogue Comodo SSL Certificates
http://www.pcworld.com/businesscenter/article/223147/google_skype_yahoo_targeted_by_rogue_comodo_ssl_certificates.html

Certificates grew up in the early 80s as an electronic analogy of letters of credit/introduction (from sailing ship days). They addressed an authentication issue of the day, providing an offline solution when online access was unavailable, scarce and/or very expensive (certificates priced below the expensive online alternative). Going into the 90s, online was becoming ubiquitous and prices were dropping dramatically ... resulting in a rapidly shrinking "no-value" market segment for certificates. The result was a switch to pixie-dust & FUD marketing ... certificates were becoming both redundant (relative to online operations that could justify the expense of higher-value real-time information) AND more expensive.

past posts mentioning SSL certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcert

--
virtualization experience starting Jan1968, online at home since Mar1970

Collection of APL documents

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 27 Mar, 2011
Subject: Collection of APL documents
Blog: IBM Historic Computing
re:
https://www.garlic.com/~lynn/2011e.html#58 Collection of APL documents
https://www.garlic.com/~lynn/2011e.html#63 Collection of APL documents

Date: 11/04/82 22:45:27
To: wheeler
From: xxxxx
Subject: GENMOD module

Could I ask you to get an assembly listing of the GENMOD module so I can study it to determine what changes I have to make to VS APL. If we make this change, it could have a tremendous impact on the HONE system, above what their SEQUOIA system is giving them. There are several very large APL applications on there .... particularly the configurators. If users are sharing that code as well, we could get some big savings ... not to mention the time savings in just loading the workspaces.
... snip ... top of post, old email index, HONE email

and:

Date: 11/05/82 08:23:45
From: wheeler
To: HONE support

re: pam & apl; xxxxx has in the past set-up meetings with the APL group to modify APL so that it will be able to execute in relocatable shared segments. yyyyy is now in the processing of working on the necessary APL modifications to load workspaces from PAM disks using the SHARED segment option (i.e. similar to the way GENMOD/LOADMOD loads modules with the SHARED segment option). That will mean that only one copy of any public workspace will have to exist in storage at one time (no matter how many users are using it).
... snip ... top of post, old email index, HONE email

csc changes moved from cp67/cms to vm370 release 2
https://www.garlic.com/~lynn/2006v.html#email731212 ,
https://www.garlic.com/~lynn/2006w.html#email750102 ,
https://www.garlic.com/~lynn/2006w.html#email750430

It included CMS paged-mapped filesystem support with various bells&whistles, including various kinds of shared segments.
https://www.garlic.com/~lynn/submain.html#mmap

I started out with paged-mapped filesystem support, remapping the CMS low-level kernel disk i/o routines to the vm370 "PAM" api. I also modified the CMS routine for loading executables to support specifying that part or all of a paged-mapped image was to be loaded "shared". CMS relied heavily on OS/360 compilers that used a convention embedding fixed addresses as part of the executable image. That restricted the executable image to the same fixed location in every virtual address space. For some specific routines, I had gone to quite a bit of effort to eliminate the embedded fixed addresses, allowing CMS to load a "shared" executable image at whatever virtual address was convenient/available (the same shared image potentially at different virtual addresses in different virtual address spaces). misc. past posts about the difficulty of dealing with the os/360 address constant convention
https://www.garlic.com/~lynn/submain.html#adcon
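
The fixed-address issue is easier to see with a small illustration. Below is a minimal sketch in C (hypothetical structures and names, not the actual OS/360 or CMS code) of why an image with an embedded absolute address only works when every address space maps it at the same location, while a self-relative offset stays valid wherever the image lands:

/*
 * hypothetical sketch: embedded absolute address ("adcon" style) vs
 * self-relative offset ("relocatable" style) in a shared image.
 */
#include <stdio.h>
#include <stddef.h>
#include <string.h>

struct image_adcon {             /* absolute pointer baked into the image */
    const char *msg_ptr;
    char        msg[16];
};

struct image_reloc {             /* offset from the start of the image    */
    size_t      msg_off;
    char        msg[16];
};

int main(void)
{
    struct image_adcon a;
    strcpy(a.msg, "hello");
    a.msg_ptr = a.msg;                       /* only valid at this address */

    struct image_reloc r;
    strcpy(r.msg, "hello");
    r.msg_off = offsetof(struct image_reloc, msg);

    /* simulate mapping the same image bytes at a different address */
    struct image_adcon a2 = a;   /* msg_ptr still points into 'a'          */
    struct image_reloc r2 = r;   /* offset remains correct relative to r2  */

    printf("adcon copy points at %s storage\n",
           a2.msg_ptr == a2.msg ? "its own" : "the ORIGINAL image's");
    printf("offset copy resolves to: %s\n", (char *)&r2 + r2.msg_off);
    return 0;
}

the first printf reports "the ORIGINAL image's" ... which is exactly the adcon problem: either every address space maps the image at the same virtual address, or each address space needs its own relocated (non-shared) copy of the adcons.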

In a previous SEQUOIA post, I mentioned PASC making an enhancement so that HONE could include the SEQUOIA APL application as part of the "shared" APL executable image (reducing the real storage footprint).

The notes on 4nov82 & 5nov82 refer to "xxxxx" wanting to enhance APL workspace loading (from the PAM filesystem) so that workspaces could be specified as "shared loading" (one copy across multiple different users/virtual address spaces) and the same shared copy could appear at arbitrary different virtual address locations. The 5nov82 reference to "relocatable" means the same shared image potentially appearing at different virtual addresses in different virtual address spaces.

past posts mentioning hone &/or apl
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

History--Early Bell System teletypes

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: History--Early Bell System teletypes
Newsgroups: alt.folklore.computers
Date: Sun, 27 Mar 2011 20:45:51 -0400
Quadibloc <jsavard@ecn.ab.ca> writes:
A 2741 was 50% faster, and produced nicer looking output, and was less noisy. An ASR 33, however, also read and punched paper tape, so it was all the peripherals you needed to do at least some very primitive computing. They did have FORTRAN compilers in those days that would load into 4K (of 12-bit words) on a PDP-8...

but attempting to discuss the subjective experience to kids these days would be difficult.


couple 2741 terminal images:
http://www.columbia.edu/cu/computinghistory/2741.html
http://www.swtpc.com/mholley/MySystem/MySystemPhoto.htm

the science center had two additional items made for the 2741.

There was no space to set paper on either side. One was a sheet of plywood covered in thin formica(?) to match the 2741 color, with a cutout to fit around the terminal housing ... it extended a couple inches on one side, a couple inches in the back and a foot plus on the other side ... enough to lay wide computer fanfold paper on (the board could be flipped with the wide part to either side of the terminal).

The other was a quarter-inch panel of plexiglas that fit in the open paper-feed opening ... it had holes cut for the paper roller tabs (which stuck out of the opening) and lay in the opening with enough room for paper to be fed in and out. It cut the noise from the typing.

this has some similar selectric pictures
http://www.ibm.com/ibm100/us/en/icons/selectric/

the above has a couple selectric pictures ... although not exactly a 2741 terminal. There is a picture that looks very similar to a 2741 terminal in a desk ... an actual 2741 terminal had space on both sides and in back, about the dimensions of what is shown on the right of the 2nd picture.

The "red" selectric typewriter has about the same sized opening for paper feed as 2741 ... paper feeds in from the back under the roller and out under the spring loaded guide and out the back. The spring loaded guide has two tabs sticking up, which can be used to pull the quide away from the roller underneath when feeding paper. The spring loaded guide holds the paper against the underlying roller so that it is in the correct position when the gulfball strikes (the paper).

--
virtualization experience starting Jan1968, online at home since Mar1970

The first personal computer (PC)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The first personal computer (PC)
Newsgroups: alt.folklore.computers
Date: Mon, 28 Mar 2011 13:04:07 -0400
re:
https://www.garlic.com/~lynn/2011e.html#7
https://www.garlic.com/~lynn/2011e.html#36
https://www.garlic.com/~lynn/2011e.html#41
https://www.garlic.com/~lynn/2011e.html#48
https://www.garlic.com/~lynn/2011e.html#56
https://www.garlic.com/~lynn/2011e.html#60

A TV business news show was discussing, in real time, the individual motivations for the enormous fees & commissions behind much of the recent economic mess.

the fees, commissions, bonuses and other games played with the $27T in triple-A rated toxic CDO transactions help account for the enormous increase in wealth skew (the country is now in one of the most wealth-skewed periods in its history ... along with the reports about the disappearing middle class) ... not forgetting the executive bonuses from fraudulent public company financial filings.

The FED steps in to rescue many of the institutions warehousing the trillions in triple-A rated toxic CDOs ... but doesn't touch the individuals who hugely profited (lots of obfuscation focusing on businesses and misdirection away from the individuals). It leaves something of a scorched earth in much of the rest of the country.

wharton had an estimate that 1000 executives were responsible for the majority of the mess and that it could go a long way toward improving the situation if the gov. could figure out some way to eliminate those individuals
https://web.archive.org/web/20080606084328/http://knowledge.wharton.upenn.edu/article.cfm?articleid=1933

the loan business used to be that the institutions kept the loans and were motivated by profit from interest paid over the lifetime of the loan.

being able to pay for triple-A ratings on toxic CDOs radically changed the business into a frenzy of transactions ... as they flowed through the infrastructure, individuals took enormous fees & commissions on the aggregate transaction value at various stages.

paying for triple-A ratings on toxic CDOs eliminated any reason to care about loan quality or borrowers' qualifications and made supporting documentation superfluous ... individuals at unregulated loan originators saw their revenue tied solely to the aggregate value of loans/mortgages they could write each week.

securitized mortgages had been used with doctored supporting documents for fraud (obfuscating underlying value) during the S&L crisis. in the late 90s, we had been asked to look at various issues of supporting-document integrity (trusted timestamps, trusted signatures, etc). with the ability to "buy" triple-A ratings ... it eliminated the need for a lot of the supporting documents (with the documents eliminated, there was no longer an issue of document integrity)

real estate speculators found that the no-down, no-documentation, 1% interest-only payment ARMs yielded 2000% ROI in areas of the country with 20-30% inflation (with speculation fueling the inflation, and constant "flipping" helping drive the transaction volume)

wall street got enormous revenue (on total transaction value) as the transactions flowed through the infrastructure (as well as couple other tricks of the trade)

another part of making all of this work was the need to find institutions where the triple-A toxic CDOs got warehoused (facing a reckoning when the bubble bursts, but the primary objective was commissions/fees on the transactions, or in the case of the rating agencies, payments for the triple-A ratings proportional to the stated value of the toxic CDOs).

as the bubble was deflating, some of the wall street individuals were exchanging warehoused triple-A rated toxic CDOs (generating additional transactions, maintaining the flow of their fees/commissions).

a few trillion managed to disappear into various pockets during the decade.

the real estate speculation/bubble/collapse is a close analogy to the '29 stock market crash ... however most of the attention has been on the part of the infrastructure used for warehousing the triple-A rated toxic CDOs (& not the trillions that were siphoned off).

--
virtualization experience starting Jan1968, online at home since Mar1970

I'd forgotten what a 2305 looked like

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 28 Mar, 2011
Subject: I'd forgotten what a 2305 looked like
Blog: IBM Historic Computing
you mean this
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_2305.html ,
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_PH2305.html

I reference in this ibm-main mailing list post
https://www.garlic.com/~lynn/2011e.html#54

mentioning Boyd, spook base, & igloo white
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html

in reply to somebody's signature block ... they replied that they hadn't thought about it in 40yrs. as in the ibm-main response, i mentioned that i hadn't actually known of any 2305s attached to a 360/65 (lots of 2301 fixed-head drums used with 360/67 for paging tho).

There were then a number of "electronic" simulated (paging) devices ... there was a special model "1655" from a vendor for internal locations ... that simulated a 2305. Then there was the intel 3805 that i recently mentioned in the 3090 discussion (simulated FBA).
https://www.garlic.com/~lynn/2011e.html#62 3090 ... announce 12Feb85

and 2301 fixed-head drum (from 60s)
http://www.columbia.edu/cu/computinghistory/drum.html

from ibm 2305 url:
Known as Zeus during development and first shipped in 1971, the IBM 2305 gave IBM computer systems greater data-handling power for database applications and batch processing. It was initially used on two large System/360 processors, the Model 85 and Model 195, and later used with the System/370 Model 155 and Model 165.

... snip ...

the 2305 needed the 2880 channel that was available on the 360/85 & 360/195
https://en.wikipedia.org/wiki/IBM_System/360

It would take some sort of special feature/option to get the 2305 controller on 360/65 channels or a 2880 channel on 360/65?

the 2305 transfer rate was 1.5mbyte/sec, with controller features that took advantage of the 2880 (there was also a 2305 model with half the capacity and 3mbyte/sec transfer)

the 2301 "drum" had nine 4k pages formated across a pair tracks ... optimized channel transfer could achieve nearly nine 4k page transfers per two revolutions and at 60 revs/sec ... comes out to 270pages/sec or slightly over mbyte/sec.

intel provided the 2305-compatible 1655 ... which gave up nearly 30% of its space doing CKD emulation ... the lack of MVS support for native FBA has resulted in all sorts of issues with emulated CKD over the decades (there have not been *real* CKD devices for some years; all current CKD are emulated)
http://groups.google.com/group/bit.listserv.ibm-main/browse_thread/thread/c5fb88afc2b05a63#

Date: 08/05/82 16:17:32
From: wheeler

re: intel drums; Native mode operation has the same performance as 2305 simulation ... not faster, no slower.

However, in native mode all 12meg worth of drum is used as data blocks. In 2305 simulation mode, only that amount of formated space is used for data blocks. VM uses a format which only utilizes approx. 9.5meg worth of data blocks (the rest is inter-record gaps and dummy block spacers to optimize slot sorting). The result is native mode represents about a 30% increase in drum space (an intel 1655 box with 4 simulated 2305s becomes the equivalent of 5.3 2305s in native mode).

Intel has been saying they would have a 3meg. data streaming option available by August. That would mean twice the data transfer rate compared to either a real 2305 or an intel simulated 2305. I haven't confirmed it, but it was my understanding that 3meg. data streaming would be available for either 2305 or native mode.

SJRLVM1, SJEVM5, and at least one machine in STL are running 1655s (48 meg./4 drum) in 2305 mode. They are all 1.5meg. versions. In addition, SJRLVM1 has a data streaming STC 2-drum electronic device (3 megabytes) ... & the STC drums don't have a native mode option. We also have a combination of real 2305s and 3380s and are in the process of running various performance comparisons.

Note: at 1.5meg. mode, an electronic drum has the same maximum thru-put capacity as a 2305 drum ... under VM at maximum load, there are long CCW chains transfering multiple page requests in one SIO operation. The data transfer is the same, so the electronic drums don't buy anything there. It is in the area of average access time that electronic drums improve performance. A 2305 drum has a 5 milliscond avg. rotational delay (access delay) per SIO. An electronic drum has avg access delay of 300-400 microseconds (approx. 1/50th of a 2305). Time to transfer one page is approx. 2.7 milliseconds for either devices at 1.5meg. transfer. For long CCWS chains with one rotational delay per 20-30 pages transferred performance is about the same:


chain      2305   stc/intel@1.5 stc/intel@3
 size     elapsed   elapsed      elapsed

 1 page   7.7mills  3.0mills    1.6mills
 2 page  10.4mills  5.7mills    2.9mills
 5 page  18.5mills 13.9mills    7 mills
10 page  32 mills  27.4mills   13.9mills
20 page  58 mills  54.4mills   27.4mills

On a moderately loaded, page bound system, electronic drums can significantly improve the paging performance.

... snip ... top of post, old email index
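
The table in the email follows from a simple model: per-SIO elapsed time is roughly one average access delay plus one transfer time per 4k page. A minimal sketch (using the approximate figures quoted in the email -- 5ms vs ~0.35ms access delay, 2.7ms vs 1.35ms per page -- so the output only approximately reproduces the table):

/* per-SIO elapsed time = avg access delay + pages * transfer time per page */
#include <stdio.h>

static double elapsed_ms(double access_delay_ms, double ms_per_page, int pages)
{
    return access_delay_ms + ms_per_page * pages;
}

int main(void)
{
    int chain_sizes[] = { 1, 2, 5, 10, 20 };
    printf("chain    2305     electronic@1.5MB  electronic@3MB\n");
    for (int i = 0; i < 5; i++) {
        int n = chain_sizes[i];
        printf("%5d  %7.1fms  %14.1fms  %12.1fms\n",
               n,
               elapsed_ms(5.0,  2.7,  n),    /* 2305: ~5ms avg rotational delay */
               elapsed_ms(0.35, 2.7,  n),    /* electronic drum at 1.5mbyte/sec */
               elapsed_ms(0.35, 1.35, n));   /* electronic drum at 3mbyte/sec   */
    }
    return 0;
}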

--
virtualization experience starting Jan1968, online at home since Mar1970

Internet pioneer Paul Baran

Refed: **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Internet pioneer Paul Baran
Newsgroups: bit.listserv.ibm-main
Date: 28 Mar 2011 11:14:43 -0700
mike.a.schwab@GMAIL.COM (Mike Schwab) writes:
Internet pioneer Paul Baran passes away, March 28, 2011. Designed Packet switching that was incorporated into Arpanet in 1969 later IP.
http://www.bbc.co.uk/news/technology-12879908

After reading that, I found interesting this article, Celebrating 40 years of the net , Oct 29, 2010.
http://news.bbc.co.uk/2/hi/technology/8331253.stm

And I saw this article: Alan Turing designed the Ace computer, which did computations while also keeping track of the accuracy, Feb 5, 2011.
http://news.bbc.co.uk/2/hi/technology/8498826.stm


note that the corporate internal network was larger than the arpanet/internet from just about the beginning until late '85 or early '86.
https://www.garlic.com/~lynn/subnetwork.html#internalnet

The big change came with the switch-over from host/IMPs to tcp/ip on 1jan83 ... workstations and PCs then started to appear as network nodes (while the communication group was severely restricting workstations and PCs to terminal emulation). misc. past posts mentioning efforts preserving the terminal emulation paradigm
https://www.garlic.com/~lynn/subnetwork.html#emulation

At the time of the 1Jan83 switch-over ... arpanet/internet had something like 100 IMP network nodes with approx. 250 connected hosts. At that time, the internal network was approaching 1000 hosts/nodes (a number it passed a few months later). misc. old email mentioning the internal network
https://www.garlic.com/~lynn/lhwemail.html#vnet

some of this discussed in (linkedin) Greater IBM group discussion about the NSFNET backbone:
https://www.garlic.com/~lynn/2011d.html#65 IBM100 - Rise of the Internet
https://www.garlic.com/~lynn/2011d.html#66 IBM100 - Rise of the Internet
https://www.garlic.com/~lynn/2011e.html#6 IBM100 - Rise of the Internet

--
virtualization experience starting Jan1968, online at home since Mar1970

Internet pioneer Paul Baran

Refed: **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Internet pioneer Paul Baran
Newsgroups: bit.listserv.ibm-main
Date: 28 Mar 2011 13:09:11 -0700
Efinnell15@AOL.COM (Ed Finnell) writes:
Was it Walt Dougherty(sp) that used to give the networks update at SHARE early eighties? Started with a two page binder and mid-eighties was about an inch and half of Fanfold...

re:
https://www.garlic.com/~lynn/2011e.html#76 Internet pioneer Paul Baran

this is old post that contains announcement in 1983 for the 1000th node on the internal network ... as well as a couple samples of other 1983 new node announcements
https://www.garlic.com/~lynn/99.html#112

the majority of the internal nodes had always been VM ... but starting in the late 70s there was an explosion in the number of vm/4341 nodes.

this post has samples of 1983 new node announcements ... as well as the list of all (world-wide) locations that had new nodes added during 1983
https://www.garlic.com/~lynn/2006k.html#8
and followup post
https://www.garlic.com/~lynn/2006k.html#43

in the 70s they used some layout software to print nodes and connections (when it was a few hundred) ... printed on the back of green-bar fanfold 1403/3211 paper ... boxes and connecting lines. Old post about (still) having one printed on 15apr1977 at HONE1 (in a box some place)
https://www.garlic.com/~lynn/2002j.html#4

this ibm-main mailing list originated on bitnet (& earn) which was corporate sponsored network of higher educational institutions ... using similar technology to that used in the internal network
https://www.garlic.com/~lynn/subnetwork.html#bitnet

one of the issues was that the (customer) vnet/rscs drivers quickly became restricted to just the NJI family of drivers ... which were much less efficient than the vnet/rscs native drivers ... which continued to be used internally ... at least up until the internal network switch-over to SNA in the late 80s. There was a lot of resistance to converting the internal network (to sna/vtam), so the communication group had a large campaign to drive it through ... including telling top corporate executives things like PROFS was a VTAM application (as part of the justification).

a number of recent posts mentioning sna/vtam misinformation activity in the late 80s:
https://www.garlic.com/~lynn/2011.html#4 Is email dead? What do you think?
https://www.garlic.com/~lynn/2011b.html#65 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#12 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011d.html#4 Is email dead? What do you think?
https://www.garlic.com/~lynn/2011e.html#25 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#32 SNA/VTAM Misinformation
https://www.garlic.com/~lynn/2011e.html#34 SNA/VTAM Misinformation
https://www.garlic.com/~lynn/2011e.html#43 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2011e.html#57 SNA/VTAM Misinformation

--
virtualization experience starting Jan1968, online at home since Mar1970

Internet pioneer Paul Baran

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Internet pioneer Paul Baran
Newsgroups: bit.listserv.ibm-main
Date: 28 Mar 2011 14:16:08 -0700
lynn@GARLIC.COM (Anne & Lynn Wheeler) writes:
the majority of the internal nodes had always been VM ... but starting in the late 70s there was an explosion in the number of vm/4341 nodes.

re:
https://www.garlic.com/~lynn/2011e.html#77 Internet pioneer Paul Baran

There were a huge number of JES/NJE issues. For some time the code carried the "TUCC" identifier in the assembler source from its HASP days. For node definitions, it took unused slots in the 255-entry HASP pseudo-device table. A normal installation could have 60-100 pseudo devices ... leaving a maximum of 160-200 entries for defining network nodes.

By the time the NJE software shipped to customers there were more nodes in the internal network than could be defined in NJE (VNET had a totally different native implementation with an enormously larger limit on the number of network nodes). The NJE software also would discard any traffic for which it didn't have either the origin or destination node defined (even if it knew how to deliver the traffic, if it didn't have a definition for the origin node, it would still discard it).

sometime after the internal network passed 1000 nodes, NJE was enhanced to handle 999 nodes ... and after the internal network passed 2000 nodes, NJE was enhanced to handle 1999 nodes.

Another problem was that NJE jumbled networking and job control fields ... and incompatibilities between two different NJE releases could result in crashing MVS. As a result, a large library of VNET/RSCS drivers grew up ... that would do canonical conversion of NJE header information ... with the specific driver started in VNET/RSCS corresponding to the release level of JES/NJE on the other end of a link (as a countermeasure to keep MVS from crashing).
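
A hypothetical sketch of that canonical-conversion idea (field layouts, release numbers and names are invented for illustration; this is not the actual VNET/RSCS or NJE code): a conversion routine is chosen per link according to the peer's release level and rewrites the peer's header layout into one canonical internal form before the traffic is forwarded.

/* boundary driver normalizing release-specific headers to a canonical form */
#include <stdio.h>

struct canon_hdr {              /* canonical internal header form */
    char origin[9];
    char dest[9];
    int  priority;
};

typedef void (*hdr_conv)(const char *raw, struct canon_hdr *out);

static void conv_release_1(const char *raw, struct canon_hdr *out)
{   /* pretend release 1 sends "origin,dest" with no priority field */
    sscanf(raw, "%8[^,],%8s", out->origin, out->dest);
    out->priority = 5;                       /* default */
}

static void conv_release_2(const char *raw, struct canon_hdr *out)
{   /* pretend release 2 adds a numeric priority field */
    sscanf(raw, "%8[^,],%8[^,],%d", out->origin, out->dest, &out->priority);
}

int main(void)
{
    int peer_release = 2;        /* chosen when the link/driver is started */
    hdr_conv convert = (peer_release == 1) ? conv_release_1 : conv_release_2;

    struct canon_hdr h;
    convert("NODEA,NODEB,3", &h);
    printf("origin=%s dest=%s priority=%d\n", h.origin, h.dest, h.priority);
    return 0;
}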

A combination of these problems restricted JES systems to boundary nodes ... with VM handling the core networking operation (and the majority of all nodes). There is the infamous scenario where a VNET/RSCS NJE driver wasn't updated and started ... with traffic from an MVS/JES system in San Jose resulting in an MVS/JES system in Hursley crashing (and management blaming VNET/RSCS for NOT keeping MVS from crashing).

misc. past posts mentioning HASP, JES, &/or NJE networking
https://www.garlic.com/~lynn/submain.html#hasp

a footnote on the conversion to SNA/VTAM ... given the enormous resources that were pumped into the effort ... it would have been much more efficient to have converted RSCS/VNET to tcp/ip ... rather than SNA/VTAM. for the fun of it ... from IBM Jargon:
notwork - n. VNET (q.v.), when failing to deliver. Heavily used in 1988, when VNET was converted from the old but trusty RSCS software to the new strategic solution. To be fair, this did result in a sleeker, faster VNET in the end, but at a considerable cost in material and in human terms. nyetwork, slugnet

slugnet - n. VNET (q.v.) on a slow day. Some say on a fast day, and especially in 1988. notwork, nyetwork


... snip ...

some bitnet history:
https://en.wikipedia.org/wiki/BITNET
http://www.livinginternet.com/u/ui_bitnet.htm

the above mentions BITNET II about the time of the NSFNET backbone ... again it would have been much better if the internal network cutover had gone to tcp/ip rather than SNA/VTAM. old email regarding various aspects of the NSFNET backbone
https://www.garlic.com/~lynn/lhwemail.html#nsfnet

and old email about getting EARN going (effectively BITNET in Europe):
https://www.garlic.com/~lynn/2001h.html#email840320
in this post
https://www.garlic.com/~lynn/2001h.html#65

--
virtualization experience starting Jan1968, online at home since Mar1970

I'd forgotten what a 2305 looked like

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 28 Mar, 2011
Subject: I'd forgotten what a 2305 looked like
Blog: IBM Historic Computing
re:
https://www.garlic.com/~lynn/2011e.html#75 I'd forgotten what a 2305 looked like

I had done page migration on cp67 (2301->2314) and it was one of the things that moved over to vm370 ... see this description of csc/vm on release 2 plc9

https://www.garlic.com/~lynn/2006w.html#email750102

The page migration was included along with a bunch of other stuff that shipped in my resource manager ... against vm370 release 3 plc4. Reproduced initial resource manager blue letter:
https://www.garlic.com/~lynn/2001e.html#45

The original page migration was purely based on device type. Somewhere along the way, the SYSORD specification was put in to give finer, device-level control. I then redid the whole thing with a SYSPAG specification that worked at the area level ... and redid a bunch of other stuff. It was then possible to remove device areas from the allocation structure while leaving them on the deallocation structure, then issue a "migrate" command to move all page & spool records off the device ... allowing the device to be taken offline.
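
A hypothetical sketch of that drain-then-migrate idea (invented names and structures, not the actual VM370 SYSPAG code): an area is first removed from the allocation structure so no new records land there, existing records are then re-allocated to areas still open for allocation, and the emptied area can be taken offline.

/* remove area from allocation, migrate its records, then take it offline */
#include <stdio.h>
#include <stdbool.h>

#define NAREAS 3
#define SLOTS  4

struct area {
    const char *name;
    bool allocatable;            /* still on the allocation structure?  */
    bool online;                 /* still known for deallocation/lookup */
    int  used[SLOTS];            /* record id, or 0 if the slot is free */
};

static bool alloc_record(struct area *a, int n, int id)
{
    for (int i = 0; i < n; i++)
        if (a[i].allocatable)
            for (int s = 0; s < SLOTS; s++)
                if (a[i].used[s] == 0) { a[i].used[s] = id; return true; }
    return false;
}

static void migrate_area(struct area *from, struct area *a, int n)
{
    from->allocatable = false;               /* drain: no new allocations   */
    for (int s = 0; s < SLOTS; s++)
        if (from->used[s] != 0 && alloc_record(a, n, from->used[s]))
            from->used[s] = 0;               /* record now lives elsewhere  */
    from->online = false;                    /* empty, can be taken offline */
}

int main(void)
{
    struct area areas[NAREAS] = {
        { "drum1", true, true, {0} },
        { "disk1", true, true, {0} },
        { "disk2", true, true, {0} },
    };
    for (int id = 1; id <= 5; id++)
        alloc_record(areas, NAREAS, id);

    migrate_area(&areas[0], areas, NAREAS);  /* move everything off drum1   */

    for (int i = 0; i < NAREAS; i++) {
        printf("%s:", areas[i].name);
        for (int s = 0; s < SLOTS; s++) printf(" %d", areas[i].used[s]);
        printf("%s\n", areas[i].online ? "" : "  (offline)");
    }
    return 0;
}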

One of the other issues was possibly having a relatively small 3380 area behind a 3880-11/3880-21 controller "page cache" ... and then other areas that did cache bypass (like for spooling).

old email about the SYSPAG changes that were to ship in HPO3.4
https://www.garlic.com/~lynn/2011c.html#email860119

The cp67->CSC/VM references (and the resource manager blue letter) also mention "swaptable" migration. I had created an abbreviated virtual memory table for each utable/vmblok and could copy user-specific kernel storage control blocks into this virtual address space ... and then allow it to be paged in and out.

So in addition to drum->disk (high-speed to low-speed) page migration ... I could also migrate/page control tables/blocks. The virtual address table was a "segment table" with a pointer to each pagetable (for each segment, the segment table entry also had a valid/invalid flag). Contiguous following the pagetable was the swaptable ... one entry for each page ... which included various status flags for each virtual page, the virtual storage keys, and the location on secondary storage for paging. For segments that appeared to be inactive ... all of the segment's virtual pages could be migrated, the corresponding swaptable entries copied to the vmblok's stub virtual memory, the page&swaptable storage deallocated, and the corresponding segment table entry cleared with the invalid flag set (reducing the real storage footprint for users that had gone inactive).
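
A hypothetical sketch of that layout (invented names, not the actual CP/VM control blocks): a per-user segment table whose entries point at a pagetable with the swaptable contiguous after it; migrating an inactive segment saves the swaptable entries into a small per-user stub, frees the page/swap table storage, and marks the segment table entry invalid.

/* migrate an inactive segment's swaptable to a stub and invalidate it */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGES_PER_SEG 16

struct swapent {                 /* one entry per virtual page          */
    unsigned flags;              /* status flags                        */
    unsigned char key;           /* virtual storage key                 */
    unsigned backing_slot;       /* location on secondary storage       */
};

struct segblock {                /* pagetable + contiguous swaptable    */
    unsigned long pte[PAGES_PER_SEG];
    struct swapent swap[PAGES_PER_SEG];
};

struct segent {                  /* segment table entry                 */
    int invalid;                 /* set once the segment is migrated    */
    struct segblock *block;      /* NULL after migration                */
};

static void migrate_segment(struct segent *se, struct swapent *stub)
{
    memcpy(stub, se->block->swap, sizeof se->block->swap);
    free(se->block);             /* release page & swap table storage   */
    se->block = NULL;
    se->invalid = 1;             /* segment table entry now invalid     */
}

int main(void)
{
    struct segent seg = { 0, calloc(1, sizeof(struct segblock)) };
    seg.block->swap[0].backing_slot = 1234;  /* pretend page 0 is on drum */

    struct swapent stub[PAGES_PER_SEG];      /* per-user stub area        */
    migrate_segment(&seg, stub);

    printf("segment invalid=%d, page 0 backing slot remembered as %u\n",
           seg.invalid, stub[0].backing_slot);
    return 0;
}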

this email
https://www.garlic.com/~lynn/2006v.html#email731212

references getting two BU co-op students to help me with the cp67->vm370 migration. One of the students then graduated and joined IBM YKT. The other student went to work for a vm370 (originally cp67) commercial time-sharing company in the Cambridge area.

Some number of the virtual machine based online, commercial time-sharing services had done loosely-coupled (cluster) support. HONE had done this after the consolidation of the US HONE datacenters in silicon valley in the last half of the 70s (US HONE was possibly the largest "single-system-image" cluster operation at the time, allowing load-balancing and fail-over as part of cluster operation).
https://www.garlic.com/~lynn/subtopic.html#hone

At the service bureau, the former BU co-op extended the swaptable migration to include all the control blocks for a user ... this allowed scheduling a processor to be taken offline (like for PM) with non-disruptive migration of all its users to some other processor in the complex. HONE was primarily a work-week operation, so it was relatively easy to schedule downtime for things like preventive maintenance on weekends. However, some of the commercial online time-sharing services were getting into 7x24 operation ... and being able to provide for non-disruptive adding/removing of hardware was becoming increasingly important.

--
virtualization experience starting Jan1968, online at home since Mar1970

Which building at Berkeley?

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Which building at Berkeley?
Newsgroups: alt.folklore.computers
Date: Mon, 28 Mar 2011 21:04:02 -0400
jolomo@gmail.com (Joe Morris) writes:
I might have a chance to visit the UC Berkeley campus this year. Anybody know which building(s?) Mr. Joy, Mr. McKusick and the rest did most of their work in? Hopefully the buildings are still around

I'm sure there's probably a giant "vi" historical marker, right?


was there 2yrs ago for Gray's celebration:
http://www.eecs.berkeley.edu/IPRO/JimGrayTribute/
https://web.archive.org/web/20080616153833/http://www.eecs.berkeley.edu/IPRO/JimGrayTribute/pressrelease.html

eecs history
http://www.eecs.berkeley.edu/department/history.shtml

bsd done by csrg
https://en.wikipedia.org/wiki/Computer_Systems_Research_Group
bsd history
http://oreilly.com/catalog/opensources/book/kirkmck.html
http://ewh.ieee.org/sb/karachi/pnec/osh/bsd.html

mentions joy office 4th flr evans hall:
http://www.salon.com/technology/fsp/2000/05/16/chapter_2_part_one/print.html

also mentions computer room, evans hall and sitting around cory hall

evans hall (following includes quote about VI):
https://en.wikipedia.org/wiki/Evans_Hall_%28UC_Berkeley%29

EECS in cory & CS in soda
http://www.eecs.berkeley.edu/Directions/

Evans in above bottom of map

large map
http://berkeley.edu/map/maps/large_map.html

--
virtualization experience starting Jan1968, online at home since Mar1970

Collection of APL documents

From: lynn@garlic.com (Lynn Wheeler)
Date: 28 Mar, 2011
Subject: Collection of APL documents
Blog: IBM Historic Computing
re:
https://www.garlic.com/~lynn/2011e.html#58 Collection of APL documents
https://www.garlic.com/~lynn/2011e.html#63 Collection of APL documents
https://www.garlic.com/~lynn/2011e.html#72 Collection of APL documents

Date: 01/17/79 10:41:48
From: wheeler

I've been thru jjjjjj once. I had asked to get a copy of their public workspaces (APLSV) to start looking at the conversion effort. I 1st was directed to jjjjjj and then jjjjjj's manager. They were very concerned that I'd be moving APL service off of their machine onto someother and they would loose customers. (the research system also runs on two 3031s in the engineering labs and anything I do here shows up over there).

after I sent that message to you, I finally found the IBM APL public librarian in Sterling F. He sent me about 100 'APLSV' workspaces which have been converted to VSAPL. They are now also available in the engineering labs, and it looks like jjjjjj is beginning to loose some of his APLSV customers. Since we installed VM in the engineering labs, I've been accused of trying to subvert the GPD division.


... snip ... top of post, old email index

"jjjjjj" was running an MVS APLSV service in the disk division datacenter.

I had rewritten IOS so that it was bulletproof and never failed, so they could operate the disk development test machines in an operating system environment (allowing on-demand, concurrent, anytime testing instead of dedicated, pre-scheduled, around-the-clock, stand-alone testing; at one point they had tried using MVS but found it had a 15min MTBF in that environment).

misc. past posts getting to play disk engineer in bldgs. 14 & 15
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

What is your most memorable Mainframe security bug, breach or lesson learned?

From: lynn@garlic.com (Lynn Wheeler)
Date: 29 Mar, 2011
Subject: What is your most memorable Mainframe security bug, breach or lesson learned?
Blog: Mainframe Exports
During the future system period in the 70s, there was a corporate effort for specially secured internal online vm370 systems so that future system architecture documents could only be viewed on local, channel-attached 3270s (with no provisions for hardcopy, downloading, etc; these were "real" 3270s, before terminal emulation). This was somewhat in response to the architecture document for 370 virtual memory leaking to an industry publication (before the announcement of 370 virtual memory). Another response (to the 370 virtual memory document leaking) had been to retrofit all corporate copying machines with a unique serial number that would appear on all pages copied.

Anyway, I was scheduled to have dedicated weekend time in one of the machine rooms containing such an operation. I stopped by friday afternoon to make preparations and they were so proud of their specially enhanced system that they had to show it off and proclaim that even I couldn't break it (even if left in the machine room alone). I'm sorry to say I rose to the bait and said it would take less than five minutes; most of that spent disabling all external access to the machine. Then I proceeded to modify a kernel storage byte (from the front console) ... which turned the branch on incorrect password into a no-op (making everything entered "valid"). I observed that the countermeasures would require (at least) an encrypted filesystem and some sort of authentication process for using the machine console.

misc. past posts mentioning future system
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

History of APL -- Software Preservation Group

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 29 Mar, 2011
Subject: History of APL -- Software Preservation Group
Blog: Greater IBM
Anybody aware of surviving HONE APL applications?

There was post to "IBM Historic Computing" that the computer history museum's software preservation group is doing some work on APL.
http://www.softwarepreservation.org/projects/apl

In the discussion I mentioned HONE was possibly the largest APL service ... several posts in the thread:
https://www.garlic.com/~lynn/2011e.html#58 ,
https://www.garlic.com/~lynn/2011e.html#63 ,
https://www.garlic.com/~lynn/2011e.html#72

They contacted the APL group but the group was unaware of any (surviving) HONE APL applications

from IBM Jargon:
HONEhead - n. One of a select few in the Branch Office who, through the use of the office HONE (Hands-On Network Environment) terminal, can always find the answer to even the most obscure question. The first symptom usually noted is frequent missing of lunch to scrounge for new Product announcements on the system. Hard cases have at least one userid on every HONE machine in the network. n. A member of the HONE system support staff who believes that the answer to every question should be on the HONE system, and that there should be a minimum of five menus associated with finding any answer.

... snip ...

recent thread about having used the configurator for the 3725 ... and trying to use the absolute best possible 3725 numbers ... in a comparison with the Series/1. I was then taking lots of heat from upper executives about the comparison "being wrong" ... even though I had spent several months passing it around to all sorts of technical experts for review/vetting. old email
https://www.garlic.com/~lynn/2011e.html#email870218
in this post about sna/vtam misinformation
https://www.garlic.com/~lynn/2011e.html#32

I had a hobby of shipping & supporting enhanced operating systems for internal datacenters ... HONE being a long-time "customer" dating back to its early days using cp67 ... HONE had also asked me to do some number of the early overseas HONE clones for them ... one of the first was when EMEA hdqtrs moved to Paris.

Part of (80s) 3725 comparison presentation that I made to the SNA architecture review board (ARB) in Raleigh:
https://www.garlic.com/~lynn/99.html#67

The communication group had such a stranglehold on the datacenter that a senior disk engineer got a talk scheduled at an internal, world-wide, annual communication group conference and opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The issue was that the communication products so throttled access to the datacenter ... that huge amounts of data were starting to leak out of (flee) the datacenter to more distributed-computing-friendly platforms. The disk division could see the leading edge of this in the drop in disk sales. The disk division had come up with a number of products to solve the problem, but since the communication group had strategic ownership of everything that crossed the datacenter walls ... they were constantly able to block their introduction.

The effects continued to accelerate, leading to a period in the 90s when the demise of the mainframe was being predicted. Twenty-some years later, there is now progress on what the disk division had been trying to do in the 80s (and the demise of the disk division has come to pass).

misc. related past posts
https://www.garlic.com/~lynn/subnetwork.html#emulation

--
virtualization experience starting Jan1968, online at home since Mar1970

New job for mainframes: Cloud platform

Refed: **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 30 Mar, 2011
Subject: New job for mainframes: Cloud platform
Blog: Greater IBM
New job for mainframes: Cloud platform
http://www.computerworld.com/s/article/9214913/New_job_for_mainframes_Cloud_platform

from above:
As companies take steps to develop private clouds, mainframes are looking more and more like good places to house consolidated and virtualized servers. Their biggest drawback? User provisioning is weak.

... snip ...

linkedin open group:
http://lnkd.in/cNF-uZ
http://lnkd.in/F6X_3Y

I've repeatedly mentioned that virtual machine based cloud operations go back to the 60s ... some past posts
https://www.garlic.com/~lynn/submain.html#timeshare

... and the largest such operation in the 70s & 80s was the internal, world-wide sales & marketing support HONE system. misc. past posts mentioning HONE
https://www.garlic.com/~lynn/subtopic.html#hone

CP67 did a couple things in the 60s to help open up 7x24 operation.

At the time, mainframes were leased and had monthly shift charges based on the processor meter (which ran whenever the processor &/or channels were active). Early deployments tended to have relatively light off-shift usage. One of the tricks was a terminal channel program sequence that left the line open to accept incoming characters but wouldn't run the channel (& processor meter) when no characters were arriving.

Another was significantly improving operator-less/dark-room off-shift operation to minimize operating costs during lightly used off-shift periods.

CP67 was enhanced to automatically take a "dump" (to disk) and re-ipl/re-boot after a failure ... coming back up and available for service. One of the issues was that the growing number of service virtual machines (virtual appliances) still required manual restart. I then did the "autolog" command, originally for automatic benchmarking (it could run a large number of unattended benchmarks with a system reboot between operations) ... discussed here:
https://www.garlic.com/~lynn/2010o.html#48

It then started being used for automatic startup of service virtual machines ... and after conversion from cp67 to vm370 ... the product group then picked up a number of CSC/VM features for VM370 release 3. old email refs:
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750827

This (recent) post in the (linkedin) IBM Historic Computing group discusses loosely-coupled (cluster), single-system-image, load-balancing & fail-over support done during the 70s by a number of large virtual-machine-based service operations (including HONE). Also, in the 70s, at least one virtual-machine based online commercial service bureau provided for migrating active users between processors in a loosely-coupled (cluster) configuration ... supporting non-disruptive removal of a processor in the cluster for things like scheduled downtime for preventive maintenance.
https://www.garlic.com/~lynn/2011e.html#79
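
A hypothetical sketch of the load-balancing/fail-over idea (invented structures, not the actual HONE code): logins are placed on the least-loaded processor in the cluster, and when a processor is removed for maintenance its users are re-placed on the survivors.

/* place users on the least-loaded online processor; re-place on removal */
#include <stdio.h>

#define NPROCS 3
#define NUSERS 8

static int assignment[NUSERS];   /* which processor each user is on, -1 = none */
static int online[NPROCS] = { 1, 1, 1 };

static int least_loaded(void)
{
    int load[NPROCS] = { 0 };
    for (int u = 0; u < NUSERS; u++)
        if (assignment[u] >= 0) load[assignment[u]]++;
    int best = -1;
    for (int p = 0; p < NPROCS; p++)
        if (online[p] && (best < 0 || load[p] < load[best])) best = p;
    return best;
}

int main(void)
{
    for (int u = 0; u < NUSERS; u++) assignment[u] = -1;

    for (int u = 0; u < NUSERS; u++)         /* balance logins across cluster */
        assignment[u] = least_loaded();

    online[1] = 0;                           /* take processor 1 down for PM  */
    for (int u = 0; u < NUSERS; u++)         /* migrate its users elsewhere   */
        if (assignment[u] == 1) assignment[u] = least_loaded();

    for (int u = 0; u < NUSERS; u++)
        printf("user %d -> processor %d\n", u, assignment[u]);
    return 0;
}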

In the mid-70s, the internal US HONE datacenters had been consolidated in silicon valley. Then in the early 80s, somewhat in response to an earthquake ... the HONE cluster support was extended with a replicated datacenter in Dallas and then a 3rd in Boulder.

--
virtualization experience starting Jan1968, online at home since Mar1970

The first personal computer (PC)

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The first personal computer (PC)
Newsgroups: alt.folklore.computers
Date: Wed, 30 Mar 2011 16:11:56 -0400
jmfbahciv <See.above@aol.com> writes:
Our lawyers wanted us to print the copyright on the TTY, IIRC, at login time. Some of their ideas got really bizarre. We told them absolutely not. I don't know if that changed their minds or if somebody else began thinking aobut all the TTY screens on all customer sites suddenly claiming everything as DEC's.

think of a system coming up and every little piece of separate code issuing a copyright on the CTY or every RUN doing TTY typeout. Data enterers would have a sit-in in Maynard.


Internal VM370 logon screen ... sample here for hyperchannel channel extender effort that I did for the IMS group
https://www.garlic.com/~lynn/lhwemail.html#oldpicts
old post/reference
https://www.garlic.com/~lynn/2008m.html#20

The logon screen got text added saying something like "for official business purposes only".

we managed locally to get it changed to "for management approved use only" (aka a local manager could decide w/o having to get a corporate ruling)

the distinction being related to things like use of demonstration programs (like ADVENTURE).

--
virtualization experience starting Jan1968, online at home since Mar1970

The first personal computer (PC)

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The first personal computer (PC)
Newsgroups: alt.folklore.computers
Date: Thu, 31 Mar 2011 09:25:09 -0400
jmfbahciv <See.above@aol.com> writes:
I actually opened some old DEC manuals yesterday. I had forgotten that we had also included a long paragraph about not guaranteeing that the manual's contents reflected the software. This was done because publishing manuals could lag software ships by as much as 3 or 4 years. I worked very hard (and others did) over 10 years to close that window. My first step was to get everything in machine-readable bits.

one of the first mainstream manuals moved to cms script was the principles of operation. the "full" manual was the "architecture redbook" (for the red 3-ring binders it was distributed in). The redbook was twice as large as the POP ... with the POP sections intermixed with the corresponding architecture sections (including discussions of alternatives and the justification for what was chosen). Command line options would select whether the whole manual was generated or just the POP subsections. It was somewhat easier to keep the POP subsections up-to-date since they were co-located with their architecture sections.

various POPs
http://www.bitsavers.org/pdf/ibm/360/princOps/
http://www.bitsavers.org/pdf/ibm/370/princOps/
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/CCONTENTS

cms script was an evolution from CTSS runoff, with "dot" formatting commands. then GML was invented at the science center in 1969 and GML tag support was added to script (a decade later GML morphed into the ISO standard SGML, and after another decade it morphed into HTML) ... misc. past posts
https://www.garlic.com/~lynn/submain.html#sgml

--
virtualization experience starting Jan1968, online at home since Mar1970

Scientists use maths to predict 'the end of religion' - Repost

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Scientists use maths to predict 'the end of religion' - Repost
Newsgroups: alt.folklore.computers
Date: Thu, 31 Mar 2011 13:44:46 -0400
greymausg writes:
Rudyard Kipling, "Oh ye who travel the narrow way By tophet flare to judgement day be gentle when the heathen pray to Buddha at Kamakura" (From memory, may be in error)

... for even more topic drift ... we were introduced to Kamakura in the 90s when their offices were still in a state-sponsored "incubator" (a converted school building). one of the founders was supposedly involved in the late-80s analysis that citi's adjustable rate mortgage portfolio could take down the institution (prompting it to unload the portfolio, get out of the mortgage business, and need a private bailout to stay in business). recent reference:
https://www.garlic.com/~lynn/2011e.html#41 On Protectionism
which references this long-winded post from early '99
https://www.garlic.com/~lynn/aepay3.htm#riskm

they have since moved into a (real?) office bldg downtown, as well as offices elsewhere.
http://www.kamakuraco.com/

Kamakura raised several warnings during the economic bubble about what was going on.

--
virtualization experience starting Jan1968, online at home since Mar1970

Would mainframe technology be relevant in the age of cloud computing?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 02 Apr, 2011
Subject: Would mainframe technology be relevant in the age of cloud computing?
Blog: Mainframe Experts
Mainframe iron was the workhorse of the 60s, 70s, & much of the 80s ... including an earlier flavor of "cloud computing" (mentioned upthread). In the late 80s, the communication group was becoming a major inhibitor with the stranglehold it had on the datacenter (it owned strategic responsibility for everything that crossed the datacenter walls). At that time, a senior disk engineer got a talk scheduled at the world-wide, annual, internal communication group conference and opened with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division saw the leading edge of data fleeing the datacenter (to other platforms that were significantly more distributed-computing friendly) because of the communication group stranglehold. The disk division had come up with several products to correct the situation, but they were constantly shut down by the communication group (again, owning strategic responsibility for everything that crossed the datacenter walls).
https://www.garlic.com/~lynn/subnetwork.html#emulation

In that timeframe, the communication group also had an internal program of misinformation ... part of the campaign to convert the internal network (which had been larger than the arpanet/internet from just about the beginning until sometime late '85 or early '86)
https://www.garlic.com/~lynn/subnetwork.html#internalnet

... to sna/vtam. the misinformation included things like claiming PROFS was a VTAM application (at least that was one of the things being told to the upper executives). There was also quite a bit being spread about the applicability of SNA/VTAM for the NSFNET backbone (the "operational" precursor of the modern internet).
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
and
https://www.garlic.com/~lynn/subnetwork.html#internet

The recent mainframe has made enormous progress since that period ... but it has a couple decades of past reputation to overcome.

I had a project I called HSDT which was having some equipment built on the other side of the pacific. The friday before a visit, Raleigh sent out an announcement for a new (online) "high-speed" discussion group which included the following definitions:


low speed:       <9.6kbits
medium-speed:    19.2kbits
high speed:      56kbits
very high speed: T1 (1.5mbits)

the following monday, on conference room wall (on the other side of the pacific):

low speed:       <20mbits
medium speed:    100mbits
high-speed:      200-300mbits
very high speed: >600mbits

misc. past posts mentioning HSDT
https://www.garlic.com/~lynn/subnetwork.html#hsdt

the original mainframe tcp/ip implementation had some issues, potentially consuming a full 3090 processor while getting 44kbytes/sec sustained throughput. I did the enhancements to the product for RFC1044 and in some testing at Cray Research got sustained channel thruput between a Cray and a 4341 using only a modest amount of the 4341 cpu (possibly a 500 times improvement in the instructions executed per byte moved).

misc. past posts mentioning doing rfc1044 support
https://www.garlic.com/~lynn/subnetwork.html#1044

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe passwords synced to active directory

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Mainframe passwords synced to active directory.
Newsgroups: bit.listserv.ibm-main
Date: 4 Apr 2011 08:20:36 -0700
mellonbill@YAHOO.COM (Bill Johnson) writes:
We are trying to sync up (and expand) our mainframe passwords to match what the user has in active directory. So far so good. The problem is when the AD password is longer than 8 characters. Anyone shed some light as to how this can be handled?

active directory trivia ... based on kerberos
http://technet.microsoft.com/en-us/library/bb742516.aspx

original implementation for active directory was done under contract by one of the companies providing commercial kerberos products.

over the years ... active directory drifted from its kerberos base ... some discussion on interoperability
http://www.centrify.com/blogs/tomkemp/integrating_mit_kerberos_with_active_directory.asp
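
for illustration only (the realm and host names below are made up, and none of this comes from the referenced articles), an MIT kerberos client can treat an active directory domain controller as just another KDC/realm with krb5.conf entries along the lines of:

[libdefaults]
    default_realm = UNIX.EXAMPLE.COM

[realms]
    UNIX.EXAMPLE.COM = {
        kdc = kdc1.unix.example.com
    }
    AD.EXAMPLE.COM = {
        kdc = dc1.ad.example.com
    }

[domain_realm]
    .ad.example.com = AD.EXAMPLE.COM

cross-realm use still has to be configured on the AD side as well (and the encryption types supported by both ends have to overlap), which is where much of the interoperability discussion tends to land.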

kerberos
https://en.wikipedia.org/wiki/Kerberos_%28protocol%29

part of MIT's project athena
https://en.wikipedia.org/wiki/Project_Athena

with joint funding by DEC and IBM to the tune of $25M each. started in the early days of IBM's ACIS and getting much more active with universities. we used to drop by Project Athena periodically as part of corporate review of what was going on (was there for early discussions on how multiple realm interoperability would work).

article about kerberos on mainframe (seamless interoperability with RACF)
http://www.mainframezone.com/it-management/kerberos-on-z-os-teaching-an-old-dog-new-tricks/P2

much later, at a presentation for a SAML product multi-realm deployment (coalition forces) ... I happened to observe/mention that SAML messages & message flows look nearly the same as Kerberos (with the format of the message contents being XML)
https://en.wikipedia.org/wiki/SAML_2.0

the speaker was somewhat defensive, saying that there are only a limited number of ways to do a multi-realm implementation.

--
virtualization experience starting Jan1968, online at home since Mar1970

PDCA vs. OODA

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 04 Apr, 2011
Subject: PDCA vs. OODA.
Blog: Boyd Strategy
How do you compare Plan-Do-Check-Adjust, PDCA, with Observe-Orient-Decide-Act, OODA?

Putting "PLAN" first, brings up connotations of MBA programs and large consulting houses with formula methodology ... like plan-for-a-plan ... can do a plan w/o first needing any information; which makes me biased in favor of observe-orient-decide ... before act.

for other topic drift: Making decisions is the third way we learn, research shows
http://www.physorg.com/news206005936.html

there is this: Study examines how brain corrects perceptual errors
http://www.physorg.com/news/2011-03-brain-perceptual-errors.html

and just now: What the brain saw
http://www.physorg.com/news/2011-03-brain.html

my bias against various management methodologies also shows up in this recent post in (linkedin) "Greater IBM" (current & former IBMers) ... (references "fast track", to be or to do, "mongolian hordes")
https://www.garlic.com/~lynn/2011d.html#12

my bias tended to be because of (frequently "fast-track") executives with formula, school-taught PLANs that sprang from nothing: no observe (data/information) and no orient (understanding).

PDCA at least has a backend with follow-up (possibly not everything iterates); folklore is that a common trait of successful silicon valley startups is that they had completely changed their business plan at least once in the first two years. Criticism of inadequate followup/backend has also been used to obfuscate the lack of front-end observe&orient input into the PLAN.

I sponsored Boyd's briefings at IBM in the 80s and we talked some about OODA-loop application to business. A meta-OODA-loop where "decide" maps to PLAN ... and then daily or moment-by-moment micro-OODA-loops, somewhat analogous to PDCA having OODA-loops within each phase.

My other bias was having done a lot of dynamic adaptive, iterative/feedback computer resource management algorithms as an undergraduate in the 60s (shipped in IBM products even before I graduated). Continuing through the 70s and 80s, I observed lots of technologies that made decisions based on point events w/o adequate information and context; including point events that had low correlation with the matter at hand. I had to start off with extensive changes for adequate instrumentation (observe) and then figuring out context for what was measured (orientation) ... before laying out a framework for decisions.
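
purely as an illustrative sketch (in python, nothing like the actual code that shipped; the "system" object and its methods are made-up stand-ins), the observe/orient/decide/act shape that dynamic adaptive feedback resource management takes:

# illustrative only -- made-up "system" interface, not the real scheduler
import time

def observe(system):
    # instrumentation: sample raw counters
    return {"cpu_busy": system.cpu_busy(),
            "paging": system.page_rate(),
            "queue": system.run_queue_len()}

def orient(sample, history):
    # context: smooth point events against recent history so a single
    # spike doesn't drive the decision
    history.append(sample)
    window = history[-10:]
    return {k: sum(s[k] for s in window) / len(window) for k in sample}

def decide(context, cpu_target=0.85, paging_limit=100.0):
    # policy: derive an adjustment from the smoothed context, not a raw event
    if context["paging"] > paging_limit:
        return -1                      # thrashing: shed multiprogramming level
    return +1 if context["cpu_busy"] < cpu_target else 0

def act(system, adjustment):
    # apply the decision (here, adjust the multiprogramming level)
    system.set_mpl(system.mpl() + adjustment)

def control_loop(system, interval=1.0):
    history = []
    while True:
        act(system, decide(orient(observe(system), history)))
        time.sleep(interval)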

one of steele's recent with some quotes about needing information
http://www.phibetaiota.net/2011/03/reference-at-what-cost-intelligence-on-ethics/

My connotation with "PLAN" has been "point" event (aka "fall" plan). I've seen "framework" be used much more for continuous (although it seems to cycle in & out of favor). One could contend that business schools and graduates have enormous vested interest in "PLAN" paradigm ... so they would tend towards adding agile & continuous characteristics to "PLAN" (as opposed to switching to something else).

For an OODA-loop framework, military tactics operate within a strategy framework.

I had participated some in the early 90s US auto industry C4 taskforce ... about remaking themselves to deal with foreign competition. The US makers had traditionally been on a 7-8yr product cycle (cosmetic changes in new model yrs, sometimes two different overlapped product cycles offset by 3-4yrs). They highlighted that the foreign competition had first gone to a 3-4yr cycle, had dropped to an 18month cycle, and were in the process of dropping below the traditional annual model yr cycle. I had (also) periodically visited their consumer electronic manufacturers in the 80s ... and had already seen their product cycles drop to 90days (well below the traditional annual product cycle). The C4 taskforce meetings spelled out everything that had to be done to respond ... but as events showed, they were unable to achieve those objectives (for at least another two decades, in part because of the enormous vested interests).

misc. past Boyd references:
https://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Fresher

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Mainframe Fresher
Newsgroups: bit.listserv.ibm-main
Date: 4 Apr 2011 15:37:33 -0700
steve@TRAINERSFRIEND.COM (Steve Comstock) writes:
But I am curious as to why the mainframe doesn't just go away: there must be one or more z/OS applications that the Windows folks just can't beat. Can you describe what applications are keeping the mainframe around? And why Windows folks can't make it go away?

lots of online/real-time stuff in the 70s & 80s was adding a frontend that started an operation ... but left it to the (existing, frequently cobol) legacy batch operation to complete (/settle) ... moved to the overnight batch window.

in the 90s, the overnight batch window was becoming a major bottleneck ... globalization was both increasing workload ... as well as adding pressure to significantly decrease the length of the overnight batch window.

in this period, some number of institutions spent billions on business process reengineering that would leverage massive parallelization and "killer micros" to implement straight through processing (running each operation straight through to completion & eliminating the need for the overnight batch window). however, it turned out that they used some technology that wasn't adequately vetted ... and going into deployment they found that it had overhead 100 times that of the cobol batch (and wouldn't scale) ... totally swamping the anticipated (parallel) throughput improvements (with 100 times the per-transaction pathlength, even perfectly scaling 100-way parallelism would only break even with the original batch throughput).

the resulting failures left huge scars on the industry and stalled reengineering efforts for possibly decades. I was involved in taking a whole new generation of parallelization to some industry bodies a couple years ago ... and while it initially met very positive acceptance ... as it moved up through the individual institutions ... it met quite a bit of resistance ... apparently even a decade later ... the scars from the failures were still fresh.

misc. posts mentioning overnight batch window &/or straight through processing:
https://www.garlic.com/~lynn/2011.html#19 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2011.html#42 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOSor Windows
https://www.garlic.com/~lynn/2011c.html#35 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#45 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011e.html#15 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011e.html#19 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened

--
virtualization experience starting Jan1968, online at home since Mar1970

PDCA vs. OODA

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 04 Apr, 2011
Subject: PDCA vs. OODA.
Blog: Boyd Strategy
re:
https://www.garlic.com/~lynn/2011e.html#90 PDCA vs. OODA.

Boyd would tell that a big problem with American business was the ww2 ("young") army officers coming of age (as executives) in the business world. Going into WW2, the US had to deploy large numbers with little or no training ... so to leverage the few skilled resources available ... a rigid, top-down command & control structure was created. The US win strategy in WW2 was rigid, top-down command&control of massive, overwhelming resources.

analogy from IBM Jargon:
Mongolian Hordes Technique - n. A software development method whereby large numbers of inexperienced programmers are thrown at a mammoth software project (instead of deploying a small team of skilled programmers). First recorded in 1965, but popular as ever in the 1990s.

... snip ...

Inside IBM, the fall plan would include resource allocation for internal development ... including things like 3270 terminals.

I had gotten blamed for online computer conferencing in the late 70s & early 80s, on the internal network (larger than the arpanet/internet from just about the beginning until possibly late '85 or early '86) and one of the frequent discussions was the lack of resources for internal developers ... which somewhat culminated in a tome by a departing employee titled "MIP Envy" ... a version
https://www.garlic.com/~lynn/2007d.html#email800920

So about that time, there was a small uptick in the fall plan (annual) 3270 terminal allocation (needed by internal developers to do their job) ... and then there came a rapidly spreading corporate rumor that some of the top executives had started using PROFS to communicate. As a result there was a mad rush by nearly all of corporate middle-management to acquire a 3270 terminal for their desk (and a PROFS ID) ... effectively as a status symbol. Most of them never actually used the terminal (they had secretaries for that) ... but it did manage to co-opt the majority of that year's 3270 terminal allocation (that had been intended for internal developers).

from IBM Jargon
PROFS - profs n. Professional Office System. A menu-based system that provides support for office personnel such as White House staff, using IBM mainframes. Acclaimed for its diary mechanisms, and accepted as one way to introduce computers to those who don't know any better. Not acclaimed for its flexibility. PROFS featured in the international news in 1987, and revealed a subtle class distinction within the ranks of the Republican Administration in the USA. It seems that Hall, the secretary interviewed at length during the Iran-Contra hearings, called certain shredded documents PROFS notes as do IBMers who use the system. However, North, MacFarlane, and other professional staff used the term PROF notes. v. To send a piece of electronic mail, using PROFS. PROFS me a one-liner on that. A PROFS one-liner has up to one line of content, and from seven to seventeen lines of boiler plate. VNET

... snip ...

trivia question: what is the classification level of (executive branch) PROFS backup tapes? ... when they might contain every possible known classification (hint: this can come into play when the legislative branch subpoenas all PROFS notes related to specific topics)

--
virtualization experience starting Jan1968, online at home since Mar1970

Itanium at ISSCC

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Itanium at ISSCC
Newsgroups: comp.arch
Date: Tue, 05 Apr 2011 07:57:06 -0400
Quadibloc <jsavard@ecn.ab.ca> writes:
and I may have misunderstood it. It certainly _is_ true that even if a mainframe is a lot like *one* kind of micro, the reliable database server, people wanting to do number-crunching, for example, don't use reliable database servers - they may use a single desktop micro, or they may use some type of compute cluster, but it won't be closely related to present-day mainframes.

But my point - that they are closely related to reliable database servers - means that it isn't clear to me that they are both "not going away" and in "a niche from which they will not emerge". Unless there are factors that make the competition between the two kinds of machine less than a fully direct competition.


there are some huge backend legacy database & processing operations ... that aren't particularly consumer facing. in the 70s & 80s, some number of these had front-end, online/real-time operations added that started some process ... but it was left to legacy batch operations to complete ... moved to the overnight batch window.

in the 90s, globalization and other things were putting severe strain on the overnight batch window ... increasing workload and pressure to reduce the length/size of the window. several institutions spent billions to re-engineer the overnight batch window with straight through processing ... leveraging "parallelizing" technology and large numbers of "killer micros". However, the technology wasn't very well vetted and many started deployments before it was realized that the technology had 100 times the overhead (of cobol batch) and wouldn't scale ... totally swamping the throughput improvements that had been anticipated (from leveraging large numbers of "killer micros").

the scars from these failures run deep. a couple years ago, I was involved in taking a new generation of parallelizing technology to some industry groups ... which was accepted fairly well (for a new round of straight through processing) ... but as it moved up through the various member institutions, it was met with increasing resistance ... apparently the scars from the 90s failures may take decades to heal (or require those that experienced the failures to be replaced/retire).

in the early 90s, I was involved in some database operations involving large clusters of non-mainframe systems ... old reference
https://www.garlic.com/~lynn/95.html#13

and was asked to write a section for the corporate continuous availability strategy document ... which got pulled after both the Rochester (as/400) and POK (mainframe) organizations complained (that they couldn't meet the objectives) ... which included support for redundant distributed operation (when I was out marketing in that period, I had coined the terms disaster survivability and geographic survivability). misc. past posts
https://www.garlic.com/~lynn/submain.html#available

a decade earlier, Jim Gray had published a study about outages shifting from hardware to other factors (software errors, people mistakes, environmental).
https://www.garlic.com/~lynn/grayft84.pdf

one of the large financial transaction processing operations had attributed their several year, 100% availability to: IMS hot-standby (replicated at geographic distances) and automated operator (eliminating human mistakes).

with regard to early 90s cluster scale-up ... some old email
https://www.garlic.com/~lynn/lhwemail.html#medusa

possibly within hrs of the last email in above:
https://www.garlic.com/~lynn/2006x.html#email920129

the scale-up was transferred and we were told we couldn't work on anything with more than four processors. a couple weeks later it was announced for the scientific and technical (aka numerical/compute intensive) market *only*
https://www.garlic.com/~lynn/2001n.html#6000clusters1
and then something about being caught by surprise
https://www.garlic.com/~lynn/2001n.html#6000clusters2

some more in this recent thread from (linkedin) "Greater IBM" (for current and former IBM employees)
https://www.garlic.com/~lynn/2011d.html#7 IBM Watson's Ancestors: A Look at Supercomputers of the Past
https://www.garlic.com/~lynn/2011d.html#24 IBM Watson's Ancestors: A Look at Supercomputers of the Past
https://www.garlic.com/~lynn/2011d.html#29 IBM Watson's Ancestors: A Look at Supercomputers of the Past
https://www.garlic.com/~lynn/2011d.html#40 IBM Watson's Ancestors: A Look at Supercomputers of the Past

and slightly older thread:
https://www.garlic.com/~lynn/2009p.html#43 From The Annals of Release No Software Before Its Time
https://www.garlic.com/~lynn/2009p.html#46 From The Annals of Release No Software Before Its Time

so a lot has been done to "old" mainframe iron over the past 20yrs ... to better meet those objectives ... and some of the current situation could be as much because of vendor maneuvering.

misc. past posts mentioning ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

coax (3174) throughput

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: coax (3174) throughput
Newsgroups: bit.listserv.ibm-main
Date: 5 Apr 2011 05:15:41 -0700
R.Skorupka@BREMULTIBANK.COM.PL (R.S.) writes:
What is the throughput of coax connection? I'm aware hat it's usually used for "hard" terminal connectivity, i.e. 3278 to 3174 and speed is simply good enough. However let's imagine PC with coax card connected to 3174 and IND$FILE transfer. What throughput can be expected?

comparison of 3277/3272 to 3278/3274 (3174 precursor). the big dropoff in 3278 thruput was because a lot of the electronics in the 3277 head had been moved back into the (shared) controller (reducing 3278 terminal manufacturing costs) ... drastically increasing the chatter over the coax (and reducing thruput and response). This was also seen later in upload/download speeds using 3277 emulation cards versus 3278 emulation cards (because of the difference in coax protocol/chatter):
https://www.garlic.com/~lynn/2001m.html#19 3270 protocol

3277 emulation had three times the upload/download thruput of 3278 emulation
https://www.garlic.com/~lynn/2009l.html#60 ISPF Counter

reference to possibly 15kbytes/sec
https://www.garlic.com/~lynn/2005r.html#17 Intel strikes back with a parallel x86 design

above are direct channel attached controllers.

for a little recent topic drift (not a direct channel attach 327x controller)
https://www.garlic.com/~lynn/2011e.html#88 Would mainframe technology be relevant in the age of cloud computing?

--
virtualization experience starting Jan1968, online at home since Mar1970

VM IS DEAD

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Subject: VM IS DEAD
Date: 05 Apr 2011
Blog: VM
from long ago and far away, from somebody familiar with Rochester running HPO in XA/ESA mode.

Date: 89/06/05 20:14:15
To: wheeler

VM IS DEAD

V     V   MM   MM
V     V   M M M M
 V   V    M  M  M
  V V     M     M
   V      M     M

IIIIIII    SSSSS
   I       S
   I       SSSSS
   I           S
IIIIIII    SSSSS

DDDDDD    EEEEEEE    AAAAA    DDDDDD
D     D   E         A     A   D     D
D     D   EEEE      AAAAAAA   D     D
D     D   E         A     A   D     D
DDDDDD    EEEEEEE   A     A   DDDDDD

(and here's why)

First, let me give you my definition of "living" and "dead":

L_I_V_I_N_G                        D_E_A_D

1) Product is owned by users       Product is owned by marketing,
   looking out for users' needs    planning, and accounting looking
                                   out for shareholder's needs

2) Customer is a user              Customer is a CEO or a DP manager

3) Environment is tailorable,      Environment is restrictive and
   flexible and consistent         unconforming; consistent, but
                                   un-natural

4) Interfaces designed for the     Interfaces designed only for the
   novice and the "power user"     non-expert

5) Implementation is optimized     Implementation is optimized to
   for productivity (get a job     "do everything" (make everyone
   done well)                      happy)

6) Works with other tools          Works only in "integrated
                                   environments"

7) Works for you                   Thinks for you (and is
                                   consistently dumber)

8) Help is information which is    Help is cute and simplistic- what
   helpful                         you already know

 9) Documentation is concise        Documentation is pretty

10) Authors are people              Authors are anonymous development
                                    processes

11) Changes are smooth and contain  Changes are forced and sudden and
    fresh function                  contain little new function

12) Supported by owners and         Supported by in-betweens
    authors

13) Problems are handled directly   Problems are handled privately
    and publicly                    through paper, red-tape and
                                    service organizations

14) Gurus are climbing on           Gurus are bailing out

15) When something better comes     When something better comes along
    along, it gracefully steps      it becomes a competitor, holding
    aside                           back progress


Note: If some of these contrasts seem contradictory or inconsistent, you may be right. Just remember: real death cannot be articulated; it is something smelled, not seen.

Exhibit A- V T A M / V M

A local tool turned into a product. To the end user, no plusses over the old implementation. Slower response. Hogs CPU. VTAM "LOGOFF" via SYS REQ required often when the ENTER key used to do just fine. DP operators happy, but users sad.

Exhibit B- V M / X A

A change for the sake of change. Tools broke. Productive features disappeared. "I CMS" 8 to 10 times a day. Unreadable/Unreceivable VMDUMPs litter the reader. Performance loss on key facilities. Friendly tools no longer on speaking terms.

Exhibit C- I D S S

A software manufacturer's paradise. A software developer's ____. Performance improved to bad in later releases. Vendor serviced. "Integrated" environment. Hard-coded PF keys for robots. Panel happy. Command interface (for administrators). Works with other tools (2 or 3). Reader-happy. Everyone's happy; little gets done well. You conform to it.

Exhibit D- P R O F S

A paper shuffler's tool which shuffles the mind too. Unlike anything else. "Integrated" environment. Comes with its own face. Works with no other tools (closed interface). A PF key for every day of the week! Thinks for you and even knows when you press the wrong keys. You conform to it (managers use it).

Exhibit E- C A L L U P

A telephone directory for everyone with time to kill. Panel happy. PF key happy. Everyone's happy. PROFS users must use it so you better like it.

Exhibit F- X E D I T

A very powerful editor. Takes a 600 line profile and a minidisk full of macros to make it work.

Closing remarks:

My VM expertise is above average, yet I know there are many who knew it before me and there are many who have known it better.

My first encounter with VM was from a TSO background. The first weeks of work on VM can not be called work. It was more like war. No panels. No options to select. However, it didn't take long to see that what I first thought was the plague of the corporation was actually the steroid for productivity. Tools were everywhere. Experts were intense. If the tool wasn't available, you could roll your own. New features and functions arrived daily in boxcars filled with productivity toys.

Best of all, VM liked me. It listened to me and was able to change in accordance with what worked best for me. Not that I was strange, but that I was human, with different preferences and tastes and needs based on the work I was doing the most.

Things are different now. The experts are leaving. The IS help line is manned by some who don't know what "HX" is. VM tools aren't growing- they're just being maintained. Changes are thoughtless and premature. The ability to tailor tools is still a standard, but new tools are forgetting the standards.

I want off. Every new release is an attack on freedom. Every new tool is an attempt to bend my mind to the lowest common IQ denominator. Creativity is now only for the few; the rest are just supposed to exist- not excel.

I confess for us all that changes are difficult pills to swallow. Change is hard for everyone. No one deliberately makes changes just to cause pain. I understand all that. But the changes which have come lately can't be swallowed. Even the meaning of change has changed. I used to use "change" to describe the difference between chilling winter and sunbeam spring. Now "change" means the difference between TELE and CALLUP. I used to look forward to change. I want to look forward to change.

... snip ... top of post, old email index

in the late 80s there was a presentation at the annual internal world-wide communication group conference that started out with the statement that the communication group was going to be responsible for the demise of the disk division (because of the stranglehold that the communication group had on the datacenters) ... and in the early 90s the company went into the red and there were rumors of the demise of mainframes ... recent references
https://www.garlic.com/~lynn/2011e.html#34 SNA/VTAM Misinformation
https://www.garlic.com/~lynn/2011e.html#52 IBM100 - Rise of the Internet
https://www.garlic.com/~lynn/2011e.html#83 History of APL -- Software Preservation Group
https://www.garlic.com/~lynn/2011e.html#88 Would mainframe technology be relevant in the age of cloud computing?

the above tome is from somebody who was familiar with Rochester running HPO in XA/ESA mode (Hudson Valley shoving out the internal tool and shutting down internal competition) ... HPO XA/ESA mode references:
https://www.garlic.com/~lynn/2011c.html#87 A History of VM Performance
https://www.garlic.com/~lynn/2011c.html#90 A History of VM Performance
https://www.garlic.com/~lynn/2011e.html#27 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#30 vm370 running in "XA-mode"

Jim Gray and I had done the original "TELE" ... this was before he left for Tandem. a few past references
https://www.garlic.com/~lynn/2006w.html#44 more secure communication over the network
https://www.garlic.com/~lynn/2009q.html#3 Arpanet
https://www.garlic.com/~lynn/2010d.html#61 LPARs: More or Less?

At the celebration for Jim held at Berkeley, somebody from Tandem mentioned having done the Tandem corporate online telephone book with Jim ... so I got up and mentioned having earlier done the IBM corporate online telephone books with Jim
https://www.garlic.com/~lynn/2008i.html#51 Microsoft versus Digital Equipment Corporation

A couple recent posts referring to PROFS
https://www.garlic.com/~lynn/2011e.html#57 SNA/VTAM Misinformation
https://www.garlic.com/~lynn/2011e.html#77 Internet pioneer Paul Baran
https://www.garlic.com/~lynn/2011e.html#88 Would mainframe technology be relevant in the age of cloud computing?
https://www.garlic.com/~lynn/2011e.html#92 PDCA vs. OODA

a few old posts about RED vis-a-vis XEDIT:
https://www.garlic.com/~lynn/2001m.html#22 When did full-screen come to VM/370?
https://www.garlic.com/~lynn/2002p.html#39 20th anniversary of the internet (fwd)
https://www.garlic.com/~lynn/2003d.html#22 Which Editor
https://www.garlic.com/~lynn/2004o.html#36 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2006n.html#55 The very first text editor
https://www.garlic.com/~lynn/2006u.html#26 Assembler question
https://www.garlic.com/~lynn/2007g.html#5 Call for XEDIT freaks, submit ISPF requirements
https://www.garlic.com/~lynn/2008h.html#43 handling the SPAM on this group
https://www.garlic.com/~lynn/2009c.html#52 THE runs in DOS box?
https://www.garlic.com/~lynn/2009c.html#54 THE runs in DOS box?
https://www.garlic.com/~lynn/2010i.html#36 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010j.html#11 Information on obscure text editors wanted

--
virtualization experience starting Jan1968, online at home since Mar1970


previous, , index - home