List of Archived Posts

2011 Newsgroup Postings (04/06 - 05/06)

coax (3174) throughput
Itanium at ISSCC
Car models and corporate culture: It's all lies
History of APL -- Software Preservation Group
Cool Things You Can Do in z/OS
Cool Things You Can Do in z/OS
New job for mainframes: Cloud platform
New job for mainframes: Cloud platform
New job for mainframes: Cloud platform
New job for mainframes: Cloud platform
History of APL -- Software Preservation Group
History of APL -- Software Preservation Group
New job for mainframes: Cloud platform
Car models and corporate culture: It's all lies
How is SSL hopelessly broken? Let us count the ways
Identifying Latest zOS Fixes
Jean Bartik, "Software" Pioneer, RIP
New job for mainframes: Cloud platform
21st century India: welcome to the smartest city on the planet
Jean Bartik, "Software" Pioneer, RIP
New job for mainframes: Cloud platform
WHAT WAS THE PROJECT YOU WERE INVOLVED/PARTICIPATED AT IBM THAT YOU WILL ALWAYS REMEMBER?
First 5.25in 1GB drive?
Fear the Internet, was Cool Things You Can Do in z/OS
Fear the Internet, was Cool Things You Can Do in z/OS
Fear the Internet, was Cool Things You Can Do in z/OS
First 5.25in 1GB drive?
First 5.25in 1GB drive?
US military spending has increased 81% since 2001
TCP/IP Available on MVS When?
TCP/IP Available on MVS When?
TCP/IP Available on MVS When?
At least two decades back, some gurus predicted that mainframes would disappear
At least two decades back, some gurus predicted that mainframes would disappear
Early mainframe tcp/ip support (from ibm-main mailing list)
At least two decades back, some gurus predicted that mainframes would disappear
Early mainframe tcp/ip support (from ibm-main mailing list)
At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
VM IS DEAD tome from 1989
At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
The first personal computer (PC)
CPU utilization/forecasting
At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
Massive Fraud, Common Crime, No Prosecutions
The first personal computer (PC)
At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
First 5.25in 1GB drive?
A brief history of CMS/XA, part 1
Dyadic vs AP: Was "CPU utilization/forecasting"
Dyadic vs AP: Was "CPU utilization/forecasting"
US HONE Datacenter consolidation
Are Americans serious about dealing with money laundering and the drug cartels?
The first personal computer (PC)
At least two decades back, some gurus predicted that mainframes would disappear
Are Americans serious about dealing with money laundering and the drug cartels?
Drum Memory with small Core Memory?
Are Tablets a Passing Fad?
Are Tablets a Passing Fad?
Drum Memory with small Core Memory?
Dyadic vs AP: Was "CPU utilization/forecasting"
Drum Memory with small Core Memory?
Mixing Auth and Non-Auth Modules
The IBM Selective Sequence Electronic Calculator
Are Americans serious about dealing with money laundering and the drug cartels?
The IBM Selective Sequence Electronic Calculator
Bank email archives thrown open in financial crash report
Old email from spring 1985
program coding pads
how to get a command result without writing it to a file
Z chip at ISSCC
how to get a command result without writing it to a file
program coding pads
Wylbur, Orvyl, Milton, CRBE/CRJE were all used (and sometimes liked) in the past
The IBM Selective Sequence Electronic Calculator
Wylbur, Orvyl, Milton, CRBE/CRJE were all used (and sometimes liked) in the past
PIC code, RISC versus CISC
Overloaded acronyms
Wylbur, Orvyl, Milton, CRBE/CRJE were all used (and sometimes liked) in the past
DCSS ... when shared segments were implemented in VM
TSO Profile NUM and PACK
TSO Profile NUM and PACK
Bank email archives thrown open in financial crash report
program coding pads
program coding pads
SV: USS vs USS
Bank email archives thrown open in financial crash report
Gee... I wonder if I qualify for "old geek"?
Court OKs Firing of Boeing Computer-Security Whistleblowers
The first personal computer (PC)
CFTC Limits on Commodity Speculation May Wait Until Early 2012

coax (3174) throughput

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: coax (3174) throughput
Newsgroups: bit.listserv.ibm-main
Date: 6 Apr 2011 05:47:55 -0700
R.Skorupka@BREMULTIBANK.COM.PL (R.S.) writes:
Yes, however my curiosity is related only to coax port - coax port "channel". In this scope any other bottleneck does not apply. Of course in the real world the weakest link of the chain is the most important. BTW: 3174 can be channel-attached, and I guess that ESCON is not a bottleneck for coax, even 32 of them.

re:
https://www.garlic.com/~lynn/2011e.html#94 coax (3174) throughput

I never measured the 3174 ... the 3274 had the opposite problem ... not only did moving a lot of electronics out of the head back to the shared control unit enormously increase coax cable chatter and slow down thruput (i.e. both the amount of chatter on the coax as well as the latency for all the back and forth to support the really "dumbed down" 3278) ... but the slow electronics in the 3274 also took a significant hit on (bus&tag) channel busy (640kbyte/sec transfer rate on the channel side ... but really slow handshaking made the raw transfer rate only a small part of the channel busy ... analogous to all the really slow handshaking on the coax side enormously slowing down effective response time and transfer rate).

I had done a project for the IMS group when STL was bursting at the seams and 300 people were being moved to a remote site ... with datacenter support back to STL. They had tested "remote" 3278 support back to STL and found it truly horrible and totally unacceptable ... local channel-attached 3278s were bad enough, having a hard time making subsecond response
https://www.garlic.com/~lynn/2001m.html#19 3270 protocol

but for "remote" 3278, it wasn't even "remotely" possile :-)

A side effect of doing the channel-extender support ... allowing "channel-attached" 3274 controllers to be put at the remote location (and providing 3278 response at the remote location that was indistinguishable from local channel attach) ... was that the channel-extender boxes had significantly faster channel interface processing ... getting the 3274s off the real channels improved local processor thruput by 10-15%.

When 3278s originally came out ... we complained loudly to the product group about 3278 interactive performance vis-a-vis the 3277. Eventually the product group came back with the reply that 3278s weren't designed for interactive computing but for "data entry" (aka basically an online "upgrade" for card punch machines).

The controller channel busy overhead (independent of raw transfer rate) was to raise its head again with the 3090 and 3880 disk controllers (3mbyte/sec transfer rate). The 3880 channel busy overhead turned out to be so high that the 3090 product group realized it had to add a whole bunch of additional channels ... which resulted in having to add an extra TCM to 3090 manufacturing (there were jokes that the 3090 group was going to bill the 3880 product group for the increase in 3090 manufacturing cost). This was sort of the leading edge of the theme that mainframes having an enormous number of channels is a good thing (when it was actually to compensate for the channel/controller interface design ... slow controllers drastically reduce channel effectiveness). a couple recent posts:
https://www.garlic.com/~lynn/2011.html#37 CKD DASD
https://www.garlic.com/~lynn/2011e.html#15 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened

--
virtualization experience starting Jan1968, online at home since Mar1970

Itanium at ISSCC

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Itanium at ISSCC
Newsgroups: comp.arch
Date: Wed, 06 Apr 2011 09:23:58 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
the scars from these failures run deep. a couple years ago, I was involved in taking new generation of parallelizing technology to some industry groups ... which was accepted fairly well (for new round of straight through processing)... but as it moved up various member institutions, it was met with increasing resistance ... apparently the scars from the 90s failures may take decades to heal (or have those that experienced the failures replaced/retire)

re:
https://www.garlic.com/~lynn/2011e.html#93 Itanium at ISSCC

the 90s failures involved parallelization library software with APIs where application programmers had to design parallel applications and use the APIs to distribute the work. The issue wasn't just the overhead involved in the libraries distributing the parallelized work ... but the skill level required for programmers doing the (re-engineered) applications.

the "newer" generation of technology from a couple years ago ... involved taking high level business process specification and generating fine-grain SQL statements. The enormous amount of effort that has gone into RDBMS parallelization was then leveraged to gain high-throughput. The automated translating high-level business processs specification to SQL also significantly reduced the application programmer skill level.

so the new ten-core/20-thread XEONs from yesterday (allowing up to 256 sockets per configuration) ... could make one gangbuster RDBMS machine (especially for mission-critical applications running enormous numbers of fine-grain SQL statements)

Intel Xeon E7 processor formula for mission-critical computing
http://www.physorg.com/news/2011-04-intel-xeon-e7-processor-formula.html
Intel Unveils 10-Core Xeons, Mission-Critical Servers - HotHardware
http://hothardware.com/Reviews/Intel-Unveils-10Core-Xeons-MissionCritical-Servers/

--
virtualization experience starting Jan1968, online at home since Mar1970

Car models and corporate culture: It's all lies

From: lynn@garlic.com (Lynn Wheeler)
Date: 06 Apr, 2011
Subject: Car models and corporate culture: It's all lies
Blog: Greater IBM
I got roped into going to some of the auto industry C4 taskforce meetings in the early 90s about how to completely remake themselves. They talked about how the standard industry process took 7-8 yrs to come out with a new car model ... with minor cosmetic changes for new model years (between "real" new models). Sometimes they would have parallel new models with a 3-4 yr offset. The issue was that the foreign competition had earlier dropped to 3-4 yrs to do a new model, was then at 18 months ... and appeared to be in the process of dropping below the traditional annual model year. There were also some of the "mainframe" brethren at the meetings & I would chide them offline about what they were going to contribute to the effort ... since their product cycle (at the time) was nearly the same as the US auto industry's.

In any case, the C4 meetings highlighted that the foreign competition was able to react to changing consumer preferences and leverage newer technology possibly ten times faster than the US auto industry. Also, even though they understood what the problem was and could articulate everything that needed to be done ... nearly two decades later they hadn't made significant changes (enormous vested interests holding back change).

related from IBM Jargon:
Mongolian Hordes Technique - n. A software development method whereby large numbers of inexperienced programmers are thrown at a mammoth software project (instead of deploying a small team of skilled programmers). First recorded in 1965, but popular as ever in the 1990s.

... snip ...

I had sponsored Boyd's briefings at IBM ... which included significant amount about lean & agile ... recent posts in similar (linkedin) Boyd group discussion:
https://www.garlic.com/~lynn/2011e.html#90
https://www.garlic.com/~lynn/2011e.html#92

IBM Jargon used to be a purely internal document ... but has leaked out onto the internet and search engines find numerous copies. Mongolian hordes technique was a pejorative reference to the practice.

In Boyd's briefings, he would characterize the practice as epidemic in US business and a result of US entry into WW2. The scenario was that the US had to deploy huge numbers of inexperienced people and, to leverage the few skilled resources available, created a rigid, top-down, command&control structure ... the US WW2 strategy to win was the rigid, top-down, command&control structure using enormous overwhelming resources. Later the methodology was duplicated as the former young army officers moved up the corporate business ladder. Even when there were qualified, experienced people available ... they would be augmented with large numbers of the unqualified, and everybody was treated as inexperienced and needing the rigid, top-down, command&control structure.

misc. past references to Boyd (including being credited with battle strategy for the conflict in the early 90s ... as well as comments that the problem with the current round of conflicts was that Boyd had died in 1997)
https://www.garlic.com/~lynn/subboyd.html

Note that the change in IBM may have been accelerated by the Future System disaster ... this reference
https://people.computing.clemson.edu/~mark/fs.html

has quote from Chapter 3 of Charles Ferguson and Charles Morris, Computer Wars: The Post-IBM World, Times Books, 1993. An excerpt:
Most corrosive of all, the old IBM candor died with F/S. Top management, particularly Opel, reacted defensively as F/S headed toward a debacle. The IBM culture that Watson had built was a harsh one, but it encouraged dissent and open controversy. But because of the heavy investment of face by the top management, F/S took years to kill, although its wrongheadedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive.

... snip ...

another quote:
...perhaps the most damaging, the old culture under Watsons of free and vigorous debate was replaced with sycophancy and make no waves under Opel and Akers.

... snip ...

and the FS failure cast a dark shadow over the corporation for decades. misc. past posts
https://www.garlic.com/~lynn/submain.html#futuresys

during "FS", it may not have been particularly career enhancing to ridicule "FS" ... including drawing comparisons with a cult film that had been playing continuously in central sq.

another "FS" reference here:
http://www.jfsowa.com/computer/memo125.htm

there is also some discussion of FS here:
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm

from IBM Jargon:
FS - n. Future System. A synonym for dreams that didn't come true. That project will be another FS. Note that FS is also the abbreviation for functionally stabilized, and, in Hebrew, means zero, or nothing. Also known as False Start, etc.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

History of APL -- Software Preservation Group

From: lynn@garlic.com (Lynn Wheeler)
Date: 06 Apr, 2011
Subject: History of APL -- Software Preservation Group
Blog: Greater IBM
re:
https://www.garlic.com/~lynn/2011e.html#83 History of APL -- Software Preservation Group

Originally, SEs got quite a bit of training, effectively an "apprentice" type program alongside more experienced SEs at the customer site (for some customers, 20 or more on site). With the 23Jun69 unbundling announcement, not only was it the start of charging for application software ... but also for SE services (for SE time spent at the customer), putting an end to the SE "apprentice" program. misc. past posts mentioning 23jun69 unbundling
https://www.garlic.com/~lynn/submain.html#unbundle

As a substitute, several cp67 virtual machine datacenters were created to give (branch office) SEs "hands-on" practice with operating systems (running in virtual machines). In addition to CP67, virtual machines, CMS, GML, the internal network and a bunch of other stuff, the science center also ported APL\360 to CMS for CMS\APL ... fixing a lot of problems to allow large demand-paged virtual memory workspaces (significantly larger than the traditional 16kbytes to 32kbytes that were common with apl\360) ... and providing an API for accessing CMS services. This enabled a whole new generation of "real-world" APL applications.

Part of this was a growing number of HONE APL-based sales&marketing support applications ... which eventually came to dominate all HONE activity (and the original HONE purpose died off) ... later HONE moved from cp67/cms to vm370/cms & APL\CMS. misc. past posts mentioning HONE and/or APL
https://www.garlic.com/~lynn/subtopic.html#hone

The science center also made its cp67/cms service available to other internal locations as well as students and staff at educational institutions in the cambridge area. A major, early CMS\APL application was from Armonk when the business planning people loaded the most valuable corporate asset on the science center cp67 system ... and developed customer business modeling and planning applications in APL (this also tested the cp67 security paradigm keeping all the univ. students away from the most valuable of corporate information assets).

--
virtualization experience starting Jan1968, online at home since Mar1970

Cool Things You Can Do in z/OS

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Cool Things You Can Do in z/OS
Newsgroups: bit.listserv.ibm-main
Date: 6 Apr 2011 12:22:18 -0700
sam@PSCSI.NET (Sam Siegel) writes:
Please consider the RAS on the US domestic phone switching network. It is a distributed system that (to my knowledge) does not use z/OS or zSeries hardware. You also have service providers like Google, the global DNS servers, etc. The list can be easily extended to demonstrate extremely good RAS overall on a distributed system where high RAS is deemed important.

long ago & far away ... my wife had been con'ed into going to POK to be in charge of loosely-coupled architecture ... where she did Peer-Coupled Shared Data architecture ... which except for IMS hot-standby, saw very little update until sysplex & parallel sysplex. She didn't remain very long ... in part because of the slow uptake ... but also the periodic battles with the communication group trying to force her into using SNA for loosely-coupled operation. misc. past posts mentioning Peer-Coupled Shared Data architecture
https://www.garlic.com/~lynn/submain.html#shareddata

when we were doing the HA/CMP product for the company ... I had coined the terms disaster survivability and geographic survivability (to differentiate from disaster/recovery). I had also been asked to write a section for the corporate continuous availability strategy document ... but it got pulled when both Rochester (as/400) and POK (mainframe) complained (that they couldn't meet the requirements). We also did some work with the 1-800 service (the database service that maps 1-800 numbers to the "real" exchange number ... required five-nines availability). misc. past posts mentioning ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

later we had some dealings with one of the large financial transaction infrastructures ... and they attributed their multiple year, 100% availability to

• geographically separated, replicated IMS hot-standby operation
• automated operator

i.e. as hardware became much more reliable ... unscheduled outages came to be dominated by environmental issues/outages and human mistakes

--
virtualization experience starting Jan1968, online at home since Mar1970

Cool Things You Can Do in z/OS

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Cool Things You Can Do in z/OS
Newsgroups: bit.listserv.ibm-main
Date: 6 Apr 2011 13:07:37 -0700
tony@HARMINC.NET (Tony Harminc) writes:
The criteria are quite different. A public phone system that connects 0.0001 percent of calls to the wrong place and drops a similar number in mid call is perfectly acceptable. A phone system (even a single local switch serving 10,000 lines) that is down for one minute a week is completely unacceptable.

re:
https://www.garlic.com/~lynn/2011f.html#4 Cool Things You Can Do in z/OS

there was an incident a few years back where the 1-800 mapping for a major percentage of the POS (point-of-sale) card-swipe terminals in the US was down for 12 minutes during a mid-day period ... this was treated as a serious corporate incident between a major transaction processor and a major telco operation.

five-nines availability is something like 5min (total) outage per year (includes both scheduled and unscheduled).
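
as a rough back-of-envelope check (a minimal sketch, not tied to any particular product), 0.001% of the minutes in a year works out to a little over five minutes:

```c
/* five-nines arithmetic: allowed outage = (1 - 0.99999) of a year */
#include <stdio.h>

int main(void)
{
    double minutes_per_year = 365.25 * 24.0 * 60.0;           /* ~525,960 minutes */
    double allowed_outage   = minutes_per_year * (1.0 - 0.99999);
    printf("five-nines allows about %.1f minutes total outage per year\n",
           allowed_outage);                                    /* prints roughly 5.3 */
    return 0;
}
```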

--
virtualization experience starting Jan1968, online at home since Mar1970

New job for mainframes: Cloud platform

From: lynn@garlic.com (Lynn Wheeler)
Date: 08 Apr, 2011
Subject: New job for mainframes: Cloud platform
Blog: MainframeZone
New job for mainframes: Cloud platform
http://www.computerworld.com/s/article/9214913/New_job_for_mainframes_Cloud_platform

from above:
As companies take steps to develop private clouds, mainframes are looking more and more like good places to house consolidated and virtualized servers. Their biggest drawback? User provisioning is weak.

... snip ...

I've repeatedly mentioned that virtual machine based cloud operations go back to the 60s ... some past posts
https://www.garlic.com/~lynn/submain.html#timeshare

... and the largest such operation in the 70s & 80s was the internal, world-wide sales & marketing support HONE system. misc. past posts mentioning HONE
https://www.garlic.com/~lynn/subtopic.html#hone

CP67 did a couple things in the 60s to help open up 7x24 operation.

At the time, mainframes were leased and had monthly shift charges based on the processor meter (which ran whenever the processor &/or channels were active). Early deployments tended to have relatively light off-shift usage. One of the tricks was a terminal channel program sequence that left the line open to accept incoming characters but wouldn't run the channel (& processor meter) when no characters were arriving.

Another was significantly improving operator-less/dark-room off-shift operation to minimize operation costs during light off-shift operation.

CP67 was enhanced to automatically take a "dump" (to disk) and re-ipl/re-boot after a failure ... coming back up and available for service. One of the issues was that the growing number of service virtual machines (virtual appliances) still required manual restart. I then did the "autolog" command, originally for automatic benchmarking (could run a large number of unattended benchmarks with a system reboot between each) ... discussed here:
https://www.garlic.com/~lynn/2010o.html#48

It then started being used for automatic startup of service virtual machines ... and after conversion from cp67 to vm370 ... the product group then picked up a number of CSC/VM features for VM370 release 3. old email refs:
https://www.garlic.com/~lynn/2006w.html#email750102 ,
https://www.garlic.com/~lynn/2006w.html#email750827

This (recent) post in the (linkedin) IBM Historic Computing group discusses the loosely-coupled (cluster), single-system-image, load-balancing & fail-over support done during the 70s by a number of large virtual-machine-based service operations (including HONE). Also, in the 70s, at least one virtual-machine-based online commercial service bureau provided for migrating active users between processors in a loosely-coupled (cluster) configuration ... supporting non-disruptive removal of a processor from the cluster for things like scheduled downtime for preventive maintenance.
https://www.garlic.com/~lynn/2011e.html#79

In the mid-70s, the internal US HONE datacenters had been consolidated in silicon valley. Then in the early 80s, somewhat in response to earthquake concerns ... the HONE cluster support was extended with a replicated datacenter in Dallas and then a 3rd in Boulder.

another side of the coin:

Cloud Use Rises, Mainframe Usage Declines as Data Centers Grow and Green, According to AFCOM Survey EON: Enhanced Online News
http://eon.businesswire.com/news/eon/20110330005393/en/cloud/disaster-recovery/data-center

some more in this server market segment comparison between 2002 & 2010 (part of yesterday's xeon announce): Intel Unveils 10-Core Xeons, Mission-Critical Servers - HotHardware
http://hothardware.com/Reviews/Intel-Unveils-10Core-Xeons-MissionCritical-Servers/

other details from that announce:

Performance + Reliability + Security = Intel Xeon Processor Formula for Mission-Critical Computing
http://newsroom.intel.com/community/intel_newsroom/blog/2011/04/05/performance-reliability-security-intel-xeon-processor-formula-for-mission-critical-computing?cid=rss-258152-c1-265964

IBM Jumps Into Cloud, Customers Tip-toe Behind
http://news.yahoo.com/s/pcworld/20110408/tc_pcworld/ibmjumpsintocloudcustomerstiptoebehind

a related recent discussion in the (linkedin) Greater IBM group, "History of APL -- Software Preservation Group" ... an important part of IBM SE training was an "apprentice" type program for new SEs in a group of more experienced SEs onsite at customer accounts. This was cut off with the 23Jun69 unbundling announcement that started charging for SE services (as well as application software); individual SE time at customer accounts had to be charged for ... and nobody could figure out a charging policy for the "apprentice" SEs.

This was the original motivation for internal HONE system (the largest "cloud" service in the 70s & 80s) ... several cp67 (virtual machine) datacenters with online, remote access for SEs in branch offices ... so that they could practice their operating system skills.

part of that recent thread:
https://www.garlic.com/~lynn/2011f.html#3

misc. past posts mentioning unbundling
https://www.garlic.com/~lynn/submain.html#unbundle

another cloud related item:

Facebook Opens Up Its Hardware Secrets; The social network breaks an unwritten rule by giving away plans to its new data center--an action it hopes will make the Web more efficient
http://www.technologyreview.com/news/423570/facebook-opens-up-its-hardware-secrets/?p1=MstRcnt&a=f

for "HONE" related trivia ... upthread mentions that in mid-70s, the US HONE datacenters were consolidated in silicon valley. Do online satellite map search for Facebook's silicon valley address ... the bldg. next to it was the HONE datacenter (although the bldg. has a different occupant now).

Note that environmental and people mistakes had come to dominate outages quite some time ago. Countermeasures were clusters (loosely-coupled) as well as geographically separated operation. Applications that provide for fall-over/take-over to handle local outages ... can also be used to mask increasingly rare local hardware faults.

Long ago and far away, my wife had been con'ed into going to POK to be in charge of (mainframe) loosely-coupled (cluster) architecture. While there she did Peer-Coupled Shared Data architecture ... but except for IMS hot-standby, there was little uptake (until sysplex & parallel sysplex) ... which contributed to her not remaining long in the position (that and battles with the communication group trying to force her into using SNA for loosely-coupled operation). misc. past posts
https://www.garlic.com/~lynn/submain.html#shareddata

A few years ago we were dealing with one of the largest financial transaction operations and they attributed their several-year 100% availability to

• IMS hot-standby (at geographically separated sites)
• automated operator

--
virtualization experience starting Jan1968, online at home since Mar1970

New job for mainframes: Cloud platform

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: New job for mainframes: Cloud platform
Newsgroups: bit.listserv.ibm-main
Date: 8 Apr 2011 06:02:51 -0700
New job for mainframes: Cloud platform
http://www.computerworld.com/s/article/9214913/New_job_for_mainframes_Cloud_platform

from above:
As companies take steps to develop private clouds, mainframes are looking more and more like good places to house consolidated and virtualized servers. Their biggest drawback? User provisioning is weak.

... snip ...

also:
http://lnkd.in/F6X_3Y

--
virtualization experience starting Jan1968, online at home since Mar1970

New job for mainframes: Cloud platform

From: lynn@garlic.com (Lynn Wheeler)
Date: 08 Apr, 2011
Subject: New job for mainframes: Cloud platform
Blog: MainframeZone
re:
https://www.garlic.com/~lynn/2011f.html#6 New job for mainframes: Cloud platform

also:
http://lnkd.in/F6X_3Y

post from two years ago: "From The Annals of Release No Software Before Its Time"
https://www.garlic.com/~lynn/2009p.html#43

referring to an announcement regarding z/VM clustering (HONE had done vm clustering, including geographically distributed, 30 yrs earlier):
http://www.vm.ibm.com/zvm610/zvm61sum.html

and also purescale announcement:
http://www-03.ibm.com/press/us/en/pressrelease/28593.wss

referring to this regarding RDBMS scale-up in Jan92
https://www.garlic.com/~lynn/95.html#13

when the mainframe DB2 group commented that if I was allowed to go ahead, it would be at least five yrs ahead of them. Within a couple weeks, the effort was transferred, announced as a supercomputer for numerical intensive, technical & scientific work *only*, and we were told we couldn't work on anything with more than four processors.

It was about the same time I had been asked to write a section for the corporate continuous availability strategy document ... but it got pulled when both rochester (as/400) and POK (mainframe) complained (that they couldn't meet the objectives).

Some more in (linkedin) Greater IBM discussion about Watson Ancestors ... archived here:
https://www.garlic.com/~lynn/2011d.html#7 ,
https://www.garlic.com/~lynn/2011d.html#24 ,
https://www.garlic.com/~lynn/2011d.html#29 ,
https://www.garlic.com/~lynn/2011d.html#40

Mainframe SQL/DS came direct from System/R (original SQL/RDBMS implementation) ... and mainframe DB2 is descendent of both SQL/DS and System/R ... misc. past posts mentioning System/R
https://www.garlic.com/~lynn/submain.html#systemr

At the time we were doing HA/CMP ... non-mainframe DB2 (code-name shelby) was being written in C for OS/2, so we were working with the major non-mainframe RDBMSs: Ingres, Sybase, Informix, and Oracle. Most had some form of VAX/cluster implementation. In working with them ... providing a vax/cluster-like API for the distributed lock manager (DLM) seemed to be the fastest way to ship cluster scale-up products. Going about the DLM implementation, the various RDBMS vendors had strong feelings about things that could be significantly improved over the VAX/cluster implementation (for throughput, scale-up, and recovery) ... in addition to the experience I already had doing a number of cluster efforts.

Part of the mainframe issue was that the massive new (mainframe) DBMS effort in STL had been EAGLE ... which pretty much allowed System/R and SQL/DS activity to go unnoticed. It wasn't until the EAGLE effort was crashing ... that they asked how fast a (mainframe/MVS) RDBMS could be turned out ... resulting in MVS/DB2 not shipping until 1983 (making it fairly late to the RDBMS game).

some recent posts discussing EAGLE, System/R, SQL/DS, RDBMS
https://www.garlic.com/~lynn/2011d.html#42
https://www.garlic.com/~lynn/2011d.html#52
https://www.garlic.com/~lynn/2011e.html#16

--
virtualization experience starting Jan1968, online at home since Mar1970

New job for mainframes: Cloud platform

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: New job for mainframes: Cloud platform
Newsgroups: bit.listserv.ibm-main
Date: 8 Apr 2011 09:02:54 -0700
ibm-main@TPG.COM.AU (Shane Ginnane) writes:
And how is any of this news ?. (comment aimed at the wider community, not Lynn specifically)

re:
https://www.garlic.com/~lynn/2011f.html#7

previous was from several days ago ... more recent items from today:

IBM Jumps Into Cloud, Customers Tip-toe Behind
http://www.pcworld.com/businesscenter/article/224688/ibm_jumps_into_cloud_customers_tiptoe_behind.htm

IBM Forecasts $7 Billion In Cloud Revenue
http://www.informationweek.com/news/global-cio/interviews/showArticle.jhtml?articleID=223800165

there is also some additional discussion in the linkedin mainframe group URL
http://lnkd.in/F6X_3Y

--
virtualization experience starting Jan1968, online at home since Mar1970

History of APL -- Software Preservation Group

From: lynn@garlic.com (Lynn Wheeler)
Date: 09 Apr, 2011
Subject: History of APL -- Software Preservation Group
Blog: Greater IBM
re:
https://www.garlic.com/~lynn/2011e.html#83 History of APL -- Software Preservation Group
https://www.garlic.com/~lynn/2011f.html#3 History of APL -- Software Preservation Group

quick search turns up this reference as follow-on (mentions STAIRS source code was being made available to ease customer conversion):
http://www-01.ibm.com/software/data/sm370/about.html

web search turns up a number of products/services that had been based on STAIRS.

code fixes/PTFs was (field engineering) "RETAIN" ... also available from the branch office terminals.

This is an old post with PARASITE/STORY for automatic (remote) logon to RETAIN ... and fetching. PARASITE/STORY was an internal application that ran on CMS (utilizing VM370's virtual device/terminal support) and provided HLLAPI-like scripting capability (before IBM/PCs)
https://www.garlic.com/~lynn/2001k.html#36

Old post mentioning that somebody printed an "April 1st" memo (about passwords) on corporate letterhead paper on one of the 6670s (which started out as an ibm copier/3 with a computer interface) in bldg. 28 and put copies on all the bldg. bulletin boards.
https://www.garlic.com/~lynn/2001d.html#53

bldg. 28 also had a project to enhance the 6670 for APA6670/sherpa (all-points-addressable ... able to do images, also supporting the 6670 as a scanner)
https://www.garlic.com/~lynn/2006p.html#email820304

--
virtualization experience starting Jan1968, online at home since Mar1970

History of APL -- Software Preservation Group

From: lynn@garlic.com (Lynn Wheeler)
Date: 09 Apr, 2011
Subject: History of APL -- Software Preservation Group
Blog: Greater IBM
re:
https://www.garlic.com/~lynn/2011e.html#83 History of APL -- Software Preservation Group
https://www.garlic.com/~lynn/2011f.html#3 History of APL -- Software Preservation Group
https://www.garlic.com/~lynn/2011f.html#10 History of APL -- Software Preservation Group

recent post (in linkedin Boyd group):
https://www.garlic.com/~lynn/2011e.html#92
also
http://lnkd.in/j9U4bS

with PROFS entry from IBM Jargon:
PROFS - profs n. Professional Office System. A menu-based system that provides support for office personnel such as White House staff, using IBM mainframes. Acclaimed for its diary mechanisms, and accepted as one way to introduce computers to those who don't know any better. Not acclaimed for its flexibility. PROFS featured in the international news in 1987, and revealed a subtle class distinction within the ranks of the Republican Administration in the USA. It seems that Hall, the secretary interviewed at length during the Iran-Contra hearings, called certain shredded documents PROFS notes as do IBMers who use the system. However, North, MacFarlane, and other professional staff used the term PROF notes. v. To send a piece of electronic mail, using PROFS. PROFS me a one-liner on that. A PROFS one-liner has up to one line of content, and from seven to seventeen lines of boiler plate. VNET

... snip ...

the PROFS entry is also quoted in this post about SNA/VTAM misinformation
https://www.garlic.com/~lynn/2011e.html#57

in the late 80s, when the communication group was lobbying to convert the internal network to SNA/VTAM, one of the things they did was tell the top executives that PROFS was a VTAM application. The above thread also references some old email of theirs claiming that SNA/VTAM was also applicable for the NSFNET backbone (i.e. the operational precursor to the modern internet, aka tcp/ip). misc. past posts:
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

The author of PARASITE/STORY (upthread reference) also was the author of VMSG. The PROFS group used a very early copy of VMSG for their email client. When the VMSG author offered the PROFS group a current/enhanced copy of VMSG, they tried to get him fired (having claimed to have done all of PROFS themselves). Things quieted down after the VMSG author pointed out that every PROFS message in the world carried his initials in a non-displayed field. After that, he only shared the VMSG source with two other people.

a little x-over from the "Car Models and corporate culture: It's all lies" ... also archived here
https://www.garlic.com/~lynn/2011f.html#2

one could make the case that the post-FS aftermath of no dissension was still active (at least) throughout the 80s and 90s.

In the mid-80s, top executives were predicting that revenue was going to double from $60B to $120B ... and there was a massive program to build out manufacturing capacity (mostly mainframe related, to meet the projected demand). However, at the time, it was relatively trivial to show that things were heading in the opposite direction (with the company going into the red a few years later).

and as mentioned upthread, there was the disk division comment about the communication group being responsible for the demise of the disk division.

--
virtualization experience starting Jan1968, online at home since Mar1970

New job for mainframes: Cloud platform

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: New job for mainframes: Cloud platform
Newsgroups: bit.listserv.ibm-main
Date: 11 Apr 2011 07:38:12 -0700
timothy.sipples@US.IBM.COM (Timothy Sipples) writes:
I've known HP in its sales pitches to make a lot of fuss about endianness as reason why it would be oh-so-difficult for an HP-UX customer to move to Linux on X86, or for a Linux X86 customer to move to (or add) Linux on System z, depending on their sales situation. Then hundreds/thousands of HP customers moved without endianness difficulty, and many more will follow. The IT community figured out how to flip bit order a long time ago. Before System/360, even. That's not to say endianness isn't a problem...for HP. If they want to move HP-UX to a little endian CPU, they'll have a lot of investment to do (as Sun did for Solaris X86). For non-OS kernel/non-compiler programmers, which is the vast majority of us, it's not a real-world problem. In fact, endianness is one of the least interesting issues when porting from one CPU to another.

re
https://www.garlic.com/~lynn/2011f.html#7 New job for mainframes: Cloud platform
https://www.garlic.com/~lynn/2011f.html#9 New job for mainframes: Cloud platform

when I was an undergraduate in the 60s, some people from the science center came out and installed (virtual machine) cp67 on the 360/67 (as an alternative to tss/360). cp67 had "automatic" terminal identification for 1052 & 2741 ... playing games switching the line-scanners with the 2702 SAD command. The univ. had a bunch of TTY/ascii terminals ... so I set out to add TTY/ascii support, also doing automatic terminal identification. It almost worked ... being able to dynamically identify 1052, 2741, & TTY for directly/fixed connected lines.

I had wanted to have a single dial-up number for all terminals ... with a "hunt group" ... allowing any terminal to come in on any port. The problem was that the 2702 took a short-cut and hardwired the line speed for each port. This somewhat prompted the univ. to do a clone controller effort ... to dynamically do both automatic terminal & automatic speed determination (reverse engineer the channel interface, build a controller interface board, and program a minicomputer to emulate the 2702).

Two early "bugs" that stick in my mind ...

1) the 360/67 had a high-speed location-80 timer ... and if the channel interface board held the memory bus for two consecutive timer tics (a timer tic to update location 80 was stalled because the memory bus was held ... and the next timer tic happened while the previous one was still pending), the processor would stop & redlight

2) the initial data into memory was all garbage. It turns out I had overlooked bit order within the byte. The minicomputer convention was that the leading bit off the line went into the high-order bit position of the byte ... while the 2702 line-scanner convention was to place the leading bit off the line in the low-order bit position of the byte. Since the minicomputer was placing data into memory in line-order bit position ... each byte had its bit order reversed compared to the 2702 convention (the standard 360 ascii translate tables that I had borrowed from BTAM handled the 2702 bit-reversed bytes).
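
a minimal sketch of the bit-order mismatch described above (illustrative only, not the original minicomputer code): reversing the bit order within each byte converts between the two conventions:

```c
/* illustrative only: reverse the bit order within a byte, converting between
 * the minicomputer convention (first bit off the line -> high-order bit) and
 * the 2702 line-scanner convention (first bit off the line -> low-order bit) */
#include <stdio.h>

static unsigned char reverse_bits(unsigned char b)
{
    unsigned char r = 0;
    for (int i = 0; i < 8; i++)
        if (b & (1u << i))
            r |= (unsigned char)(1u << (7 - i));
    return r;
}

int main(void)
{
    unsigned char line_byte = 0x41;     /* byte as assembled in line-bit order */
    printf("one convention: 0x%02X, the other: 0x%02X\n",
           line_byte, reverse_bits(line_byte));
    return 0;
}
```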

... later, four of us got written up as being responsible for some portion of the mainframe clone controller business. A few years ago, in a large datacenter, I ran across a descendant of our original box, handling a major portion of the dial-up POS card-swipe terminals in the country (some claim that it still used the original channel interface board design).

I had posted the same cloud item in a number of linkedin mainframe groups
https://www.garlic.com/~lynn/2011f.html#6 New job for mainframes: Cloud platform
https://www.garlic.com/~lynn/2011f.html#8 New job for mainframes: Cloud platform
also
http://lnkd.in/F6X_3Y

also refers to the internal (virtual machine) HONE system being the largest "cloud" operation in the 70s & 80s. In the mid-70s, the US HONE datacenters were consolidated in silicon valley ... where it created the largest single-system-image cluster operation. Then in the early 80s, because of earthquake concerns, it was replicated in Dallas ... with distributed, load-balancing and fall-over between Dallas & PaloAlto ... eventually growing to 28 3081s. misc. past posts mentioning HONE
https://www.garlic.com/~lynn/subtopic.html#hone

HONE is also discussed in this linkedin Greater IBM (current & former IBM employee) group discussion about APL software preservation (a major portion of the HONE applications supporting worldwide sales & marketing had been implemented in APL; numerous HONE clones all around the world):
https://www.garlic.com/~lynn/2011e.html#83 History of APL -- Software Preservation Group
https://www.garlic.com/~lynn/2011f.html#3 History of APL -- Software Preservation Group
https://www.garlic.com/~lynn/2011f.html#10 History of APL -- Software Preservation Group
https://www.garlic.com/~lynn/2011f.html#11 History of APL -- Software Preservation Group

another cloud related item:

Facebook Opens Up Its Hardware Secrets; The social network breaks an unwritten rule by giving away plans to its new data center--an action it hopes will make the Web more efficient
http://www.technologyreview.com/news/423570/facebook-opens-up-its-hardware-secrets/?p1=MstRcnt&a=f

for "HONE" related trivia ... the silicon valley HONE datacenter; do online satellite map search for Facebook's silicon valley address ... the bldg. next to it was the HONE datacenter (although the bldg. has a different occupant now).

--
virtualization experience starting Jan1968, online at home since Mar1970

Car models and corporate culture: It's all lies

From: lynn@garlic.com (Lynn Wheeler)
Date: 11 Apr, 2011
Subject: Car models and corporate culture: It's all lies
Blog: Greater IBM
re:
https://www.garlic.com/~lynn/2011f.html#2 Car models and corporate culture: It's all lies

a little x-over from the "History of APL" discussion about the mid-80s, when top executives were predicting that world-wide revenue would double from $60B to $120B ... mostly mainframe ... even when it was fairly trivial to show the business was heading in the opposite direction (and the company went into the red just a few yrs later). Part of the strategy was a massive manufacturing building program ... apparently to double manufacturing capacity (again mostly mainframe related).
https://www.garlic.com/~lynn/2011e.html#83 History of APL -- Software Preservation Group
https://www.garlic.com/~lynn/2011f.html#3 History of APL -- Software Preservation Group
https://www.garlic.com/~lynn/2011f.html#10 History of APL -- Software Preservation Group
https://www.garlic.com/~lynn/2011f.html#11 History of APL -- Software Preservation Group

However, per past discussions, the "fast track" program also became especially epidemic during the period (possibly trying to also double the number of executives) ... from IBM Jargon:
fast track - n. A career path for selected men and women who appear to conform to the management ideal. The career path is designed to enhance their abilities and loyalty, traditionally by rapid promotion and by protecting them from the more disastrous errors that they might commit.

... snip ...

which seemed to have the side-effect of a large number of executives spending a massive amount of their time & effort "managing" their careers (as opposed to day-to-day corporate & business activity).

other recent posts mentioning "fast track":
https://www.garlic.com/~lynn/2011.html#55 America's Defense Meltdown
https://www.garlic.com/~lynn/2011d.html#1 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011d.html#3 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011d.html#6 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011d.html#12 I actually miss working at IBM
https://www.garlic.com/~lynn/2011d.html#24 IBM Watson's Ancestors: A Look at Supercomputers of the Past
https://www.garlic.com/~lynn/2011d.html#78 I actually miss working at IBM
https://www.garlic.com/~lynn/2011d.html#79 Mainframe technology in 2011 and beyond; who is going to run these Mainframes?
https://www.garlic.com/~lynn/2011e.html#45 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2011e.html#90 PDCA vs. OODA

--
virtualization experience starting Jan1968, online at home since Mar1970

How is SSL hopelessly broken? Let us count the ways

From: lynn@garlic.com (Lynn Wheeler)
Date: 11 Apr, 2011
Subject: How is SSL hopelessly broken? Let us count the ways
Blog: Facebook
How is SSL hopelessly broken? Let us count the ways
http://www.theregister.co.uk/2011/04/11/state_of_ssl_analysis/

we had been called in to consult with a small client/server startup that wanted to do payment transactions on their server; they had also invented this technology called SSL that they wanted to use (the result is now frequently called "electronic commerce"). By the time we were finished with the deployments ... most of the (current) issues were very evident.

very early in the process I had coined the term "comfort certificate" ... since the digital certificate actually created more problems than it solved ... in fact, in many cases, it was totally redundant and superfluous ... and existed somewhat as magic "pixie dust" ... lots of old posts mentioning SSL digital certificates:
https://www.garlic.com/~lynn/subpubkey.html#sslcert

There were two parts to the SSL deployment for "electronic commerce" ... between the browser and webserver, and between the webserver and the "payment gateway". I had absolute authority over the interface deployment involving the "payment gateway" ... but only an advisory role over browser/webserver. There were a number of fundamental assumptions related to SSL for secure browser/webserver deployment ... that were almost immediately violated by merchant webservers (in large part because the high overhead of SSL cut their throughput by 85-90%). I had mandated mutual authentication for webserver/gateway (an implementation didn't exist originally) and by the time the deployment was done, the use of SSL digital certificates was purely a side-effect of the crypto library being used
https://www.garlic.com/~lynn/subnetwork.html#gateway

The primary use of SSL in the world today is electronic commerce, hiding payment transaction information. The underlying problem is that the transaction information is dual-use: both used for authentication and needed by dozens of business processes at millions of places around the world.

In the X9A10 financial working group we directly addressed the dual-use problem with the x9.59 financial standard (x9.59 is significantly lighter weight than SSL as well as KISS). This eliminated the need to hide the transaction details ... and also eliminates the threat from the majority of data breaches (it doesn't eliminate data breaches, just eliminates crooks being able to use the information for fraudulent purposes). The problem (as always) is that there are significant vested interests in the current status quo.
https://www.garlic.com/~lynn/subpubkey.html#x959

we were also tangentially involved in the cal. state data breach notification legislation. We had been brought in to help wordsmith the cal. state electronic signature legislation. Many of the parties were involved in privacy issues and had done detailed, in-depth public surveys and found the #1 issue was identity theft, and the major form was account fraud as a result of data breaches. There was little or nothing being done about such breaches (security measures are nominally used to protect the party involved, and the breaches weren't affecting those holding the data). It was hoped that publicity from breach notification might motivate the industry to take countermeasures
https://www.garlic.com/~lynn/subpubkey.html#signature

crooks used consumer transaction information from a breach to perform fraudulent transactions against consumer accounts. The merchants and financial institutions that had the breach didn't have any (direct) problems (until the breach notification law made it public).

The x9.59 financial transaction standard addressed it in a totally different way ... it eliminated crooks being able to use the information to perform fraudulent transactions ... so it eliminated needing to hide the information ... so it eliminated the need to use SSL to hide the information ... and it eliminated the financial motivation for crooks to perform breaches (since the information was of no practical use).

aka breach notification came about because an enormous amount of fraud was happening to consumers as a result of the breaches ... and most of those that needed to protect/hide the data couldn't care less (they were doing little or nothing to prevent breaches). The x9.59 financial standard addressed it differently: it eliminated the ability of crooks to use the information to perform fraudulent transactions.

With regard to the x9.59 standard, all software would support whatever is required (like Starbucks accepting a barcode generated by an iphone app). The X9.59 specification was originally done by the financial industry to address enormous fraud. However, large institutions in the US turned out to be getting approx. half their bottom line from "payment transaction fees" (charged to merchants) that had been set proportional to fraud. Eliminating that fraud could cut the related fees by better than an order of magnitude ... also basically commoditizing the payment industry and lowering the barrier to entry for competition. Eliminating nearly all such fraud (and drastically reducing fees) met enormous resistance from bank business people (who were effectively making significant profit from fraud). The mistake is thinking of x9.59 as a technology solution ... it is a business specification ... it doesn't directly involve consumers ... there are bookshelves of current specifications that consumers aren't aware of.

--
virtualization experience starting Jan1968, online at home since Mar1970

Identifying Latest zOS Fixes

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Identifying Latest zOS Fixes
Newsgroups: bit.listserv.ibm-main
Date: 11 Apr 2011 15:33:11 -0700
mike.a.schwab@GMAIL.COM (Mike Schwab) writes:
Most of your Micro$oft and Linux errors are due to the C language defining an end of string as x'00', and the programmer forgetting to check the length of the input against the buffer. Then the hacker sends a malformed string to that function and overlays the program code and takes control.

buffer length related problems dominated through the 90s ... misc past posts
https://www.garlic.com/~lynn/subintegrity.html#buffer
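
a minimal hypothetical sketch of the C pattern described in the quote above (not from any actual product): the unsafe version copies a NUL-terminated input without checking its length against the destination buffer, the bounded version does:

```c
/* hypothetical example of the classic buffer length problem */
#include <stdio.h>
#include <string.h>

void unsafe_copy(const char *input)
{
    char buf[16];
    strcpy(buf, input);       /* no length check -- overflows if input > 15 chars */
    printf("%s\n", buf);
}

void bounded_copy(const char *input)
{
    char buf[16];
    strncpy(buf, input, sizeof(buf) - 1);   /* bounded copy */
    buf[sizeof(buf) - 1] = '\0';            /* guarantee termination */
    printf("%s\n", buf);
}
```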

much of the desktop evolved in a purely stand-alone environment ... with some early (3270) terminal emulation. Then private, business, closed, "safe" network support was added ... with lots of applications & files that included automated "scripted" enhancements. At the 1996 MDC (held at Moscone) there were a huge number of banners about moving to the internet (simply remapping the networking conventions w/o the corresponding countermeasures meant moving from a "safe" environment to an extremely "hostile" environment; my periodic analogy was going out the airlock into open space w/o a space suit).

However, the constant subtheme (at the '96 MDC) was "protecting your investment" ... referring to all the scripting capability. Starting in the early part of this century, such scripting exploits began to eclipse the buffer length problems ... along with the heavyweight security convention of analyzing/filtering incoming files against an enormously bloated library of possible exploit "signatures".

old post doing word frequency analysis of CVE "bug" reports ... and suggesting to mitre that they require a little more formal structure in the reports (at the time, I got pushback that they were lucky to get any reasonable verbiage):
https://www.garlic.com/~lynn/2004e.html#43 security taxonomy and CVE

more recent reference to CVE ... which has since moved to NIST
https://www.garlic.com/~lynn/2011d.html#8 Security flaws in software development

note that the original mainframe tcp/ip protocol stack had been done in vs/pascal ... and suffered none of the buffer length exploits found in C-language implementations. There were other thruput and pathlength issues with that implementation ... but I did the RFC1044 enhancements for the implementation ... and in some testing at Cray Research ... got sustained channel media throughput between a Cray and a 4341 ... using only a modest amount of the 4341 processor. misc. past posts mentioning RFC1044 support
https://www.garlic.com/~lynn/subnetwork.html#1044

--
virtualization experience starting Jan1968, online at home since Mar1970

Jean Bartik, "Software" Pioneer, RIP

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Jean Bartik, "Software" Pioneer, RIP
Newsgroups: alt.folklore.computers
Date: Mon, 11 Apr 2011 23:44:38 -0400
Louis Krupp <lkrupp_nospam@indra.com.invalid> writes:
It's mildly interesting that at Burroughs, IBM was said to use the "Mongolian horde" approach to development, while Burroughs developers prided themselves on being a "band of itinerant tinkerers."

from IBM Jargon:
Mongolian Hordes Technique - n. A software development method whereby large numbers of inexperienced programmers are thrown at a mammoth software project (instead of deploying a small team of skilled programmers). First recorded in 1965, but popular as ever in the 1990s.

... snip ...

recent posts mentioning the above:
https://www.garlic.com/~lynn/2011d.html#1 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011d.html#12 I actually miss working at IBM
https://www.garlic.com/~lynn/2011e.html#90 PDCA vs. OODA
https://www.garlic.com/~lynn/2011e.html#92 PDCA vs. OODA

recent posts (also) referencing the culture:
https://www.garlic.com/~lynn/2011f.html#2 Car models and corporate culture: It's all lies
https://www.garlic.com/~lynn/2011f.html#13 Car models and corporate culture: It's all lies

--
virtualization experience starting Jan1968, online at home since Mar1970

New job for mainframes: Cloud platform

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: New job for mainframes: Cloud platform
Newsgroups: bit.listserv.ibm-main
Date: 12 Apr 2011 07:48:47 -0700
graeme@ASE.COM.AU (Graeme Gibson) writes:
Well, let's not skew the kiddie's brains too much..

re:
https://www.garlic.com/~lynn/2011f.html#12 New job for mainframes: Cloud platform

yes, well ... I thought it was also interesting that the 2702 (IBM line scanners) managed to (also) reverse bits within bytes (before x86 even appeared on the scene).

other trivia was that HP had a major hand in Itanium (designed to be dual-mode ... both big-endian & little-endian) ... which at one time was going to be the "mainframe killer" ... since then lots of Itanium business-critical features have been migrated to XEON chips (and various recent news items project the death of Itanium).

the major person behind wide-word & Itanium had earlier been responsible for 3033 dual-address-space mode ... retrofitting a little of the 370-xa access registers to the 3033 to try and slow the exploding common segment problem (with a 24-bit, 16mbyte virtual address space ... and the MVS kernel image taking half of each virtual address space ... large installations were approaching the situation where CSA was going to be 6mbytes ... reducing space for applications to 2mbytes). itanium stuff
http://www.hpl.hp.com/news/2001/apr-jun/worley.html
other pieces from wayback machine:
https://web.archive.org/web/20010722130800/www.hpl.hp.com/news/2001/apr-jun/2worley.html
https://web.archive.org/web/20000816002838/http://www.hpl.hp.com/features/bill_worley_interview.html

internally, IBM had some critical chip-design tools implemented in Fortran running on a large number of carefully crafted MVS systems ... and was having an increasingly difficult time keeping the application under 7mbytes (MVS kernel image at 8mbytes and minimum CSA size of 1mbyte, leaving a maximum of 7mbytes for applications) ... they were being faced with having to convert the whole operation to vm/cms ... since that would allow them to have nearly the whole 16mbytes for the application.
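
a back-of-envelope sketch of the 24-bit address-space arithmetic above (the numbers are the ones given in the text; the calculation is just illustration):

# 24-bit (16mbyte) virtual address space arithmetic
MB = 1024 * 1024
address_space = 16 * MB              # 24-bit addressing

# large MVS installation: 8mbyte kernel image, CSA heading toward 6mbytes
print((address_space - 8 * MB - 6 * MB) // MB, "MB left for applications")        # 2

# chip-design tool case: 8mbyte kernel image, minimum 1mbyte CSA
print((address_space - 8 * MB - 1 * MB) // MB, "MB maximum for the Fortran app")  # 7

# vm/cms case: nearly the whole 16mbytes available to the application
print(address_space // MB, "MB (less CMS itself) under vm/cms")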

--
virtualization experience starting Jan1968, online at home since Mar1970

21st century India: welcome to the smartest city on the planet

Refed: **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 12 Apr, 2011
Subject: 21st century India: welcome to the smartest city on the planet
Blog: Greater IBM
21st century India: welcome to the smartest city on the planet
http://www.guardian.co.uk/world/2011/mar/06/india-lavasa-computer-technology

from above:
From rocket science to DNA research, India is ridding itself of its poor country image. In this extract from her book Geek Nation, Angela Saini visits Lavasa, an emerging electronic 'dream city'

... snip ...

In the 90s, there was a report that half of the US advanced technology (STEM) degrees were going to students from Asia & India ... and it would only take slight changes in the relative relation between their home economies and the US economy to reach a tipping point where they return to their home countries. The US had whole technology sectors dominated by US graduates born in Asia and India who had their education paid for by their home governments, were supposed to remain in US industry for 5-7 yrs, and were then obligated to return home.

While not so extreme yet, after the recent financial disaster decade ... there are a large number of reports about the disappearing US middle class, with an extreme shift in wealth to the top 1%.

many of the STEM articles ... fail to differentiate the percentage of US graduates that are foreign-born, and the conditions under which (and the percentage) they are likely, or even obligated, to return home
https://en.wikipedia.org/wiki/STEM_fields

that is separate from the large increase in STEM graduates from institutions in Asia.

a little NSF STEM related (I think they are trying to avoid using the word "crisis")
http://www.nsf.gov/nsb/stem/
http://www.smdeponews.org/funding-opportunities/nsf-seeks-proposals-for-transforming-stem-learning/
http://www.ptec.org/items/detail.cfm?id=9186
http://www.nsf.gov/pubs/2009/nsf09549/nsf09549.pdf

misc. past posts mentioning STEM:
https://www.garlic.com/~lynn/2007s.html#22 America Competes spreads funds out
https://www.garlic.com/~lynn/2010b.html#19 STEM crisis
https://www.garlic.com/~lynn/2010b.html#24 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010b.html#26 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010b.html#56 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010b.html#57 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010b.html#59 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010c.html#87 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010d.html#1 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010f.html#38 F.B.I. Faces New Setback in Computer Overhaul
https://www.garlic.com/~lynn/2010f.html#45 not even sort of about The 2010 Census
https://www.garlic.com/~lynn/2010f.html#84 The 2010 Census
https://www.garlic.com/~lynn/2010g.html#37 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2010h.html#47 COBOL - no longer being taught - is a problem
https://www.garlic.com/~lynn/2010i.html#41 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010k.html#18 taking down the machine - z9 series
https://www.garlic.com/~lynn/2010l.html#14 Age
https://www.garlic.com/~lynn/2010m.html#37 A Bright Future for Big Iron?
https://www.garlic.com/~lynn/2010p.html#78 TCM's Moguls documentary series
https://www.garlic.com/~lynn/2010q.html#69 No command, and control
https://www.garlic.com/~lynn/2011.html#80 Chinese and Indian Entrepreneurs Are Eating America's Lunch
https://www.garlic.com/~lynn/2011b.html#0 America's Defense Meltdown
https://www.garlic.com/~lynn/2011c.html#45 If IBM Hadn't Bet the Company

--
virtualization experience starting Jan1968, online at home since Mar1970

Jean Bartik, "Software" Pioneer, RIP

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Jean Bartik, "Software" Pioneer, RIP
Newsgroups: alt.folklore.computers
Date: Tue, 12 Apr 2011 13:16:36 -0400
re:
https://www.garlic.com/~lynn/2011f.html#16 Jean Bartik, "Software" Pioneer, RIP

past post
https://www.garlic.com/~lynn/2011f.html#44 Happy DEC-10 Day

that in the wake of Jim Gray leaving and his "MIP Envy" tome, there were visits (& trip reports) to a number of other institutions (for comparison)

20Sep80 version of MIP Envy
https://www.garlic.com/~lynn/2007d.html#email800920
in this post
https://www.garlic.com/~lynn/2007d.html#17 Jim Gray Is Missing

also references 24Sep80 version
https://web.archive.org/web/20081115000000*/http://research.microsoft.com/~gray//papers/CritiqueOfIBM%27sCSResearch.doc

this has a summary of some of the dataprocessing at some of the visited (summer 1981) institutions
https://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)

--
virtualization experience starting Jan1968, online at home since Mar1970

New job for mainframes: Cloud platform

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: New job for mainframes: Cloud platform
Newsgroups: bit.listserv.ibm-main
Date: 12 Apr 2011 11:47:12 -0700
shmuel+ibm-main@PATRIOT.NET (Shmuel Metz , Seymour J.) writes:
Access registers were ESA; they were announced for the 3090. Was the 3081 a testbed for them, or was that a typo?

re:
https://www.garlic.com/~lynn/2011f.html#12 New job for mainframes: Cloud platform
https://www.garlic.com/~lynn/2011f.html#17 New job for mainframes: Cloud platform

sorry ... access registers were in the "811" architecture documents (supposedly named for the nov78 date on most of the documents) ... "811" pieces then leaked out at various machine levels. some of the (3033 &) 3081 details are discussed here
http://www.jfsowa.com/computer/memo125.htm

... "811", 3033 and 3081 were hurry-up patch up efforts recovering from the FS disaster ... some more discussion in this (linkedin) "Greater IBM" (current & former IBMers)
https://www.garlic.com/~lynn/2011f.html#2 Car models and corporate culture: It's all lies
https://www.garlic.com/~lynn/2011f.html#13 Car models and corporate culture: It's all lies

it wasn't until the 3090 ... that you start to see a "real" new machine (including a vm/cms 4361 & 3370s being the service processor in all 3090s ... even 3090s that nominally ran an operating system w/o 3370 FBA support).

--
virtualization experience starting Jan1968, online at home since Mar1970

WHAT WAS THE PROJECT YOU WERE INVOLVED/PARTICIPATED AT IBM THAT YOU WILL ALWAYS REMEMBER?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 13 Apr, 2011
Subject: WHAT WAS THE PROJECT YOU WERE INVOLVED/PARTICIPATED AT IBM THAT YOU WILL ALWAYS REMEMBER?
Blog: Greater IBM
Getting blamed for online computer conferencing in the late 70s and early 80s on the internal network ... some of the Tandem Memos were packaged and sent to each member of the executive board (300 pages printed dual-sided on a 6670 ... packaged in "Tandem" 3-ring binders ... folklore is that 5 of 6 immediately wanted to fire me). from IBM Jargon:
Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products.

... snip ...

there was even a short article in Nov81 Datamation.

Note that the pre-backbone internal network (approaching 1000 nodes) was pretty much in spite of the company. There was even an Armonk "investigation" into the phenomenon ... and it concluded that it couldn't exist. The logic was that if traditional SNA architecture principles had been used, developing the software for such a network would have required more resources than were available in the whole corporation.
https://www.garlic.com/~lynn/subnetwork.html#internalnet

Of course the internal network used a totally different approach ... and in the late 80s, even layering SNA links into the internal network still required an enormous investment ... both software and additional hardware ... at that time, it would have been enormously more cost-effective to have layered it on TCP/IP (rather than SNA); mentioned in this (greater ibm) thread:
https://www.garlic.com/~lynn/2011.html#4 Is email dead? What do you think?

--
virtualization experience starting Jan1968, online at home since Mar1970

First 5.25in 1GB drive?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: First 5.25in 1GB drive?
Newsgroups: alt.folklore.computers
Date: Wed, 13 Apr 2011 16:42:11 -0400
from long ago and far away ...
Date: 01 Dec 88 17:42:54
To: wheeler
Subject: Trip Report COMDEX/FALL'88

I spent 2 days at the COMDEX/FALL 1988, primarily was interested in the magnetic/optical disks/interface status in the industry.

1. Embedded SCSI drives:

(a) 5 1/4in SCSI magnetic disks:

There are some new announcements but really no surprises. Most vendors have disks in the range of 14-16 ms seek time, 1.875-2.4 MB/S raw disk data rate and 8.33 ms latency. Most disk capacities are in the range of 380MB/760MB, unformatted. These are the Maxtor 8380S/8760S class of disks, perhaps with a little better SCSI overheads and command decode time. No company announced a Redwing-class disk (with 12 ms seek, 6 ms latency, 1.5 ms track-to-track seek, 3MB/S raw data rate and 1GB unformatted capacity). The vendors are still working on higher RPM and better seek (servo) mechanisms.

             CAPACITY    AVG SEEK/TRK-TRK   LATENCY   RAW DATA RATE
             (MB) FMT        (MS)             (MS)       (MB/S)
HP97540      663           17/3             7.47        2.5

CDC WREN 6   676           16/3             8.33        1.875
CDC RUNNER   330         10.7/4             8.33        1.875
(SHORT STROKE)

NEWBURY 9820 690           16/2.6           8.33        1.875

MICROPOLIS   660-890       14/3             8.33        2.5
1590's

HITACHI      661           16/4             8.33        2.4
DK515C-78

TOSHIBA      330           18/5             8.33        1.875
MK-250F

SIEMENS      660           16/5             8.43        1.875
4400

FUJITSU      660           16/4             8.33        1.875
M2263

FUJITSU      660           16/4             8.33        1.875

****
REDWING      857           12/1.5           6.02        3

MAXTOR       670           18/3             8.33        1.875

(b) 3 1/2in SCSI magnetic disks:

Most announcements of 3 1/2in disks are not in the high performance
area, like the Lightning file.  Seagate, Fujitsu, Miniscribe, and Toshiba
are in this area (seek time > 20 ms, trk to trk > 6 ms, capacity
less than 120 MB).  Only two vendors' announcements have higher
performance:

             CAPACITY    AVG SEEK/TRK-TRK   LATENCY   RAW DATA RATE
             (MB) FMT        (MS)             (MS)       (MB/S)
CONNER       210           19/5             8.33        1.5
CP-3200

MICROPOLIS   200           15/4             8.33        1-1.75
1770                                              (Zone Bit Recording)

****
LIGHTNING    320           12.5/1.5         6.89          2

SUMMARY: Most SCSI drives announced at the exhibit have a readahead buffer in their embedded controller. Some vendors have lower SCSI overheads than the current Maxtor drive (827 us), but probably still in the range of 600 us. Maxtor will use the 16-bit National HPC controller in their next generation controller. We can expect that the total SCSI overheads will drop to the range of 200 us by then. The disk industry will follow this direction to fix the current SCSI number 1 problem, i.e. excessive bus overheads and command decode overhead **.

Most high-performance drives announced at Comdex do not use ESDI below the SCSI interface. As a result, a much better command decode time ** than Maxtor's 8380S/8760S (1.6 ms) is realized (400-600 us).

** Command decode overhead is the time from receiving the last command parameter to the time the seek starts or in the case of readahead buffer hit, from the last command parameter to the time to locate the data in the buffer.

The Redwing and Lightning are the best of breed drives for 1989. It is important that we can ship these files in Rel. 1 to offer better price/performance and compete with the high-performance IPI2 disks.

2. IPI2 drives:

Fujitsu and NEC announced 3MB/S IPI-2 disks last year in 8in or 9in form factor (avg. seek 15 ms, trk to trk 4 ms, 8.33 ms latency). CDC shows an 8in 6MB/S IPI-2 disk (1GB), priced at $8700 in a single unit. This data rate is achieved by pairing heads and twin read/write channels that operate in parallel. NEC and Fujitsu will announce similar disks in the near future, perhaps with even higher data rates. The IPI-2 bus operates (2 byte) at 10 MB/S. With parallel disk technology (pioneered by Fujitsu), IPI-2 disks can easily go up to 10 MB/S.

There is no IPI-2 disk in 5 1/4in form factor yet. Xylogics and Interphase have each announced an IPI-2 controller for the VME bus. It is not an industry secret that IPI2 is the heir to the SMD interface. The conversion from SMD to IPI is a natural path to higher DASD subsystem performance. How quickly that happens depends largely on the price of IPI drives. The press predicts that the first conversions from SMD to IPI2 will occur in mid-1989.

IBM is working on an IPI2 disk from the Sutter technology (4.5 MB/S), priced at $7000 in OEM quantities (2 actuators). The OEM group in San Jose has already disclosed Sutter to some 20 companies, including Sun Microsystems. Sun is working on a VME IPI2 controller (modifying a vendor's controller, so that it will be difficult for other controller houses to clone). The information from the San Jose OEM group suggests that Sun will ship its new 20-MIPS workstation with a much more powerful bus than the VME it currently uses. Sun is also looking for parallel disks for their high-end applications.

3. Parallel disks/striping:

Most parallel disks (multiple heads within a disk) are limited to the 8in-and-above form factor. The typical users are supercomputer, minisupercomputer and graphics/image processing shops. It is still a niche market in 1988. That may change next year as more powerful workstations and IPI2 disks become available. Apollo Computer has announced a disk striping product by ganging 4 ESDI disks together (5 1/4in) for their 1000 series superworkstation.

In October, 1988, Micropolis scrubbed its parallel disk array. It was not related to the technology, according to the press. There was not a large enough market to sustain the program. The real reason was that most systems makers would rather build the drive arrays themselves than buy a finished subsystem on an OEM basis. This explains why I didn't see too many 'drawer' or 'disk array' products at the Comdex show.

Parallel disk (parallel heads) vendors:

              CAPACITY    AVG SEEK/TRK-TRK   LATENCY   RAW DATA RATE
              (MB) FMT        (MS)             (MS)       (MB/S)

Century Data: 600        15/3.5           8.33         12.3
(5 modified SMD channels, 8in)

Fujitsu Eagle 1000       16/4             8.25         18
(6 modified ESMD channels, 10.5in)

IBIS         2800        16/2.5           8.33         12
(2 proprietary channels, 14in, VME bus connected)

4.  Optical disks

3M/Sony agreed to have an ISO standard (130 mm) read/write medium. Most other R/W optical manufacturers probably will join this standard. This definitely has a very positive impact on the r/w optical disk industry. The WORM (write once) disk industry does not have a standard now. The r/w optical disk form factor probably will change to 86 mm in 1991.

3M has announced a 650MB (two sides) optical r/w cartridge. It is priced at $250 in single units. The price of one disk medium is expected to drop to $50 or even $20 in the future, depending on the volume of the market, but will always be more expensive than a CD-ROM medium. 3M claims that the medium is reliable. The archival and shelf life are more than ten years. The medium can sustain more than one million Erase/Write/Read cycles.

Several vendors offer optical r/w (rewritable) disk drives. The price tag is still high (in the range of $4000). Several vendors offer jukeboxes (WORM) for large on-line storage. The concept will apply to rewritable disks soon, maybe even with a mixture of different optical disk technologies within a jukebox.

One interesting application for r/w optical disks comes from a start-up, Epoch Systems. It offers a LAN file server, called the Epoch-1 Infinite Storage Server, which is a general-purpose file server that implements a hierarchy of solid state, magnetic disk, and optical disk storage.

The access time of rewritable optical disks is in the range of 50-100 ms; the best data rate is about 1MB/S for a READ and about half of that for a WRITE. This data rate is better than a tape. In addition, optical disks have random-access capability. In the seminar, most people agreed that rewritable optical disks are not just a promise. The emerging products from system hardware and application software houses will speed up the acceptance of this technology. The price will drop and the performance will improve. The CD-ROM probably will survive forever. The WORM probably will only serve niche markets such as medical imaging, CD-ROM master masks, etc.

5. SCSI 2

I talked to NCR and Western Digital about their plans for emerging SCSI 2 chips. They both indicate they have a prototype in the lab and are waiting for the ANSI SCSI II standard. The SCSI 2 chip introduction (command queueing, higher data rate, etc) will come within one quarter of ANSI announcing the standard. The first products probably will be one-byte SCSI II (9-10 MB/S). However, two-byte SCSI II will be easy to implement.

Before the SCSI II chip announcements, Emulex will announce a 6-7 MB/S SCSI I chip in 1989.


... snip ... top of post, old email index
--
virtualization experience starting Jan1968, online at home since Mar1970

Fear the Internet, was Cool Things You Can Do in z/OS

Refed: **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Fear the Internet, was Cool Things You Can Do in z/OS
Newsgroups: bit.listserv.ibm-main
Date: 13 Apr 2011 13:45:38 -0700
mike.a.schwab@GMAIL.COM (Mike Schwab) writes:
Writing the SE Linux was done with a National Security Agency (No Such Agency) (NSA) research grant.
http://www.nsa.gov/research/selinux/


also from long ago and far away:
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

--
virtualization experience starting Jan1968, online at home since Mar1970

Fear the Internet, was Cool Things You Can Do in z/OS

Refed: **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Fear the Internet, was Cool Things You Can Do in z/OS
Newsgroups: bit.listserv.ibm-main
Date: 13 Apr 2011 15:10:29 -0700
scott_j_ford@YAHOO.COM (Scott Ford) writes:
??????? whats it XMASCARD

recent post
https://www.garlic.com/~lynn/2011b.html#9

mentions:

there was xmas exec on bitnet in nov87 ... vmshare archive
http://vm.marist.edu/~vmshare/browse.cgi?fn=CHRISTMA&ft=PROB

and it was almost exactly a year before the (internet) morris worm (nov88)
https://en.wikipedia.org/wiki/Morris_worm

xmas exec was social engineering ... similar to some current exploits which advertise something that the victim has to download and then (manually) explicitly execute (requires the victim's cooperation).

some additional
https://www.garlic.com/~lynn/2011b.html#10

misc. past posts mentioning bitnet (&/or earn)
https://www.garlic.com/~lynn/subnetwork.html#bitnet

which used technology similar to the corporate internal network (larger than arpanet/internet from just about the beginning until late '85 or early '86):
https://www.garlic.com/~lynn/subnetwork.html#internalnet

... and
https://www.garlic.com/~lynn/2011f.html#23
"fix" previous reference (missing trailing "l"):
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

in '89 there were several messages sent out saying that major internal corporate (MVS-based) administrative systems had fallen victim to a virus ... however, after several iterations it was eventually announced that the systems were suffering from a bug.

this selection of some internet related items/posts starts out with a reference to the corporate email gateway installed in the fall of '82
https://www.garlic.com/~lynn/internet.htm

--
virtualization experience starting Jan1968, online at home since Mar1970

Fear the Internet, was Cool Things You Can Do in z/OS

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Fear the Internet, was Cool Things You Can Do in z/OS
Newsgroups: bit.listserv.ibm-main
Date: 13 Apr 2011 16:08:01 -0700
scott_j_ford@YAHOO.COM (Scott Ford) writes:
Well 1987 wow before the real firewalls. Security was on the inbound/outbound dial devices. Also worked VM, cut my teeth on VM/SP1 , loved VM, still do, I can how a exec would cause major pain in a VM system, no doubt. z/OS would be a bit tougher I would think, plus a pre-req would be enough knowledge to get in and be able to execute, plus passwords and ids...A lot of research and work ...just to hack a MF

re:
https://www.garlic.com/~lynn/2011f.html#23 Fear the Internet, was Cool Things You Can Do in z/OS
https://www.garlic.com/~lynn/2011f.html#24 Fear the Internet, was Cool Things You Can Do in z/OS

a big difference between the internal network and the internet in the 80s ... was that all internal network links (that left corporate premises) had to be encrypted. this could be a big pain ... especially when links crossed certain national boundaries. in the mid-80s, it was claimed that the internal network had over half of all link encryptors in the world.

the company also did custom encrypting PC (2400 baud) modems for the corporate home terminal program. there is folklore that one high-ranking (EE graduate) executive was setting up his own installation at home. supposedly at one point he stuck his tongue in the rj11 jack (to see if there was any juice ... old EE trick) ... just as the phone rang. After that there was a corporate edict that all modems made by the company had to have the jack contacts recessed sufficiently so babies (and executives) couldn't touch them with their tongues.
https://en.wikipedia.org/wiki/RJ11

i had an HSDT (high-speed data transport) project and was dealing with T1 links & higher speeds. T1 link encryptors were really expensive ... but you could get them ... however, I had to start work on my own to go significantly faster. misc. past posts mentioning HSDT
https://www.garlic.com/~lynn/subnetwork.html#hsdt

the big difference between worms/viruses (and various other exploits) and social engineering ... is that social engineering requires active participation by the victim (current flavors frequently advertise download & execute things, frequently of a very dubious nature: games, videos, etc). allowing users to execute arbitrary (unvetted) programs was identified as a vulnerability at least as far back as the 70s (if not the 60s).

somewhat more recent thread (with some of my comments copied from another venue)
https://www.garlic.com/~lynn/2011f.html#14 How is SSL hopelessly broken? Let us count the ways

How is SSL hopelessly broken? Let us count the ways; Blunders expose huge cracks in net's trust foundation
http://www.theregister.co.uk/2011/04/11/state_of_ssl_analysis/

with regard to above ....

we had been called in to consult with a small client/server startup that wanted to do payment transactions on their server; they had also invented this technology called SSL that they wanted to use (the result is now frequently called "electronic commerce"). By the time we were finished with the deployments ... most of the (current) issues were very evident.

very early in the process I had coined the term "comfort certificate" ... since the digital certificate actually created more problems than it solved ... in fact, in many cases, it was totally redundant and superfluous ... and existed somewhat as magic "pixie dust" ... lots of old posts mentioning SSL digital certificates:
https://www.garlic.com/~lynn/subpubkey.html#sslcert

There were two parts to the SSL deployment for "electronic commerce" ... between the browser and webserver and between the webserver and the "payment gateway" ... I had absolute authority over the interface deployment involving the "payment gateway" ... but only an advisory role over browser/webserver. There were a number of fundamental assumptions related to SSL for secure browser/webserver deployment ... that were almost immediately violated by merchant webservers (in large part because the high overhead of SSL cut their throughput by 85-90%). I had mandated mutual authentication for webserver/gateway (the implementation didn't exist originally) and by the time deployment was done, the use of SSL digital certificates was purely a side-effect of the crypto library being used.

the primary use of SSL in the world today is electronic commerce, for hiding payment transaction information. The underlying problem is that the transaction information is dual-use: it is both used for authentication and needed by dozens of business processes in millions of places around the world. In the X9A10 financial working group we (later) directly addressed the dual-use problem with the x9.59 financial standard (by directly addressing the problem, x9.59 is significantly lighter weight than SSL, as well as KISS). This eliminated the need to hide the transaction details ... and also eliminates the threat from the majority of data breaches (it doesn't eliminate data breaches, just eliminates crooks being able to use the information for fraudulent purposes). The problem (as always) is that there are significant vested interests in the current status quo.

--
virtualization experience starting Jan1968, online at home since Mar1970

First 5.25in 1GB drive?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: First 5.25in 1GB drive?
Newsgroups: alt.folklore.computers
Date: Wed, 13 Apr 2011 19:32:51 -0400
re:
https://www.garlic.com/~lynn/2011f.html#22 First 5.25in 1GB drive?

found in old email log ... also just did search and found thread in google usenet archive
Date: 16 Nov 89 07:46:10 GMT Newsgroups: comp.periphs
Subject: Re: Need info about CDC Wren hard disks
Summary: a summary of interesting imprimis offerings

this is a brief summary of the latest offerings from imprimis. as always, i take care but no responsibility for details; call your imprimis rep. IN PARTICULAR, the prices are for me as an at&t person; we get huge volume discounts, you should probably add 50-75% to get a more available price. the data comes from data sheets and salespeople. caveat emptor.

wren vii: the latest in a long line of disks. 5.25in 1.2GB SCSI disk. average seek 16.5ms, 40KH MTBF. sustained 1.7MB/s. available now as evaluation units at $3294, probable eventual cost of ~$2500 ($2/MB).

elite: new range of 5.25in disks (eventually replacing the wren's). comes in SMD, IPI-2 and SCSI-2 interfaces. capacity is 1.2GB (1.5GB scsi). latency is <6ms, average seek 12ms. sustained transfer of 3MB/s. 100KH MTBF. smd evaluation units in jan, scsi production in apr/may ($4.5-5K).

sabre 2hp: new version of the regular 8in sabre; 1.2GB and 50KH MTBF. IPI-2 interface, sustained 6MB/s (twice regular sabres). latency 8.3ms, ave seek 15ms. these are shipping now, $7.4K.

sabre 2500: 2.5GB, evaluation jan/feb, seek 13ms, MTBF 100KH, 3MB/s, $8K.

arraymaster 9058: (base for imprimis's raid). this controller ($15K, at beta sites now) connects to drives (any kind, speed) via IPI-2 and connects to a host via IPI-3. assuming fast drives like sabre 2hp, host data rates are 25MB/s peak, 22MB/s sustained. imprimis will be selling a couple of packages based on this controller; a small pseudo-disk of 5GB, 20MB/s sustained transfer, and a larger disk 16GB, with two 18MB/s sustained i/o ports. both these packages have a lot of internal error correction, a mean time to data loss of ~114yrs.

P.S. i note in passing that the WREN V and WREN VI were plagued with early firmware problems regarding bus timeouts on long I/O transfers. these have been fixed (my drives were fixed under warranty) and new drives should be okay. but be wary of older drives.


... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

First 5.25in 1GB drive?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: First 5.25in 1GB drive?
Newsgroups: alt.folklore.computers
Date: Wed, 13 Apr 2011 20:08:20 -0400
re:
https://www.garlic.com/~lynn/2011f.html#22 First 5.25in 1GB drive?
https://www.garlic.com/~lynn/2011f.html#26 First 5.25in 1GB drive?

Date: Fri, 21 Sep 90 11:46:19 EST
From: wheeler
Subject: 18Sep90 Commercial Analysis NewsNotes

MICROSCIENCE INTERNATIONAL CORP BUYS SIEMENS DRIVE LINES

MICROSCIENCE has entered the high-capacity end of the 5.25-inch disk drive market by acquiring manufacturing and marketing rights for the Megafile product line from SIEMENS AG of Germany. Siemens had abandoned its 5.25-inch OEM drive business in late May, citing price competition and high operating costs as reasons. However, even before that decision was made, Siemens was negotiating for Microscience to build its 777-MByte and 1.2-GByte Megafile drives under an OEM contract, setting the stage for this agreement. The deal includes most of the manufacturing line equipment Siemens used to build the files in Germany, which Microscience will relocate to their facilities in Taiwan. Remaining parts inventory is said to be minimal and Microscience does not intend to hire any of the Siemens employees who worked on the project. However, Siemens is expected to work closely with Microscience on product development through June of next year. "We consider this to be an important part of the deal", said Kevin Nagle, Microscience president and chief executive.

Microscience expects to ship 777-MByte and 1.2-GByte drives in volume by December. Although prices were not available at this time, Siemens had priced the drives between $1800 and $2000 in OEM quantities. A 1.6-GByte drive is currently in development with evaluation units planned for December. This drive is expected to feature a 13-ms seek time and come with either a SCSI or ESDI interface. (Electronic News 9/3/90, p. 19)

OEM DASD

COMMENT: Most of the large OEM contracts for the 760-MByte level have been made, but the 1.2- and 1.6-GByte contracts are largely still open. If Microscience can maintain the Siemens quality and, with Taiwanese manufacturing, reduce the cost sufficiently to become price competitive, they will have a chance at some of the design-in contracts. IDC estimates that unit shipments of 5.25-inch drives of greater than one gigabyte will increase to about 45,000 by 1993 from approximately 20,000 this year.


... snip ... top of post, old email index

--
virtualization experience starting Jan1968, online at home since Mar1970

US military spending has increased 81% since 2001

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 14 Apr, 2011
Subject: US military spending has increased 81% since 2001
Blog: Facebook
US military spending has increased 81% since 2001; adjusted for inflation
http://bit.ly/fFQiWV

some more detail:
http://www.cdi.org/program/document.cfm?documentid=4623

note cdi.org has moved to
http://www.pogo.org/straus/
... missing $1T is now
http://www.pogo.org/straus/issues/defense-budget/2010/what-did-the-rumsfeld-gates-pentagon-do-with-1-trillion.html

Winslow (no relation) may just be saying that Gates may be portrayed that way if nothing substantive happens (which may be out of Gates' control). more detailed analysis (winslow is one of the authors):
http://chuckspinney.blogspot.com/p/pentagon-labyrinth.html

... written by many of Boyd's "acolytes" (in the past, I had sponsored Boyd's briefings at IBM)

Gates got some points for referencing Boyd (a number of times) in this speech:
http://www.defense.gov/speeches/speech.aspx?speechid=1443

I have way too many posts and references to Boyd here:
https://www.garlic.com/~lynn/subboyd.html

thread about military leaking into commercial
http://lnkd.in/6Kefvg

... by one of the labyrinth authors:
http://chuckspinney.blogspot.com/2011/02/why-boeing-is-imploding.html

... CSPAN had broadcast an interview with several of the authors ... where the 787 scenario was highlighted

--
virtualization experience starting Jan1968, online at home since Mar1970

TCP/IP Available on MVS When?

Refed: **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: TCP/IP Available on MVS When?
Newsgroups: bit.listserv.ibm-main
Date: 14 Apr 2011 08:45:07 -0700
Steve_Conway@AO.USCOURTS.GOV (Steve Conway) writes:
OK, let's invoke Jaffe's Law (Any ibm-main discussion will eventually become a history lesson) immediately.

In this case, I need a history lesson, preferably with citable references.

When (year and OS release, if available) did TCP/IP become available for VM? For MVS?

No forum is more perfectly suited for my question. :-)


the company product was done on VM and implemented in vs/pascal (5798-FAL). it had a number of thruput issues ... but I did the RFC1044 enhancements and in some testing at Cray Research ... between a 4341 and a cray ... got channel media sustained thruput using only a modest amount of 4341 cpu (about a 500 times improvement in instructions executed per byte moved). misc. past posts mentioning rfc 1044
https://www.garlic.com/~lynn/subnetwork.html#1044

vmshare reference to 5798-fal:
http://vm.marist.edu/~vmshare/browse.cgi?fn=TCPIP&ft=PROB

the base implementation was later made available on MVS by moving over the VM code and writing a simulation for some of the VM functions.

later there was a contract to implement tcp/ip support in VTAM. the folklore is that when the implementation was first demo'ed ... the company said that it was only paying for a "correct" implementation ... and everybody knows that a "correct" tcp/ip implementation is significantly slower than LU6.2 (not significantly faster). The contract was handled by the "local" ibm office in the Palo Alto Sq office bldg.

now, predating the products ... there were various univ. implementations ... reference to tcp/ip on MVS at UCLA in the late 70s:
https://en.wikipedia.org/wiki/Bob_Braden

this predates the great switchover from the host/imp protocol to the tcp/ip protocol on 1jan83.

--
virtualization experience starting Jan1968, online at home since Mar1970

TCP/IP Available on MVS When?

Refed: **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: TCP/IP Available on MVS When?
Newsgroups: bit.listserv.ibm-main
Date: 14 Apr 2011 12:24:21 -0700
lynn@GARLIC.COM (Anne & Lynn Wheeler) writes:
the company product was done on VM and implemented in vs/pascal (5798-FAL). it had a number of thruput issues ... but I did the RFC1044 enhancements and in some testing at Cray research ... between a 4341 and cray ... got channel media sustained thruput using only modest amount of 4341 cpu (about 500 times improvement in instructions executed per byte moved). misc. past posts mentioning rfc 1044
https://www.garlic.com/~lynn/subnetwork.html#1044


re:
https://www.garlic.com/~lynn/2011f.html#29 TCP/IP Available on MVS When?

part of the issue was that the base support shipped with a box that was basically a channel-attached bridge (similar to, but different from, the 3174 boxes that supported LANs) ... so the host stuff had to do all the ARP/MAC/LAN layer gorp ... the rfc1044 support was for a channel-attached tcp/ip router ... so a whole protocol layer was eliminated from host processing.
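
a minimal sketch of why that mattered; the values below are just the standard Ethernet-II frame layout and ARP/IP ethertypes (nothing from the actual 5798-FAL code) ... with the channel-attached bridge the host is handed raw LAN frames and has to do the MAC/ARP work itself, with the channel-attached tcp/ip router it is handed IP datagrams directly:

import struct

def host_side_with_bridge(frame):
    # standard Ethernet-II framing: 6-byte dst MAC, 6-byte src MAC, 2-byte type
    dst_mac, src_mac, ethertype = struct.unpack("!6s6sH", frame[:14])
    if ethertype == 0x0806:       # ARP -- the host stack has to answer these itself
        raise NotImplementedError("host must run its own ARP processing")
    if ethertype != 0x0800:       # not IPv4 -- nothing for the host stack here
        return None
    return frame[14:]             # finally, the IP datagram

def host_side_with_router(datagram):
    # the router already did the MAC/ARP/LAN gorp; the host starts at the IP layer
    return datagram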

well before the tcp/ip support in vtam ... there was considerable misinformation regarding sna/vtam flying about ... including that it was useable for the NSFNET backbone (the operational precursor to the modern internet).

recent references to SNA/VTAM misinformation from the period:
https://www.garlic.com/~lynn/2011.html#4 Is email dead? What do you think?
https://www.garlic.com/~lynn/2011b.html#65 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#12 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011d.html#4 Is email dead? What do you think?
https://www.garlic.com/~lynn/2011e.html#25 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#32 SNA/VTAM Misinformation
https://www.garlic.com/~lynn/2011e.html#34 SNA/VTAM Misinformation
https://www.garlic.com/~lynn/2011e.html#43 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2011e.html#57 SNA/VTAM Misinformation
https://www.garlic.com/~lynn/2011e.html#77 Internet pioneer Paul Baran
https://www.garlic.com/~lynn/2011e.html#83 History of APL -- Software Preservation Group

misc. old email regarding working with NSF on various activities leading up to NSFNET backbone:
https://www.garlic.com/~lynn/lhwemail.html#nsfnet

This has postings regarding various announcements
http://vm.marist.edu/~vmshare/browse.cgi?fn=IBMNEW89&ft=MEMO

from above posted 9/21/84:
VM Interface Program for TCP/IP (5798-DRG): Provides VM user the capability of participating in a network with TCP/IP transmission protocol. Includes ability to do file transfers, send mail, and log on remotely to VM hosts. (Comment: It's not clear whether this equals VM access to non-VM hosts such as are found on ARPANET. I believe this is the same product as WISCNET, already available to academic shops.)

... snip ...

and from above posted 4/22/87:
IBM also announced the new TCP/IP facility (5798-FAL) on 4/21/87. This package replaces the old program (5798-DRG) and includes some programs for PCs. The announcement is 287-165. To quote: "... IBM TCP/IP for VM provides the VM/SP, VM/SP HPO, or VM/XA SF user with the capability of participating in a multi-vendor Internet network using the TCP/IP protocol set. This protocol set is an implementation of several of the standard protocols defined for the Defense Advanced Research Projects Agency. The use of these protocols allows a VM user to interface with other systems that have implemented the TCP/IP protocols. This connectivity includes the ability to transfer files, send mail, and log on to a remote host in a network of different systems. The IBM TCP/IP for VM program uses a System/370 channel attached to a variety of controllers or devices for connection to the selected network. The network protocols supported are IBM Token-Ring, Ethernet(1) LAN, ProNET(2) and DDN X.25. IBM TCP/IP for VM offers IBM TCP/IP for the PC as an optional feature, allowing the user of an IBM personal computer on an IBM Token-Ring or Ethernet LAN to communicate with the VM system using the TCP/IP protocols." Announced devices supported are the IBM Series/1, 7170 DACU, and 9370 LAN adapters (Token Ring or Lan)

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

TCP/IP Available on MVS When?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: TCP/IP Available on MVS When?
Newsgroups: bit.listserv.ibm-main
Date: 15 Apr 2011 07:48:43 -0700
lynn@GARLIC.COM (Anne & Lynn Wheeler) writes:
part of the issue was that the base support shipped with a box that was basically a channel-attached bridge (similar but different to 3174 boxes that supported LANs) ... so the host stuff had to do all the ARP/MAC/LAN layer gorp ... rfc1044 support was channel attached tcpip-router ... so a whole protocol layer was eliminated from host processing.

well before the tcp/ip support in vtam ... there was considerable misinformation regarding sna/vtam flying about ... including being useable for NSFNET backbone (operational precursor to the modern internet).


re:
https://www.garlic.com/~lynn/2011f.html#29 TCP/IP Available on MVS When?
https://www.garlic.com/~lynn/2011f.html#30 TCP/IP Available on MVS When?

additional trivia drift ... this is an html version of the internal IOS3270 "green card" (material from GX20-1850-3 and other sources)
https://www.garlic.com/~lynn/gcard.html

IOS3270 was a cms application used for many applications ... including all the service panels in the (4361/cms) 3090 "service processor" ... old email reference
https://www.garlic.com/~lynn/2010e.html#email861031

the above is with respect to including a failure analysis/debug tool (that I had written) as part of the 3092.

there had been a "blue card" for the 360/67 ... that included details of 67 features (like virtual memory support, multiprocessor control register detail) ... it also included sense data for several devices. I had provided the GCARD author with the sense data information. I had also included the sense data for the channel-attached tcp/ip router.

I still have a "blue card" in a box someplace ... obtained at the science center from a fellow member (his name is "stamped" on the card). GML had been invented at the science center in 1969 ... GML actually is the first letter of the last names of the inventors (one of which is "stamped" on my blue gard). A decade later, in the late 70s, GML becomes an ISO standard as SGML.
https://www.garlic.com/~lynn/submain.html#sgml

Another decade & SGML morphs into HTML:
http://infomesh.net/html/history/early/
and the first webserver outside CERN, is on the SLAC VM/CMS system:
http://www.slac.stanford.edu/history/earlyweb/history.shtml

part of the significant sna/vtam misinformation campaign in the late 80s was aimed at getting the internal network converted over to sna/vtam. The campaign to convert the internal network backbone to sna/vtam was so thick that the backbone meetings were being restricted to management only (technical people were being excluded) ... recently posted old email:
https://www.garlic.com/~lynn/2011.html#email870306
in this post (linkedin Greater IBM discussion)
https://www.garlic.com/~lynn/2011.html#4 Is Email dead? What do you think?

In my HSDT effort I was doing multiple high-speed links and needed sustained aggregate thruput in excess of channel speeds (load spread across multiple mainframe channels).
https://www.garlic.com/~lynn/subnetwork.html#hsdt

The internal network VNET/RSCS used the vm spool file system for intermediate storage. The API was synchronous and on a heavily loaded system could limit thruput to 4-6 4kbyte blocks per second. I needed at least 100 times that thruput and had done a new API and (vm spool file) infrastructure to support sustained aggregate link thruput equivalent to multiple channels.
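
rough arithmetic behind the "at least 100 times" figure (the T1 rate is the standard 1.544 mbit/sec; everything else is straight from the numbers above):

block = 4096                        # 4kbyte spool blocks
old_rate = 6 * block                # ~4-6 blocks/sec on a loaded system (bytes/sec)
target = 100 * old_rate             # the "at least 100 times" goal

t1_bytes_per_sec = 1.544e6 / 8      # one T1 link
print("old API : %5.0f KB/sec" % (old_rate / 1024.0))          # ~24 KB/sec
print("target  : %5.0f KB/sec" % (target / 1024.0))            # ~2400 KB/sec
print("T1 links covered: %.1f" % (target / t1_bytes_per_sec))  # roughly a dozen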

The internal network had been larger than arpanet/internet from just about the beginning until late '85 or early '86.
https://www.garlic.com/~lynn/subnetwork.html#internalnet

and similar technology was used for BITNET/EARN (where this mailing list originated).
https://www.garlic.com/~lynn/subnetwork.html#bitnet

Rather than converting the internal network links to SNA/VTAM, it would have been significantly better, more cost-effective, and more efficient to have converted the internal network links to tcp/ip.

--
virtualization experience starting Jan1968, online at home since Mar1970

At least two decades back, some gurus predicted that mainframes would disappear

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 15 Apr, 2011
Subject: At least two decades back, some gurus predicted that mainframes would disappear
Blog: Greater IBM
re:
https://www.garlic.com/~lynn/2011e.html#15 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011e.html#16 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011e.html#17 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011e.html#19 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened

re: 747/cessnas ... the mega-datacenters ... are possibly analogous to operating millions of bullet trains with a couple dozen operators.

a little more ("vax-killer") 9370 trivia from old email in 87:
https://www.garlic.com/~lynn/2011e.html#30

Recent reference to having worked on a 14% thruput improvement for a 450K+ statement cobol application that ran in overnight batch. There were 40-some bloated CECs sized for the overnight batch window (@$30+m each). They were doing frequent hardware upgrades, with nothing older than 18 months.
https://www.garlic.com/~lynn/2011c.html#35
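
very rough arithmetic on what that 14% was worth, assuming the improvement scales roughly linearly across the configuration (the only numbers from the original are the 14%, the ~40 CECs, and the $30+m price; the linear scaling is an assumption for illustration):

cecs = 40
price_per_cec = 30e6
improvement = 0.14

equivalent_cecs = cecs * improvement
print("capacity equivalent: ~%.1f CECs" % equivalent_cecs)                        # ~5.6
print("hardware equivalent: ~$%.0fM" % (equivalent_cecs * price_per_cec / 1e6))   # ~$168M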

In 1980, (internal) STL (development lab) was bursting at the seams and was in the process of moving 300 people from the IMS group to a remote bldg ... with 3270 terminals connected back to the STL datacenter. They had tried "remote" 3270s and found the human factors absolutely deplorable. I volunteered to write support for a non-IBM channel-extender that allowed "local" 3270s to operate (at the remote bldg) connected back to the STL mainframes.

One of the things I did was simulate a channel-check for certain kinds of unrecoverable transmission errors (channel extender operation included a microwave telco transmission segment) ... which would then fall through to standard operating system retry operations. The vendor then tried (unsuccessfully) to talk IBM into allowing my support to be shipped to customers. Finally they re-implemented the same design from scratch.

Roll forward to the first year of 3090 customer installs and I'm contacted by the 3090 product manager. There is a service that collects customer mainframe EREP (error reporting) data and generates regular summary reports of the information. The 3090 was designed to have an aggregate total of 5-6 channel check errors (in aggregate across all customer 3090s over a period of a year). It turns out that an aggregate of 20 channel check errors had been reported across all customers during the first year of operation ... the additional errors coming from the channel-extender software (simulating a channel check error for unrecoverable transmission errors). I was asked if I could get the vendor to change their software emulation so it wouldn't simulate channel checks. After some amount of research, I determined that simulating IFCC (interface control check) would effectively result in the same EREP processing (w/o messing up the 3090 channel check reports).

at this NASA dependable computing meeting
https://web.archive.org/web/20011004023230/http://www.hdcc.cs.cmu.edu/may01/index.html

I asked the other vendors if any of them had reports of every error on every installed machine (and/or whether their machines were designed to have less than an aggregate of a dozen specific kinds of errors across all installed machines over a period of a year). Part of a high-integrity infrastructure starts with detailed instrumentation and corresponding reports ... in order to actually know what is going on.

Note that field engineering has had a bootstrap, incremental diagnostic process that starts with scoping parts in the field. Starting with the 3081 and TCMs, it was no longer possible to probe much of the hardware (inside the TCM). As a result, a "service" processor was introduced ... the field engineer could "scope" the service processor ... and then use the "service processor" to diagnose the rest of the machine. For the 3090, the "service processor" was initially "upgraded" to a 4331 running a highly customized version of vm370/cms. Later the 3090 "service processor" (aka 3092) was upgraded to a pair of (redundant) 4361s (both running a highly customized version of vm370/cms).

Lots of today's infrastructure involves network operation with a large number of components using DBMS transaction semantics ... and the network environment includes transmission retry. In such an environment ... it is possible to have five-nines high availability with redundant/fail-over systems ... where the individual components just need to be designed to fail safely.
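
a minimal sketch of the redundancy arithmetic behind that claim, assuming independent failures (the 99.9% per-component figure is made up for illustration):

def redundant_availability(component, copies):
    # the service is down only if every redundant copy is down at once
    return 1.0 - (1.0 - component) ** copies

a = 0.999                                           # a "three nines" component
print(redundant_availability(a, 1))                 # 0.999
print(redundant_availability(a, 2))                 # 0.999999 -- better than five nines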

--
virtualization experience starting Jan1968, online at home since Mar1970

At least two decades back, some gurus predicted that mainframes would disappear

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 16 Apr, 2011
Subject: At least two decades back, some gurus predicted that mainframes would disappear
Blog: Greater IBM
re:
https://www.garlic.com/~lynn/2011e.html#15 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011e.html#16 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011e.html#17 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011e.html#19 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011f.html#32 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened

We complained long & loud that the 3274/3278 was much worse for interactive computing than the 3272/3277 it was replacing. Eventually we got an answer back that the 3274/3278 wasn't targeted at interactive computing; it was targeted at data entry.

In the late 80s, a senior disk engineer got a talk scheduled at the annual, world-wide internal communication group conference ... and opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The issue was that the communication group had a stranglehold on the datacenter and also had strategic ownership of products that "crossed" the datacenter walls. In disk sales numbers, the disk division saw the leading edge of a tsunami of data fleeing the datacenter for more distributed-computing-friendly platforms. The disk division had come up with several products to address the problem, but was constantly being vetoed by the communication group (since they had strategic ownership of products that involved crossing the datacenter walls). misc. past posts mentioning the stranglehold the communication group had on the datacenter
https://www.garlic.com/~lynn/subnetwork.html#emulation

In the late 80s, while the communication group was blocking datacenter involvement in client/server, we had come up with a 3-tier network architecture (middle layer), were including it in gov. request responses, and were out pitching it to customer executives (while taking constant barbs in the back from the communication group) ... misc. past posts
https://www.garlic.com/~lynn/subnetwork.html#3tier

The original SQL/relational implementation was System/R in bldg. 28. There used to be "friendly" arguments between IMS down in STL and System/R. The ("physical") IMS group argued that relational doubled the disk space requirements (for the "implicit" indexes) ... and also greatly increased the disk i/os (also for reading the "implicit" indexes). The System/R group countered that exposing the explicit record pointers (as part of the data) greatly increased manual maintenance.

In the 80s, there was a significant increase in system real storage sizes (allowing caching of the relational indexes, significantly reducing the relational i/o overhead penalty) and a significant decrease in disk cost per byte (significantly reducing the index cost penalty). At the same time, people expense was going up and the number of skilled people was failing to keep up with the customer appetite for DBMS. SQL/Relational reduced the people time & skill required for DBMS support (compared to 60s "physical" DBMS). misc. past posts mentioning the original relational/SQL
https://www.garlic.com/~lynn/submain.html#systemr

--
virtualization experience starting Jan1968, online at home since Mar1970

Early mainframe tcp/ip support (from ibm-main mailing list)

From: lynn@garlic.com (Lynn Wheeler)
Date: 16 Apr, 2011
Subject: Early mainframe tcp/ip support (from ibm-main mailing list)
Blog: Old Geek Registry
re:
http://lnkd.in/UkVnRg
and
https://www.garlic.com/~lynn/2011f.html#29
https://www.garlic.com/~lynn/2011f.html#30
https://www.garlic.com/~lynn/2011f.html#31

We had been working with the director of NSF for some time ... loosely around a project I called HSDT ... and with many of the sites that were to become part of the NSFNET backbone ... and were meeting lots of internal opposition. At one point, NSF had a budget proposal for HSDT to get $20M ... but congress cut back on the budget for such activity. Then the internal opposition really kicked in (along with lots of SNA/VTAM misinformation in many areas) ... and we weren't allowed to participate in the actual NSFNET backbone at all. Misc. past email
https://www.garlic.com/~lynn/lhwemail.html#nsfnet

The NSFNET T1 backbone RFP specified "T1" (in large part because I already had T1 up and running in production). The winning bid actually put in 440kbit links and then, possibly to try and meet the letter of the RFP, ran them over T1 trunks with telco multiplexors managing multiple 440kbit links per T1 trunk (there was some sniping about why they couldn't call it a T5 network ... since some of the T1 trunks may have been, in turn, multiplexed over T5 trunks).
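
the multiplexing arithmetic being sniped at (1.544 mbit/sec is the standard T1 rate; the "3 per trunk" result is just arithmetic, not a claim about the actual NSFNET configuration):

t1 = 1.544e6          # bits/sec
link = 440e3          # bits/sec
print("440kbit links per T1 trunk:", int(t1 // link))          # 3
print("leftover: %.0f kbit/sec" % ((t1 - 3 * link) / 1e3))     # ~224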

Then when there was the NSFNET-II backbone (upgrade to T3), I was asked to be the red team (possibly anticipating that they would be able to shut down my sniping) and there were a couple dozen people from a half-dozen labs around the world on the blue team. At the final review, I presented first. Then 5mins into the blue team presentation, the person running the review ... pounded on the table and said that he would lay down in front of a garbage truck before he allowed any but the blue team proposal to go forth. misc. past posts mentioning NSFNET activity
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

semi-related recent post (mentioning the late-80s talk about the communication group being responsible for the demise of the disk division):
https://www.garlic.com/~lynn/2011f.html#33
in this Mainframe Experts discussion
http://lnkd.in/mk79ZS

--
virtualization experience starting Jan1968, online at home since Mar1970

At least two decades back, some gurus predicted that mainframes would disappear

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 16 Apr, 2011
Subject: At least two decades back, some gurus predicted that mainframes would disappear
Blog: Greater IBM
re:
http://lnkd.in/mk79ZS
and
https://www.garlic.com/~lynn/2011e.html#15 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011e.html#16 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011e.html#17 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011e.html#19 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011f.html#32 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011f.html#33 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened

Possibly because GAO didn't believe SEC was doing anything ... they started doing reports on public company financial filings that they felt were fraudulent and/or contained significant accounting errors ... there was even an uptick after SOX. Apparently the motivation was boosting executive bonuses ... and even if the filings were later corrected, the bonuses weren't adjusted.

http://www.gao.gov/new.items/d061053r.pdf ,
https://www.gao.gov/products/gao-06-1079sp

recently from somewhere on the web: "Enron was a dry run and it worked so well it has become institutionalized"

a somewhat similar picture was painted in congressional hearings into the Madoff ponzi scheme by the person who had tried for a decade to get SEC to do something about Madoff.

and then Cramer's comments on whether SEC would ever do anything about illegal naked short selling
http://nypost.com/2007/03/20/cramer-reveals-a-bit-too-much/

SOX also required SEC to do something about the rating agencies ... which played a pivotal role in the recent financial mess ... but nothing appears to have been done except this report:
http://www.sec.gov/news/studies/credratingreport0103.pdf

some of what actually went on came out in fall2008 congressional testimony ... but there appeared to be little followup ... although just recently there is this:
http://www.computerworlduk.com/news/it-business/3274277/bank-email-archives-thrown-open-in-devastating-financial-crash-report/1

possibly attempting to explain little or nothing being done:
http://blogs.forbes.com/neilweinberg/2011/04/14/corrupt-bank-oversight-is-creating-new-immoral-hazard/
http://www.nytimes.com/2011/04/14/business/14prosecute.html?_r=2&hp

minor nit ... austin romp (801) was originally targeted as the displaywriter follow-on ... when that was killed, the box was retargeted for the unix workstation market ... and the company that had done the AT&T unix port (pc/ix) for the ibm/pc was hired to do the "AIX" port (release 2) for the PC/RT. That morphed into AIX (release 3) for power/rios.

Somewhat in parallel, the palo alto group had been working with the UCLA Locus system (a unix work-alike that supported both distributed file system and distributed process execution). The palo alto group ported UCLA Locus to both mainframe and 386 ... which was released as the products aix/370 and aix/386.

People in Palo Alto had also been working with Berkeley and their UNIX work-alike (BSD) ... which was originally going to be ported to the mainframe and released. However, they got redirected to do the port to the PC/RT instead and it was released as "AOS" (as a PC/RT alternative to AIX).

There was also a number of groups working with CMU on their AFS (distributed filesystem) and MACH (CMU's unix work-alike). A number of companies picked up MACH for their "unix" system ... including Jobs at NeXT. When Jobs returned to Apple, he brought MACH along and it morphed into the (new) Apple (MAC) operating system.

--
virtualization experience starting Jan1968, online at home since Mar1970

Early mainframe tcp/ip support (from ibm-main mailing list)

Refed: **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 16 Apr, 2011
Subject: Early mainframe tcp/ip support (from ibm-main mailing list)
Blog: Old Geek Registry
re:
http://lnkd.in/UkVnRg

recent post I did on facebook:

FTP is 40 years old
http://www.bit-tech.net/news/hardware/2011/04/15/ftp-is-40-years-old/1
FTP still forms the backbone of many parts of the Internet, despite its age. The backbone of the Internet, FTP (file transfer protocol), celebrates its 40th birthday tomorrow. Originally launched as the RFC 114 specification, which was published on 16 April 1971, FTP is arguably even more important today than when it was born.

... snip ...

one of my responses to some of the comments:
File Transfer Protocol (rfc959) talks about the history (sec2.1) ... as well as some issues/problems caused by remapping to TCP (including needing separate control & data streams, rfc765). who does separate control&data streams on tcp these days?

... snip ...

aka the arpanet host/imp protocol provided for an "out-of-band" control mechanism ... which wasn't carried over into tcp/ip ... so when remapping FTP to TCP/IP, they added a separate session for the control traffic.
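
a minimal sketch of those two streams as they appear from Python's standard ftplib (host and file name below are hypothetical placeholders):

from ftplib import FTP

# control connection: one long-lived TCP session carrying only commands & replies
ftp = FTP("ftp.example.com")     # hypothetical host
ftp.login()                      # anonymous login, still on the control connection

# each transfer opens a *separate* TCP data connection (PASV/PORT) --
# the rfc959 descendant of the old host/imp "out-of-band" control idea
with open("somefile", "wb") as out:
    ftp.retrbinary("RETR somefile", out.write)

ftp.quit()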

for other references ... my IETF RFC index
https://www.garlic.com/~lynn/rfcietff.htm

at interop '88 (tcp/ip interoperability in 1988), there was a significant amount of OSI (various x.nnn stuff) in various booths ... probably because the gov. had mandated the elimination of tcp/ip and change-over to gosip. I had a pc/rt in a booth ... but not IBM's ... it was at a right angle adjacent to the SUN booth (in the central aisle) and Case/SNMP (still had not won the monitor wars) was in the SUN booth. Case was talked into coming over and doing an SNMP build/install on the PC/RT. misc. past posts mentioning Interop '88
https://www.garlic.com/~lynn/subnetwork.html#interop88

as mentioned there was huge SNA/VTAM misinformation at the time ... part of it to convert the internal network to SNA/VTAM (bitnet/earn was using similar technology to the internal network) ... including lots of claims that SNA/VTAM would even be applicable to the NSFNET backbone. Reference to a significant amount of email that had been collected on the SNA/VTAM misinformation:
https://www.garlic.com/~lynn/2006w.html#email870109

other references to SNA/VTAM misinformation ... for the internal network ... they had changed the internal backbone meetings to exclude the technical people and include managers only
https://www.garlic.com/~lynn/2006x.html#email870302 ,
https://www.garlic.com/~lynn/2011.html#email870306

for other drift ... old email about getting EARN going:
https://www.garlic.com/~lynn/2001h.html#email840320

part of the above was because I had worked with the person on cp67 in the early 70s. then I was blamed for online computer conferencing on the internal network in the late 70s and early 80s (folklore was that when the executive committee was told about online computer conferencing and the internal network, 5of6 wanted to fire me immediately). Part of the activity was referred to as Tandem Memos ... from IBM Jargon:
Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products.

... snip ...

some rumors about Tandem Memos even leaked outside IBM and there was article in Nov81 Datamation.

Note one of the differences between IETF (tcp/ip) and ISO (OSI) was that IETF required different interoperable implementations before standards progression ... while ISO didn't require that a standard even be implementable before passing as standard.

--
virtualization experience starting Jan1968, online at home since Mar1970

At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 17 Apr, 2011
Subject: At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
Blog: Mainframe Experts
re:
https://www.garlic.com/~lynn/2011e.html#15 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011e.html#16 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011e.html#17 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011e.html#19 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011f.html#32 At least two decades back, some gurus predicted that mainframes would disappear
https://www.garlic.com/~lynn/2011f.html#33 At least two decades back, some gurus predicted that mainframes would disappear
https://www.garlic.com/~lynn/2011f.html#35 At least two decades back, some gurus predicted that mainframes would disappear

also:
http://lnkd.in/mk79ZS

"VM IS DEAD" tome from 1989 (from employee involved in development) containing some of the items that contributed to view that mainframe was dying
https://www.garlic.com/~lynn/2011e.html#95

vm/4300 sold head-to-head against vax/vms in the mid-range market over the whole period in about similar numbers ... for orders involving small numbers of machines ... the place where 43xx outdid vax/vms was in large corporate orders involving multiple hundreds of machines. The large 43xx orders were being placed out in every corporate nook&cranny ... internally it contributed to dept. conference rooms becoming a scarce commodity (so many being taken over for 43xx dept. machines). It was the leading wave of distributed computing. Old 43xx email (back to before 43xx announced & shipped)
https://www.garlic.com/~lynn/lhwemail.html#43xx

By the mid-80s, the mid-range market was starting to be taken over by workstations and large PCs. The 4331/4341 follow-ons (4361 & 4381) were expected to see similar large sales ... which never materialized. This is a decade of vax numbers sliced & diced by year, model, us/world-wide ... showing the big plunge in the mid-80s (something similar was seen by 43xx):
https://www.garlic.com/~lynn/2002f.html#0

--
virtualization experience starting Jan1968, online at home since Mar1970

VM IS DEAD tome from 1989

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: VM IS DEAD tome from 1989
Newsgroups: alt.folklore.computers
Date: Sun, 17 Apr 2011 17:22:22 -0400
VM IS DEAD
https://www.garlic.com/~lynn/2011e.html#95

and something similar in thread about predictions from early 90s that the mainframe was dead
https://www.garlic.com/~lynn/2011f.html#37

--
virtualization experience starting Jan1968, online at home since Mar1970

At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 17 Apr, 2011
Subject: At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
Blog: Mainframe Experts
re:
https://www.garlic.com/~lynn/2011f.html#37 At least two decades back, some gurus predicted that mainframes would disappear
also:
http://lnkd.in/mk79ZS

The "VM IS DEAD" tome (ala vm370) was written in 1989 when it appeared to be on its death bed. All sorts of data was fleeing the datacenter (given the stranglehold that the communication group had on the datacenter) ... behind the talk by senior disk engineer gave about the communication group was going to be responsible for the demise of the disk division ... also behind the rumors in the early 90s about the death of the mainframe.

In the 90s, a few operations found that they couldn't easily move some kinds of data off the mainframe which allowed it to linger on ... and around the start of the century, virtual linux breathed a whole new life into VM.

In the wake of the failed Future System project ... there was a mad rush to do a number of things ... XA-architecture (code name "811", from the nov78 date on many of the documents), 303x (3031 was a 158-3 with new covers, 3032 was a 168-3 with new covers, and 3033 started out as 168-3 logic/wiring remapped to faster chips), and 3081 (which would implement some of "811"). Some details about FS, XA-architecture, 303x, 3081, and "811" here:
http://www.jfsowa.com/computer/memo125.htm
and
https://people.computing.clemson.edu/~mark/fs.html

Head of POK had managed to convince corporate to kill off VM in the mid-70s, shut down the development group and move all the people to POK as part of MVS/XA development (otherwise MVS/XA would miss its ship schedule). There was a joke about the head of POK being a major contributor to VAX/VMS since so many people from the development group left and went to work on VMS (rather than move to POK). Endicott managed to save the VM product mission, but had to reconstitute a development group from scratch.

The VM thing in 3081/XA was SIE ... but it was never intended for customer use ... purely an internal-only tool supporting MVS/XA development. Because of limited microcode space in 3081, the SIE instruction microcode was "paged" ... giving it very bad performance (but performance wasn't an issue because it was originally purely for internal MVS/XA development and test).

SIE was significantly enhanced in 3090 and extended with PR/SM (the microcode base for LPARs) in response to Amdahl's "hypervisor" (and finally given "real" life).

There was a small amount of "811" retrofitted to 3033 by the person credited with Itanium (who was at IBM at the time and then left to work for HP) ... called dual-address space mode ... which was specifically to address an enormously growing problem in MVS with the Common Segment (CSA) ... aka it had nothing to do with VM.

Virtual machines were originally done by the science center on a specially modified 360/40 (with virtual memory hardware support) ... and called cp/40. Then when an "official" 360 with virtual memory hardware started shipping, the science center replaced its 360/40 with 360/67 and cp/40 morphed into cp/67 .... and the science center started making it available to customers. Misc. past posts mentioning the science center (4th flr, 545 tech sq)
https://www.garlic.com/~lynn/subtopic.html#545tech

The cp/67 group split off from the science center and took over the Boston Programming Center on the 3rd flr ... and started the morph from cp67 to vm370. When the vm370 group outgrew the 3rd flr ... they moved out into the old SBC bldg in burlington mall (SBC having been given to CDC as part of a legal settlement). It was the burlington mall group that was shut down by the head of POK in the mid-70s (with many of the people deciding to leave and go to work for DEC).

misc. past posts mentioning FS
https://www.garlic.com/~lynn/submain.html#futuresys

other posts in this thread:
https://www.garlic.com/~lynn/2011e.html#15 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011e.html#16 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011e.html#17 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011e.html#19 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011f.html#32 At least two decades back, some gurus predicted that mainframes would disappear
https://www.garlic.com/~lynn/2011f.html#33 At least two decades back, some gurus predicted that mainframes would disappear
https://www.garlic.com/~lynn/2011f.html#35 At least two decades back, some gurus predicted that mainframes would disappear

--
virtualization experience starting Jan1968, online at home since Mar1970

The first personal computer (PC)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The first personal computer (PC)
Newsgroups: alt.folklore.computers
Date: Mon, 18 Apr 2011 09:22:05 -0400
jmfbahciv <See.above@aol.com> writes:
I have problems with anything which requires energy I don't have. Thinking requires an enormous amount of energy. Where you have |--------------------------....-------| this amount of energy/day, I have |-| this amount. It's a symptom of CFS; a better name is CFIDS.

there has been folklore about people using only 5-10% of their brain ... including a recent movie about what might happen using 100% of the brain.

there are some number of recent reports that evolution has attempted to optimize brain energy use ... since the brain is one of the major energy-using organs. all sorts of brain activity is being described as accomplished with the minimum use of energy (which is considered a survival characteristic) ... aka only using as much of the brain as necessary.

misc. recent posts:
https://www.garlic.com/~lynn/2010q.html#14 Compressing the OODA-Loop - Removing the D (and mayby even an O)
https://www.garlic.com/~lynn/2010q.html#61 Compressing the OODA-Loop - Removing the D (and maybe even an O)
https://www.garlic.com/~lynn/2011.html#7 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOSor Windows
https://www.garlic.com/~lynn/2011.html#39 The FreeWill instruction
https://www.garlic.com/~lynn/2011.html#78 subscripti ng

--
virtualization experience starting Jan1968, online at home since Mar1970

CPU utilization/forecasting

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: CPU utilization/forecasting
Newsgroups: bit.listserv.ibm-main
Date: 18 Apr 2011 06:36:19 -0700
dba@LISTS.DUDA.COM (David Andrews) writes:
Once upon a time I found it useful to condense a month's worth of RMF data into a single graph showing the average CPU utilization over the course of a day, plus-or-minus one standard deviation. That drop-bar chart made it easy to visualize two-thirds of our daily workload at a glance.

in the early days of commercial virtual machine online service bureaus (sort of the cloud computing of the 60s & 70s) ... there were reports showing the peaks & valleys of avg. daily online use ... and being able to extend use over the whole country ... allowing the peaks from the different timezones to offset the valleys in other timezones.
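
a minimal sketch of that timezone effect (the hourly load shape is invented, just to show how shifted copies of the same daily curve flatten the aggregate peak):

# one hypothetical daily utilization profile (24 hourly samples, 0..1)
daily = [0.1]*7 + [0.6, 0.9, 0.9, 0.8, 0.7, 0.8, 0.9, 0.9, 0.7, 0.5] + [0.2]*7

def shifted(profile, hours):
    # same daily shape, offset by a timezone difference
    return profile[-hours:] + profile[:-hours] if hours else profile

# four regions spread across US timezones (0..3 hours apart)
regions = [shifted(daily, h) for h in (0, 1, 2, 3)]
aggregate = [sum(r[h] for r in regions) / len(regions) for h in range(24)]

print("single-region peak utilization:", max(daily))                # 0.9
print("aggregate peak utilization    :", round(max(aggregate), 2))  # lower: peaks fill valleys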

the science center had started accumulating all its (virtual machine) system activity from the 60s ... and established it as standard process for normal operation. By the mid-70s, the science center not only had several years of its own data ... but was also acquiring similar data from large number of internal datacenters. This was used for a lot of modeling and simulation work, along with workload & configuration profiling ... which eventually evolves into capacity planning.

one of the science center's models (implemented in APL) ... was made available (starting in the mid-70s) on the (internal online virtual machine) HONE systems (providing world-wide sales & marketing support) as the Performance Predictor. Sales people could collect customer workload & configuration profile and ask "what-if" questions (of the Performance Predictor) about changes to workloads and/or configuration.
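
I don't have the actual APL, but a toy "what-if" model of the same general flavor might look like the following (the queueing approximation and all the numbers are illustrative only, not the real Performance Predictor):

# toy what-if: estimate cpu utilization & response time for workload/configuration changes
def what_if(tx_per_sec, cpu_sec_per_tx, relative_cpu_speed=1.0):
    service = cpu_sec_per_tx / relative_cpu_speed
    util = tx_per_sec * service
    if util >= 1.0:
        return util, float("inf")            # saturated
    return util, service / (1.0 - util)      # simple M/M/1-style response time

print(what_if(8, 0.10))                              # baseline customer profile (hypothetical)
print(what_if(12, 0.10))                             # "what if the workload grows 50%?"
print(what_if(12, 0.10, relative_cpu_speed=1.5))     # "... and we move to a 1.5x faster processor?"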

misc. past posts mentioning science center
https://www.garlic.com/~lynn/subtopic.html#545tech

--
virtualization experience starting Jan1968, online at home since Mar1970

At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 18 Apr, 2011
Subject: At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
Blog: Mainframe Experts
re:
https://www.garlic.com/~lynn/2011f.html#39 At least two decades back, some gurus predicted that mainframes would disappear
also:
http://lnkd.in/mk79ZS

The 303x was pre-XA ... a quick&dirty mad rush to get something out as fast as possible after the death of FS ... with FS having killed off nearly all 370 activity ... viewing it as competition. 303x went on in parallel with "811" and 3081 ... see the Sowa and Clemson references; 3081 was going to take 7-8yrs from start, and there was nothing in the pipeline because FS had killed off all competition ... so something had to be done as fast as possible. FS killing off all (internal) competition is also credited with giving clone processors a foothold in the market.

FE had a bootstrap service protocol that started with being able to scope components. TCMs were introduced with 3081 ... with components inside the TCM no longer "scopeable". The service processor was introduced to address the FE process. The (primitive UC-based) service processor could be scoped & diagnosed ... and then the 3081 was built with all sorts of probes into the TCMs that were accessible by the service processor (FEs being able to use the service processor to diagnose the rest of the machine). However, the UC-based service processor was really too primitive to do a whole lot.

The 3090 service processor (3092) started out as a 4331 with FBA disk running a highly modified version of VM370 release 6 (with FEs being able to scope/diagnose the 4331 and then use the 3092 to diagnose the 3090 TCMs). By the time 3090s started shipping, the "3092" had been upgraded to a pair of redundant 4361s (running a highly modified version of VM370 release 6). It was this facility that was sophisticated enough to handle PR/SM and the emerging LPAR (to compete against Amdahl's hypervisor). As an aside ... all the 3090 service processor screens were done in CMS IOS3270 ... mentioned in this "Old Geek" discussion:
http://lnkd.in/UkVnRg
about early mainframe tcp/ip support (also ran in ibm-main mainframe mailing list).

I was heavily involved with the 138/148 ... precursor to 4331/4341 ... and then with 4331/4341 ... as previously mentioned, misc. old email about 43xx
https://www.garlic.com/~lynn/lhwemail.html#43xx

I had been con'ed into helping with parts of the 138/148 ... and then the endicott product manager for the 138/148 (code named virgil/tully) con'ed me into running around the world with him making presentations to the various country product forecasters. World Trade (non-US) forecasts tended to be taken at face value ... since the forecast went from the plant books and showed up on the country books for sale to customers. US forecasts had turned into whatever hdqtrs said was "strategic" (since machines went directly from the plant's books to the customer ... w/o ever showing on the books of the sales organization ... and therefore there was no downside for US sales to forecast whatever hdqtrs told them was strategic). There was eventually a joke in the US that labeling something as "strategic" was a method of pushing a product that customers weren't buying (since US "strategic" carried with it various kinds of sales incentives).

--
virtualization experience starting Jan1968, online at home since Mar1970

Massive Fraud, Common Crime, No Prosecutions

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 18 Apr, 2011
Subject: Massive Fraud, Common Crime, No Prosecutions
Blog: Facebook
Massive Fraud, Common Crime, No Prosecutions
http://www.phibetaiota.net/2011/04/massive-fraud-common-crime-no-prosecutions/

There had been some securitized mortgages (CDOs) in the S&L crisis with doctored supporting documents for fraud. In the late 90s, we were asked to look at what could be done for integrity/assurance of securitized mortgages (CDOs) supporting documents.

In the first decade of this century, loan originators found they could pay rating agencies for triple-A ratings and immediately unload every loan (without regard to quality or borrower's qualifications). Speculators found that no-down, no-documentation, 1% interest-only-payment ARMs could make 2000% ROI buying&flipping properties in parts of the country with 20%-30% (house price) inflation.
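
the rough arithmetic behind a number like 2000% (everything below is illustrative of the mechanics only, ignoring transaction costs, taxes, etc.):

# hypothetical flip: no-down, no-doc, 1% interest-only ARM in a ~25%/yr appreciation market
price        = 500_000
appreciation = 0.25        # middle of the 20%-30% regional house-price inflation
interest     = 0.01        # 1% interest-only payments
hold_years   = 1

gain        = price * appreciation * hold_years    # 125,000 on the flip
cash_outlay = price * interest * hold_years        # 5,000 actually paid out of pocket
print("ROI on cash actually put in: %.0f%%" % (gain / cash_outlay * 100))   # well over 2000%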

Buyers of triple-A rated toxic CDOs didn't care about supporting documents, they were buying purely based on the triple-A rating. Rating agencies no longer cared about supporting documentation because they were being paid to give triple-A ratings. Supporting documentation just slowed down the loan originator's process of issuing loans. Since nobody cared about supporting documentation, it became superfluous ... which also meant there was no longer an issue about supporting documentation integrity/assurance.

Lending money used to be about making profit on the loan payments over the life of the loan. With triple-A rated toxic CDOs, for many, it became purely a matter of the fees & commissions on the transactions and doing as many as possible. There were reportedly $27T in triple-A rated toxic CDO transactions done during the bubble ... with trillions in fees & commissions disappearing into various pockets.

Possibly aggregate 15%-20% take on the $27T ($5.4T) as the various transactions wander through the infrastructure (starting with original real estate sale).

In the fall2008 congressional hearings into the rating agencies ... the issue was raised that the rating agencies might blackmail the gov. into not prosecuting with the threat of downgrading the gov's credit rating (an issue that is in the news today).

there was report that wall street tripled in size (as percent of GDP) during the bubble and NY state comptroller reported that wall street bonuses spiked over 400% during the bubble ... all being fed by the $27T in triple-A rated toxic CDO transactions.

recent posts mentioning the $27T:
https://www.garlic.com/~lynn/2011.html#50 What do you think about fraud prevention in the governments?
https://www.garlic.com/~lynn/2011.html#80 Chinese and Indian Entrepreneurs Are Eating America's Lunch
https://www.garlic.com/~lynn/2011b.html#27 The Zippo Lighter theory of the financial crisis (or, who do we want to blame?)
https://www.garlic.com/~lynn/2011b.html#42 Productivity And Bubbles
https://www.garlic.com/~lynn/2011b.html#43 Productivity And Bubbles
https://www.garlic.com/~lynn/2011b.html#45 Productivity And Bubbles
https://www.garlic.com/~lynn/2011c.html#46 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011d.html#23 The first personal computer (PC)
https://www.garlic.com/~lynn/2011e.html#7 I actually miss working at IBM
https://www.garlic.com/~lynn/2011e.html#36 On Protectionism
https://www.garlic.com/~lynn/2011e.html#48 On Protectionism
https://www.garlic.com/~lynn/2011e.html#60 In your opinon, what is the highest risk of financial fraud for a corporation ?
https://www.garlic.com/~lynn/2011e.html#74 The first personal computer (PC)

--
virtualization experience starting Jan1968, online at home since Mar1970

The first personal computer (PC)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The first personal computer (PC)
Newsgroups: alt.folklore.computers
Date: Mon, 18 Apr 2011 14:21:54 -0400
Peter Flass <Peter_Flass@Yahoo.com> writes:
Sounds like current CPUs. I'd be afraid to think of what might happen if we used 100% of our brains -- before they burned out from overload. I guess I don't have to worry, my wife says I'm closer to the 5% end.

re:
https://www.garlic.com/~lynn/2011f.html#40 The first personal computer (PC)

increase by factor of ten could require ten times the blood flow and oxygen uptake ... as well as getting rid of the additional generated heat; likely would require some serious augmentation to pump in the extra blood/oxygen and remove the extra heat.

serious overclockers are using liquid nitrogen to go from 2.8GHz to 5.5GHz (about double):
http://www.liquidnitrogenoverclocking.com/index.shtml
http://www.tomshardware.com/reviews/5-ghz-project,731.html

would take some serious redoing of how the brain was laid out and augmented.

--
virtualization experience starting Jan1968, online at home since Mar1970

At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 18 Apr, 2011
Subject: At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
Blog: Mainframe Experts
re:
https://www.garlic.com/~lynn/2011f.html#42 At least two decades back, some gurus predicted that mainframes would disappear
also:
http://lnkd.in/mk79ZS

IBM had parallel bus&tag that was 4.5mbytes/sec. Cray channel was standardized as HIPPI parallel at 100mbyte/sec (big push for the standard by LANL). IBM tried to support HIPPI on 3090 but 3090 couldn't handle the I/O rate. They did an ugly hack that cut into the side of the expanded storage bus, and HIPPI transfers were done with a peek/poke paradigm to special addresses.

In parallel with LANL & HIPPI, LLNL had serial interface that was being pushed as standard called FCS ... 1gbit/sec dual-simplex (i.e. concurrent 1gbit/sec in each direction).

In parallel with LANL & LLNL, somebody from SLAC was working on dual-simplex standard ... SCI ... with specification for both asynchronous memory bus operation and as I/O operation

There had been fiber technology knocking around POK for quite some time ... Austin (& Rochester) took the specification ... tweaked it to be 10% faster and use highly reliable commodity drivers, and it shipped as SLA (serial link adapter), dual-simplex ... simultaneous transfer in both directions. POK eventually ships the slower & much more expensive version as ESCON ... but limited to transferring in only one direction at a time (simulating existing parallel IBM channels). In part because ESCON was still simulating half-duplex ... it still had thruput issues because of end-to-end turn-around latencies.

We had been working with LANL, LLNL and SLAC in all three standards bodies. The Austin engineer that did SLA ... started working on a faster 800mbit version ... and we convinced him to instead join the FCS standards work (where he takes over responsibility for editing the FCS standards document). This is circa 1990. Eventually some mainframe channel engineers start attending the FCS meetings attempting to do extremely unnatural things to the standard ... which eventually morphs into FICON (I still have much of the FCS standard mailing list distribution in archive some place).

Part of the hype for a large number of half-duplex mainframe channels was a problem with the 3880 controller and 3090s. Data streaming and 3mbyte/sec transfers (for 3380 disk) were introduced with the 3880 controller. The 3880 was much slower than the 3830 ... so it had a special hardware path for data transfers ... leaving the slow processor for handling commands & the control path. At some point the 3090 engineers realized that the slow 3880 controller would have significantly higher channel busy and they needed to have a whole lot of additional channels (to offset the high 3880 channel busy overhead). The extra channels resulted in requiring an extra manufacturing TCM. There was a joke that the 3090 product was going to bill the 3880 group for the increase in manufacturing costs for the extra channels & TCM.

Base SLA, FCS, and SCI operation was dual-simplex with asynchronous operation (not end-to-end serialization). Commands went down one path ... and that (channel) path was busy only for the duration of the transfer. The "channels" didn't have the half-duplex serialization busy overhead for control operations (with controllers & devices). As a result, asynchronous dual-simplex needed capacity for the raw transfers (but not the enormous amount of excess capacity to handle the half-duplex end-to-end latency for control operations).
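
a crude model of why those half-duplex turnarounds eat so much channel capacity (link rate, block size, and turnaround latency below are all invented numbers, just to show the effect):

# half-duplex channel: busy for the end-to-end control turnarounds AND the data transfer
# dual-simplex: command packets go down the outbound path asynchronously,
#               the link is effectively busy only for the data
link_mbytes_sec = 17.0       # hypothetical media rate
transfer_kbytes = 4.0        # one 4k block per operation
turnaround_ms   = 0.5        # hypothetical end-to-end control/turnaround latency per op

data_ms = transfer_kbytes / link_mbytes_sec        # ~0.24 ms to move the data

print("half-duplex ops/sec :", round(1000.0 / (data_ms + turnaround_ms)))
print("dual-simplex ops/sec:", round(1000.0 / data_ms))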

--
virtualization experience starting Jan1968, online at home since Mar1970

At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 19 Apr, 2011
Subject: At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
Blog: Mainframe Experts
re:
https://www.garlic.com/~lynn/2011f.html#45 At least two decades back, some gurus predicted that mainframes would disappear
also:
http://lnkd.in/mk79ZS

recent mainframe announcements include support for out-of-order execution ... (announcement claims) contributing about a 20% instruction throughput boost (this is a feature that has been seen in other processors for two decades or more).

the issue is that on a cache miss, the instruction stalls and waits for the memory operation ... latency for the memory operation can be hundreds or even thousands of machine cycles. out-of-order operation allows following instructions to be executed asynchronously with the stalled instruction waiting for the cache miss memory operation (assuming a high probability that they don't also have a cache miss and/or don't depend on the results of the stalled instruction).
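
a back-of-envelope way to see where a ~20% boost can come from (miss rate, penalty and overlap fraction are invented):

# toy model: effective cycles-per-instruction with and without out-of-order overlap
base_cpi     = 1.0       # assuming cache hits (hypothetical)
miss_rate    = 0.002     # cache misses per instruction (invented)
miss_penalty = 200       # cycles to memory -- can be hundreds or more
overlap      = 0.6       # fraction of the miss stall hidden by executing independent
                         # following instructions out of order (invented)

in_order_cpi = base_cpi + miss_rate * miss_penalty
ooo_cpi      = base_cpi + miss_rate * miss_penalty * (1.0 - overlap)
print("throughput boost: %.0f%%" % ((in_order_cpi / ooo_cpi - 1.0) * 100))   # ~20%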

the dual-simplex serial channels ... again from at least two decades ago ... did something similar ... but for I/O operations ... in high-activity concurrent workloads the throughput improvements could be significantly higher than the 20% for recent mainframe out-of-order execution (as compared to the traditional mainframe half-duplex i/o paradigm)

note that a major reason for SSH in "811" architecture ("811" was a mad rush to put something together after the failed Future System effort; the name supposedly comes from the Nov78 date on many of the documents), introduced with 3081 ... was the significant I/O redrive serialization latency with operating system interrupt handlers. The issue was that there are multiple queued requests ... all managed by software ... the first queued operation was not started until after the interrupt handler finished processing the current active operation. This interrupt handling/redrive serialization latency was becoming an increasingly significant drag on throughput.

SSH could have multiple queued requests (offloaded into hardware) ... and allow an outboard processor to start the next request asynchronously with interrupt handling of the previous request (eliminating much of the I/O redrive latency). Dual-simplex serial channels tended to accomplish the same thing (in addition to enormously increasing utilization of the media bandwidth) ... by allowing command packets to be asynchronously transmitted down the outgoing path.
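
the same sort of arithmetic for the redrive latency (device service time and software redrive gap are invented numbers):

# ops/sec for one device when the next queued request is only started by software
# after the previous interrupt is processed, vs hardware-queued (SSH-style) requests
service_ms = 3.0     # hypothetical device service time per request
redrive_ms = 1.0     # hypothetical interrupt-to-redrive software latency

print("software redrive:", round(1000.0 / (service_ms + redrive_ms)), "ops/sec")
print("hardware queued :", round(1000.0 / service_ms), "ops/sec")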

We had done something approx. similar to SSH for 2305 fixed-head disks with multiple exposures ... using the multiple exposures to queue up a lot of different requests ... which would execute asynchronously while processing interrupts from previous requests. This allowed achieving sustained throughput very close to the theoretical maximum device transfer rate.

The san jose mainframe disk development group (in bldg 14) had a variety of mainframes for testing development disks. These mainframes were running standalone with dedicated test time being scheduled 7x24. They had once tried using MVS to do concurrent testing in an operating system environment ... but found that with just a single "testcell", MVS had a 15min. MTBF (requiring reboot). I offered to do a bulletproof IOS rewrite that would never fail and support multiple, concurrent, on-demand testing (significantly improving development productivity). misc. past posts getting to play disk engineer in bldgs. 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

When "811" SSH was published ... I tried an absolutely optimized (production) interrupt-handler/IO-redrive pathlength (in software) to see how close I could come to SSH design (that used independent, outboard asynchronous processor). Turns out that SSH was largely justified based on extremely bloated operating system implementation with enormously long elapsed time between interrupt and I/O redrive.

One of the reasons that two decades ago, mainframes were being viewed as a plodding dinosaur ... already dead ... but the signal hadn't yet reached its brain ... was that lots of vendors/products were doing asynchronous (dual-simplex) operations for latency masking/compensation. The base FCS standard two decades ago was asynchronous, dual-simplex 1gbit links ... being able to achieve nearly 2gbit aggregate thruput (concurrent operations in both directions). Something similar was seen in the SCI standard ... with definitions for both asynchronous memory bus operation (between processor cache and memory) as well as I/O operations.

Even IBM had done something similar two decades ago in Hursley with Harrier/9333 ... which was basically SCSI disks ... with SCSI commands packetized and sent asynchronously down Harrier dual-simplex serial links. Harrier (for high concurrent workload environment) sustained significantly higher throughput than the same disks used in conventional half-duplex SCSI I/O operation.

I had tried to get Harrier to evolve into interoperability with FCS (being able to plug into FCS switches ... but operating at a lower transfer rate) ... but instead it evolved into its own (incompatible) specification. Some of the mainframe channel engineers may have been involved ... since they were trying to co-opt FCS and layer a bunch of unnatural things on top of the base standard ... which evolves into FICON (an interoperable harrier on the same FCS switches as "FICON" may have made "FICON" look bad).

--
virtualization experience starting Jan1968, online at home since Mar1970

First 5.25in 1GB drive?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: First 5.25in 1GB drive?
Newsgroups: alt.folklore.computers
Date: Tue, 19 Apr 2011 13:09:57 -0400
re:
https://www.garlic.com/~lynn/2011f.html#22 First 5.25in 1GB drive?
https://www.garlic.com/~lynn/2011f.html#26 First 5.25in 1GB drive?
https://www.garlic.com/~lynn/2011f.html#27 First 5.25in 1GB drive?

and some '91 history ... from long ago and far away

Date: 14 Feb 91 09:03:07 GMT
To: wheeler
Subject: Fujitsu Announces 2 GB, Ave. Positioning 11ms, 5.25in DASD

On February 14, 1991, Fujitsu announced and began OEM marketing four models of M2652 series DASD, a new high-capacity 2GB, 5.25in DASD family. This is the highest capacity available on a 5.25in devices from any vendor.

The average access time is 11 msec. The average latency is 5.6 msec (5,600 RPM). This was achieved by using thin film heads, thin film disks and a light-weight actuator. The data transfer rates are 4.758 MB/sec (IPI-2), 10 MB/sec (SCSI Synchronous, W/256KB buffer) and 3 MB/sec (SCSI Asynchronous).

The M2652 will be shipped from June 1991. Samples will be priced at 1,000 KYen. Fujitsu plans to sell 500,000 units of these four models (total) over three years.


          Fujitsu 5.25-Inch HDD Specifications

+-----------------------+-----------------------------+
|        Model          |         M2652 Series        |
|-----------------------+-----------------------------|
| Sample Price (Yen)    |         1,000 KYen          |
|   (dollar @ 130Y/$)   |           $7,692            |
| Interface             |        IPI-2, SCSI          |
|-----------------------+-----------------------------|
| Capacity              |            2 GB             |
| Data Rate             |        4.758 MB/sec         |
| - SCSI Synch(256KB BF)|          10  MB/sec         |
| - SCSI Asynch         |           3  MB/sec         |
| Avg Positioning Time  |           11 msec           |
| RPM                   |           5,400             |
| Latency               |           5.6 ms            |
|-----------------------+-----------------------------|
| Bytes per Track       |          52,864             |
| Cylinders             |           1,893             |
| No of Disks           |             12              |
| No of Heads  R/W + SR |           20 + 1            |
| Dimension(IPI, SCSI-D)|    146mm x 220mm x  83mm    |
|  - WxDxH (Others)     |    146mm x 203mm x  83mm    |
| Weight                |             4 Kg            |
| Power                 |          +12V, +5V          |
+-----------------------+-----------------------------+


... snip ... top of post, old email index

for random other piece of info ...
Date: 19 Mar 91 17:59:57 GMT
To: wheeler
Subject: PC OPERATING SYSTEMS, 1990

According to 3/15 NY Times "Business Day" (Source: Dataquest) 1990 unit shipments of personal computer operating systems in U.S. were as follows:


SYSTEM         QTY

MS-DOS     14,021,000 (75%?)
Windows     2,039,000
MacIntosh   1,946,000
UNIX          399,000
OS/2          300,000 (1.6%?)

... snip ... top of post, old email index

a little more disk:
Date: 07 Aug 91 15:33:44 GMT
To: wheeler
Subject: Toshiba 3.5in 1-Gigabyte drive (Price=$2000)

"Toshiba begins US Output of one-Gigabyte Disk Drive: (Price = $2000)

"Toshiba Corp. said it has begun US Production of the first 3.5in disk drives with memory capacity of one gigabyte, or roughly one billion characters of information.


... snip ... top of post, old email index

and then there is
Date: 03 Sep 91 07:38:55 GMT
To: wheeler
Subject: Glass coated ceramic disks from KYOCERA, Japan

KYOCERA DEVELOPS GLASS COATED CERAMIC DISKS
-------------------------------------------

According to Nikkei Sangyo Shimbun (one of industrial newspaper published by NIKKEI), 30th of August, Kyocera announced glass coated ceramic substrates for hard disks.

This new substrate makes possible to record 60MB on 2.5 inch disk, since less roughness than aluminum/glass substrates are now being marketed. Sumitomo Tokushu Kinzoku Co. (Sumitomo Special Metal Co.), and others are now developing the ceramic substrates also, however Kyocera is the first manufacturer who made the sample disks ready to ship.

Kyocera will ship these ceramic disks to the HDD manufacturer of U.S. for evaluation, mass production will be started based upon the evaluation results. (|Yuki Note| No date reported when scheduled to start)

The construction of substrate is coated 25 micron thickness of glass on both surfaces, and total thickness of disk is 0.635 mm (0.025 inches).

The mechanical strength of Alumina-Ceramic is four(4) times of Aluminum or Glass disks, and easy to be polished. This unbreakable disks would also contribute to reduce the defective at the assembly processes. Ready to ship samples at 1.8 and 3.5 inches size. The price of sample is 2,000 yen to 3,000 yen ($14.81 to $22.22 US, 135 yen/Dollar), and it'll be approx. 1,000 yen ($7.41 for piece) beyond starting of mass-production.

With Aluminum substrates existing, 0.1 micron flying height and 40 MB of storage capacity at 2.5 inches disk, however with this new substrate 0.05 micron flying height would be possible. At the present, Aluminum substrate is the majority of HDD. And, Asahi Glass Co., and HOYA are producing the glass substrates.


... snip ... top of post, old email index

--
virtualization experience starting Jan1968, online at home since Mar1970

A brief history of CMS/XA, part 1

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Date: 19 Apr, 2011
Subject: A brief history of CMS/XA, part 1
Blog: IBM Historic Computing
also posts to (linkedin) z/VM
http://lnkd.in/MN4vEx

aka "811" was name for architecture enhancements to 370 ... started appearing with 3081:

Date: 07/16/80 19:42:07
From: wheeler

well, where do things stand ??? there has been no feedback on friday meeting and/or current status. I did talk to XXXXX and he is putting me on the distribution list for VM/811 specs. YYYYY (somebody in POK) read all four or five papers I sent and wanted them classified "IBM Confidential". I guess my management didn't understand initially about such things and thot they had to go along. There is nothing in them that warrants such a classification (& I know of several papers of the same nature that were published w/o any classification). Hopefully they will get approval in time for Atlanta Share meeting and can be distributed at that time. CMS file system committee wants to see the documentation on PAM. Baybunch had discussion and presentations about VMCF/IUCV and service virtual machines. Longwait problem came up, fix that ZZZZZ did to DMKDSP to check for outstanding SENDR requests before setting long wait isn't going out. FE is closing APAR as things currently work the way they are designed to. Tymshare also made presentation on simple modification to CMS which included simple multi-tasking supervisor plus high level VMCF support (high enuf so implementation could be VMCF or IUCV or ... ). Allows you to write all sorts of service virtual machine functions at very high level (even in PLI) and be able to run it multi-task. Code amounts to 20-50 lines of modification to existing CMS nucleus plus new resident module or two (total lines somewhere between 500 and 1500). Multi-tasking is general enuf that can be used even if you aren't using VMCF functions.


... snip ... top of post, old email index

and:

Date: 12/22/80 17:54
To: wheeler

Just send you a taskforce report covering CMS for the 811 hardware. Please do not discuss it with anyone outside Research division or CSC until after we take it to Pok. (who asked for it). We plan to go up on the 23 of December.

Please send me any comments you might have.


... snip ... top of post, old email index

past posts in this thread:
https://www.garlic.com/~lynn/2011b.html#19 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#20 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#23 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#29 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#33 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#47 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#52 A brief history of CMS/XA, part 1

--
virtualization experience starting Jan1968, online at home since Mar1970

Dyadic vs AP: Was "CPU utilization/forecasting"

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Dyadic vs AP: Was "CPU utilization/forecasting"
Newsgroups: bit.listserv.ibm-main
Date: Wed, 20 Apr 2011 10:04:53 -0400
martin_packer@UK.IBM.COM (Martin Packer) writes:
Ron, care to remind us of the modelling difference? It's been a while. :-)

360 & 370 dual-processors shared memory but each processors had its own dedicated channels ... and the configuration could be split and run as two separate processors.

370AP ... was a two-processor configuration where only one of the processors had channels ... the second processor purely did compute-bound work. it was less expensive and could be applicable to more compute-intensive work. it was also applicable in a large loosely-coupled environment when running out of controller interfaces for all the channels ... aka with a four-channel dasd controller with string-switch (each disk connected to two controllers) ... giving 8 channel paths to each disk ... it would be possible to have eight two-processor complexes (for 16 processors total).

DYADIC was term introduced with 3081 ... where it wasn't possible to split the configuration and run as two separate independent processors (wanted to draw distinction between past 360/370 multiprocessor that could be split and run independently and the 3081 which couldn't be split). 3081 had both processors being able to address all channels and also introduced 31-bit virtual addressing.

trivia ... 360/67 had 32-bit virtual addressing, all processors could address all channels *AND* the configuration could be split into independently running single processors. 360/67 was designed for a four-processor configuration, but I know of only a couple three-processor configurations that were actually built (and no four-processor configurations) ... all the rest of the multiprocessor configurations were simply two-processor.

other 3081 trivia ... 370 (& 3081) dual-processors slowed the machine cycle down by ten percent to help with multiprocessor cache interaction ... so a two-processor machine started out at only 1.8 times a single processor machine. Multiprocessor software and actual multiprocessor cache interactions tended to add additional overhead, so that a dual-processor tended to have 1.4-1.5 times the throughput of a single processor.
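
the arithmetic behind those numbers (the software/cache overhead figure is just an assumption picked to land in the quoted range):

# two-processor 370/3081 throughput relative to a single unslowed processor
cycle_slowdown = 0.10                        # machine cycle slowed 10% for cache interaction
hardware_ratio = 2 * (1.0 - cycle_slowdown)  # 1.8x before any software effects

mp_overhead = 0.20    # assumed kernel locking + real cache interference (illustrative)
print("hardware starting point: %.1fx" % hardware_ratio)                        # 1.8x
print("typical effective      : %.2fx" % (hardware_ratio * (1 - mp_overhead)))  # ~1.44x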

The 3081 was originally never intended to have a single-processor version ... but largely because ACP/TPF didn't have multiprocessor support, there was eventually a 3083 introduced. The easiest would have been to remove the 2nd processor (processor1) from the 3081 box ... however, processor0 was at the top of the box and processor1 was in the middle of the box ... so removing it would have left the box dangerously top heavy.

eventually the 3083 was introduced with a single processor ... it was possible to turn off the ten percent machine cycle slowdown (done for multiprocessor cache interaction) ... and eventually there was a special microcode load tuned for the ACP/TPF workloads that tended to be more I/O intensive.

in the late 70s, the consolidated internal online US HONE operation (US HONE and the various HONE clones provided online world-wide sales & marketing support) was the largest single-system operation in the world. It was a large loosely-coupled operation with "AP" multiprocessors ... most of the sales&marketing applications were implemented in APL and the workload was extremely compute intensive. I provided them with their initial multiprocessor support ... highly optimized kernel multiprocessor pathlengths and games played to improve cache hit locality ... could get slightly better than twice a single processor (i.e. the cache games and the optimized multiprocessor pathlengths offset the machine running at only 1.8 times a single processor). misc. past posts mentioning multiprocessor support (&/or compare&swap instruction)
https://www.garlic.com/~lynn/subtopic.html#smp

as mentioned in previous post
https://www.garlic.com/~lynn/2011f.html#41 CPU utilization/forecasting

the science center had done a lot of the early work in performance monitoring, reporting, simulation, modeling, workload&configuration profiling ... that evolves into capacity planning. misc. past posts mentioning science center
https://www.garlic.com/~lynn/subtopic.html#545tech

One of the APL models was packaged in the mid-70s as the performance predictor on HONE ... so that sales&marketing could take customer workload&configuration specification and ask "what-if" questions about workload &/or configuration changes. another version of the "model" was modified and used to decide (online) workload balancing across the loosely-coupled configuration (which processor complex would new logon be directed to, part of single-system operation). misc. past posts mentioning HONE
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

Dyadic vs AP: Was "CPU utilization/forecasting"

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Dyadic vs AP: Was "CPU utilization/forecasting"
Newsgroups: bit.listserv.ibm-main
Date: 20 Apr 2011 12:09:13 -0700
lynn@GARLIC.COM (Anne & Lynn Wheeler) writes:
DYADIC was term introduced with 3081 ... where it wasn't possible to split the configuration and run as two separate independent processors (wanted to draw distinction between past 360/370 multiprocessor that could be split and run independently and the 3081 which couldn't be split). 3081 had both processors being able to address all channels and also introduced 31-bit virtual addressing.

re:
https://www.garlic.com/~lynn/2011f.html#41 CPU utilization/forecasting
https://www.garlic.com/~lynn/2011f.html#49 Dyadic vs AP: Was "CPU utilization/forecasting"

during the FS period ... there was lots of internal politics going on and 370 product efforts were shut down (as being possibly competitive). When FS died, there was a mad rush to get things back into the 370 product pipelines (the distraction of FS is claimed to have also allowed clone processors to gain a market foothold). ... this discusses some of the FS effort, the internal politics and the dark shadow that the FS failure cast over the corporation for decades
https://people.computing.clemson.edu/~mark/fs.html

part of the mad rush was to get out the q&d 303x. The integrated channel microcode from the 158 was split off into the 303x "channel director" (a 158 engine with only the integrated channel microcode and no 370 microcode). A 3031 was a 158 engine (& new panels) with just the 370 microcode (and no integrated channel microcode) and a 2nd 158 engine (channel director) with just the integrated channel microcode. A 3032 was a 168-3 with new panels and 303x channel director(s) aka 158 engines. A 3033 started out being 168-3 logic remapped to 20% faster chips plus 303x channel director(s) ... before the 3033 shipped there was some redoing of the logic (the chips were 20% faster but also had ten times as many circuits per chip ... which initially went unused) ... which got the 3033 up to about 50% faster than the 168-3.

In parallel with 303x ... there was work on "811" architecture (supposedly named for the nov78 date on lots of the documents) and the 3081 (an FS machine with just 370 emulation microcode). Some of this is discussed in this internal memo reference from the period:
http://www.jfsowa.com/computer/memo125.htm

A number of 3033up installations were starting to feel significantly memory/storage constrained, being limited to a maximum of 16mbytes of real storage. It was much worse for 3033mp since they were also limited to the same 16mbyte real storage constraint.

A hack was done for 3033 to allow more real storage ... even tho instruction addressing was limited to 24bit/16mbyte addressing. The 370 page table entry had two unused bits ... and the 3033 hack was to re-assign the two unused bits, prepending them to the page number, allowing specification of up to 2**14 4kbyte pages (aka 26bit/64mbyte). Real & virtual instruction addressing was still limited to 16mbytes ... but a page table entry could translate to real storage up to 64mbytes. I/O was then done with IDALs ... which already had a 31bit field.
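
the counting behind that (a sketch of the arithmetic only, not the actual PTE layout):

page_size      = 4096
pte_frame_bits = 12      # enough for 2**12 4k frames = 16mbytes (24-bit real addressing)
extra_bits     = 2       # the two previously-unused PTE bits, prepended to the page number

print("original max real storage :", 2**pte_frame_bits * page_size // 2**20, "MB")                 # 16
print("3033-hack max real storage:", 2**(pte_frame_bits + extra_bits) * page_size // 2**20, "MB")   # 64
# i.e. 2**14 4kbyte pages = 26-bit real addressing, while instruction addressing
# (and any single virtual address space) stays at 24-bit/16mbytes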

MVS was (also) having severe problems at larger 3033 installations. The transition from OS/VS2 svs to OS/VS2 mvs ... involved mapping the 8mbyte MVS kernel image into every 16mbyte application virtual address space. In order to support subsystems that were now (also) in separate virtual address spaces working with the application address space ... there was a "common segment" in every virtual address space, which started out as 1mbyte (applications being able to place parameters in the CSA and use a pointer-passing API in subsystem calls). For large 3033 MVS installations, CSAs were hitting 4-5mbytes and threatening to grow to 5-6mbytes (leaving only 2mbytes for the application). An internal shop that was a large multiple-machine MVS operation had a 7mbyte fortran chip design application. The MVS operation was carefully configured to keep CSA to 1mbyte ... with constant ongoing activity keeping the chip design Fortran application from exceeding 7mbytes. They were faced with having to convert all their machines to vm/cms ... since CMS could allow the application to have nearly all of the 16mbyte virtual address space.
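
the address-space arithmetic that was squeezing them (sizes taken from the description above):

# 24-bit MVS virtual address space budget (mbytes)
total_mb  = 16
kernel_mb = 8          # MVS kernel image mapped into every address space
for csa_mb in (1, 4, 5, 6):
    print("CSA=%dMB -> %dMB left for the application" % (csa_mb, total_mb - kernel_mb - csa_mb))
# the 7mbyte fortran chip-design application only fits while CSA is held to 1mbyte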

Early 3081Ds were shipped w/o the "811" extensions (vanilla 370 w/o 370-xa) and were supposedly slightly faster than the 3033 ... however, there were a number of benchmarks that had the 3081D 20% slower than the 3033. The 3081K came out with double the processor cache size ... supposedly 50% faster than the 3033 ... but some benchmarks came in only 5% faster than the 3033.

The internal memo (mentioned above) goes into the enormous amount of circuits/hardware in the 3081 compared to its performance (especially when stacked up against clone processors).

tying two 3081s into a 4-way 3084 really drove up the multiprocessor cache interference (invalidate signals coming from three other caches rather than one other cache). Both VM/HPO and MVS had kernel "cache sensitivity" work that involved carefully aligning kernel control areas & storage allocations on cache line boundaries (and making them multiples of cache lines). the issue was to prevent two different storage areas (which might be concurrently used by different processors) from occupying different parts of the same cache line (resulting in the cache line thrashing ... being moved repeatedly between different caches). This change supposedly gained greater than 5% aggregate system throughput.
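
A minimal sketch of the cache-line alignment idea (modern C, illustrative only, not the actual VM/HPO or MVS kernel code; the 128-byte line size and structure names are assumptions):

#include <stdalign.h>
#include <stdio.h>

#define CACHE_LINE 128   /* assumed line size, illustrative only */

/* each processor's control area starts on its own cache-line boundary;
   sizeof() rounds up to a multiple of the struct's alignment, so two
   processors' areas can never occupy parts of the same line            */
struct per_cpu_ctl {
    alignas(CACHE_LINE) unsigned long lock;
    unsigned long counters[8];
};

static struct per_cpu_ctl cpu_ctl[4];   /* e.g. one per CPU of a 4-way 3084 */

int main(void)
{
    printf("control area size: %zu bytes\n", sizeof(struct per_cpu_ctl));
    for (int i = 0; i < 4; i++)
        printf("cpu %d control area at %p\n", i, (void *)&cpu_ctl[i]);
    return 0;
}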

--
virtualization experience starting Jan1968, online at home since Mar1970

US HONE Datacenter consolidation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: US HONE Datacenter consolidation
Newsgroups: alt.folklore.computers
Date: Thu, 21 Apr 2011 09:14:01 -0400
The US HONE datacenters were consolidated in Palo Alto in the mid-70s. At the time, there was open land next door. Now that land has the bldg. where the president visited yesterday. The old US HONE datacenter bldg. now has a different occupant.

HONE had originated in the wake of the 23Jun69 unbundling announcements to give the young system engineers in branch offices a way of getting their "hands-on" operating practice (in virtual machines). Originally it was some number of 360/67 datacenters running (virtual machine) cp67.

It morphed into providing online APL-based applications supporting sales&marketing, first with cp67 cms\apl ... then transitioning to vm370 apl\cms (and the se "hands-on" activity dwindled away to nothing).

misc. past posts mentioning HONE
https://www.garlic.com/~lynn/subtopic.html#hone

misc. past posts mentioning 23jun69 unbundling
https://www.garlic.com/~lynn/submain.html#unbundle

--
virtualization experience starting Jan1968, online at home since Mar1970

Are Americans serious about dealing with money laundering and the drug cartels?

From: lynn@garlic.com (Lynn Wheeler)
Date: 23 Apr, 2011
Subject: Are Americans serious about dealing with money laundering and the drug cartels?
Blog: Financial Crime Risk, Fraud and Security
How a big US bank laundered billions from Mexico's murderous drug gangs
http://www.guardian.co.uk/world/2011/apr/03/us-bank-mexico-drug-gangs

from above:
As the violence spread, billions of dollars of cartel cash began to seep into the global financial system. But a special investigation by the Observer reveals how the increasingly frantic warnings

... snip ...

post from early jan. in (financial crime risk, fraud and security) "What do you think about fraud prevention in governments?"
https://www.garlic.com/~lynn/2011.html#50 What do you think about fraud prevention in the governments?

part of the reference was that, with the gov. bending over backwards to do everything possible to keep the too-big-to-fail institutions in business ... when it comes to a little thing like drug cartel money laundering ... the gov. has just been asking them to please stop.

Too Big to Jail - How Big Banks Are Turning Mexico Into Colombia
http://www.taipanpublishinggroup.com/tpg/taipan-daily/taipan-daily-080410.html
Banks Financing Mexico Gangs Admitted in Wells Fargo Deal
http://www.bloomberg.com/news/2010-06-29/banks-financing-mexico-s-drug-cartels-admitted-in-wells-fargo-s-u-s-deal.html
Wall Street Is Laundering Drug Money And Getting Away With It
http://www.huffingtonpost.com/zach-carter/megabanks-are-laundering_b_645885.html?show_comment_id=53702542
Banks Financing Mexico Drug Gangs Admitted in Wells Fargo Deal
http://www.sfgate.com/cgi-bin/article.cgi?f=/g/a/2010/06/28/bloomberg1376-L4QPS90UQVI901-6UNA840IM91QJGPBLBFL79TRP1.DTL
How banks aided drug traffic
http://www.charlotteobserver.com/2010/07/04/1542567/how-banks-aided-drug-traffic.html
The Banksters Laundered Mexican Cartel Drug Money
http://www.economicpopulist.org/content/banksters-laundered-mexican-cartel-drug-money
Money Laundering and the Global Drug Trade are Fueled by the Capitalist Elites
http://www.globalresearch.ca/index.php?context=va&aid=20210
Wall Street Is Laundering Drug Money and Getting Away with It
http://www.alternet.org/economy/147564/wall_street_is_laundering_drug_money_and_getting_away_with_it/
Money Laundering and the Global Drug Trade are Fueled by the Capitalist Elites
http://dandelionsalad.wordpress.com/2010/07/23/money-laundering-and-the-global-drug-trade-are-fueled-by-the-capitalist-elites-by-tom-burghardt/
Global Illicit Drugs Trade and the Financial Elite
http://www.pacificfreepress.com/news/1/6650-global-illicit-drugs-trade-and-the-financial-elite.html
Wall Street Is Laundering Drug Money And Getting Away With It
http://institute.ourfuture.org/blog-entry/2010072814/megabanks-are-laundering-drug-money-and-getting-away-it

Money laundering does carry a somewhat heavier penalty ... including shutting down the financial institution and putting the executives in jail. However, when too-big-to-fail institutions are involved ... they seem reluctant to impose such measures.

There have been claims that china's measures (after throwing off the British yoke of imposed drug selling) are (in the long run) a lot more humane (and effective) in dealing with the problem.

A counter scenario (at least with respect to the SEC) was the congressional testimony by the person that tried for a decade to get the SEC to do something about Madoff ... or the GAO reports reviewing public company financial filings showing a big uptick in fraudulent filings ... even after SOX (which theoretically had significantly increased penalties, including executives going to jail) ... recent quote off the web: Enron was a dry run and it worked so well it has become institutionalized.

Corrupt Bank Oversight Is Creating New Immoral Hazard
http://blogs.forbes.com/neilweinberg/2011/04/14/corrupt-bank-oversight-is-creating-new-immoral-hazard/
In Financial Crisis, No Prosecutions of Top Figures
http://www.nytimes.com/2011/04/14/business/14prosecute.html?_r=2&hp

related to "corrupt bank oversight":

The Real Housewives of Wall Street; Why is the Federal Reserve forking over $220 million in bailout money to the wives of two Morgan Stanley bigwigs?
http://www.rollingstone.com/politics/news/the-real-housewives-of-wall-street-look-whos-cashing-in-on-the-bailout-20110411?page=1

In the case of the GAO review of public company financial filings ... and the testimony of the person that tried for a decade to get the SEC to do something about Madoff ... it was lack of regulation enforcement. The person that had tried for a decade to get the SEC to do something about Madoff also mentioned that the SEC was all lawyers with nearly nobody having forensic accounting experience (this was also somewhat Cramer's comment that there was little risk in illegal naked short selling because there wasn't anybody at the SEC that understood it). In one of the recent news references upstream ... it may also have been pressure by members of congress (who possibly would have been implicated).

However, there is also some amount of deregulation (by congress) of some number of things.

Phil Gramm's Enron Favor
https://web.archive.org/web/20080711114839/http://www.villagevoice.com/2002-01-15/news/phil-gramm-s-enron-favor/

from above:
A few days after she got the ball rolling on the exemption, Wendy Gramm resigned from the commission. Enron soon appointed her to its board of directors, where she served on the audit committee, which oversees the inner financial workings of the corporation. For this, the company paid her between $915,000 and $1.85 million in stocks and dividends, as much as $50,000 in annual salary, and $176,000 in attendance fees,

... snip ...

People to Blame for the Financial Crisis; Phil Gramm
http://content.time.com/time/specials/packages/article/0,28804,1877351_1877350_1877330,00.html

from above:
He played a leading role in writing and pushing through Congress the 1999 repeal of the Depression-era Glass-Steagall Act, which separated commercial banks from Wall Street. He also inserted a key provision into the 2000 Commodity Futures Modernization Act that exempted over-the-counter derivatives like credit-default swaps from regulation by the Commodity Futures Trading Commission. Credit-default swaps took down AIG, which has cost the U.S. $150 billion thus far.

... snip ...

Gramm and the 'Enron Loophole'
http://www.nytimes.com/2008/11/17/business/17grammside.html

from above:
Enron was a major contributor to Mr. Gramm's political campaigns, and Mr. Gramm's wife, Wendy, served on the Enron board, which she joined after stepping down as chairwoman of the Commodity Futures Trading Commission.

... snip ...

Greenspan Slept as Off-Books Debt Escaped Scrutiny
http://www.bloomberg.com/apps/news?pid=20601109&refer=home&sid=aYJZOB_gZi0I

from above:
That same year Greenspan, Treasury Secretary Robert Rubin and SEC Chairman Arthur Levitt opposed an attempt by Brooksley Born, head of the Commodity Futures Trading Commission, to study regulating over-the-counter derivatives. In 2000, Congress passed a law keeping them unregulated.

... snip ...

Gramm's wife apparently was put in fairly quickly as Born's replacement ... a temporary stop-gap until Gramm got the law passed that exempted regulation ... before she then left to join Enron (and the Enron audit committee).

recent quote seen on the web: Enron was a dry run and it worked so well it has become institutionalized

--
virtualization experience starting Jan1968, online at home since Mar1970

The first personal computer (PC)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The first personal computer (PC)
Newsgroups: alt.folklore.computers
Date: Sat, 23 Apr 2011 10:00:26 -0400
hda <agent33@xs4all.nl_invalid> writes:
"Clattering" may be all about launching unwanted applications or multiple instances of the same application or coordination problems like with "Double Click".

My observation of older people is they have coordination problems with positioning, executing Point & Click to select, then "Double Click" to launch. Also they expect immediate visual response ( < 200 ms ) to their actuation while some applications do take up to 10 s in response time (showing animated pointer icon).


in the late 70s, we got a couple hacks to 3277 terminal.

nominal mainframe 327x terminals were half-duplex and had annoying habit of locking the keyboard if you attempted to press a key at the same time the system was trying to write to the screen (which then required stopping and hitting reset to unlock the keyboard).

there was a small fifo box built ... unplug the keyboard from the head, plug the fifo box into the head, and then plug the keyboard into the fifo box ... and the problem was eliminated.

the repeat function (like holding a cursor-movement key down) was also extremely slow ... both the delay before starting repeat and the repeat rate. open up the keyboard and solder in a couple of resistors ... the choice of values selected the delay interval before starting repeat and the repeat rate.

the only problem was that a .1/.1 sec. value ... would get ahead of the screen update ... using cursor positioning ... say holding down the cursor-forward key ... the cursor would appear to coast for some distance after releasing. It took a little bit of practice to get used to the timing of releasing a held key ... and have the cursor stop at the correct position.

then for the next generation 3274/3278 ... they moved quite a bit of the electronics from the head back into the 3274 controller ... eliminating any ability to do (human factors) hacks on the terminal. complaining to the plant ... they eventually came back that the 3278 wasn't designed for interactive computing ... it was designed for data entry (aka a fancy keypunch).

moving all the electronics back to the controller drove up the chatter over the coax between controller & terminal ... as well as the latency ... terminal response for the 3278 was significantly slower than the 3277 (over and above any system response). later, with the IBM/PC and terminal emulation ... the upload/download rate for an emulated 3277 was three times faster than for an emulated 3278 (because of the difference in coax chatter and latency). misc. past posts mentioning terminal emulation
https://www.garlic.com/~lynn/subnetwork.html#terminal

old post with some (real) 3277/3278 comparisons
https://www.garlic.com/~lynn/2001m.html#19 3270 protocol

and whether or not it was possible to achieve .1 second response.

recent mainframe related postings on the subject:
https://www.garlic.com/~lynn/2011d.html#53 3270 Terminal
https://www.garlic.com/~lynn/2011e.html#32 SNA/VTAM Misinformation
https://www.garlic.com/~lynn/2011e.html#94 coax (3174) throughput
https://www.garlic.com/~lynn/2011f.html#0 coax (3174) throughput
https://www.garlic.com/~lynn/2011f.html#33 At least two decades back, some gurus predicted that mainframes would disappear

--
virtualization experience starting Jan1968, online at home since Mar1970

At least two decades back, some gurus predicted that mainframes would disappear

From: lynn@garlic.com (Lynn Wheeler)
Date: 23 Apr, 2011
Subject: At least two decades back, some gurus predicted that mainframes would disappear
Blog: Greater IBM
re:
http://lnkd.in/mk79ZS

... but so many died off that they've come close to becoming an endangered species. at the height of the genocide there were internal references to "would the last person to leave POK, please turn off the lights" (20yrs ago was when the company went into the red)

It was only a few years prior to going into the red ... that top executives were saying the "booming" mainframe business was going to double company revenue from $60B/yr to $120B/yr ... and they had a huge bldg. program to double (mainframe) manufacturing capacity ... however, the handwriting was already on the wall (disparity between the massive internal bldg program and the direction things were heading)

also see the upthread reference to the senior disk engineer at the internal communication group world-wide annual conference (towards the end of the manufacturing "doubling" bldg program) opening the talk with the statement that the communication group was going to be responsible for the demise of the disk division (because of the stranglehold that the communication group had on the datacenter). this happened just before the company headed into the red

past posts in thread:
https://www.garlic.com/~lynn/2011e.html#15 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011e.html#16 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011e.html#17 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011e.html#19 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011f.html#32 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011f.html#33 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011f.html#35 At least two decades back, some gurus predicted that mainframes would disappear
https://www.garlic.com/~lynn/2011f.html#37 At least two decades back, some gurus predicted that mainframes would disappear
https://www.garlic.com/~lynn/2011f.html#39 At least two decades back, some gurus predicted that mainframes would disappear
https://www.garlic.com/~lynn/2011f.html#42 At least two decades back, some gurus predicted that mainframes would disappear
https://www.garlic.com/~lynn/2011f.html#45 At least two decades back, some gurus predicted that mainframes would disappear
https://www.garlic.com/~lynn/2011f.html#46 At least two decades back, some gurus predicted that mainframes would disappear

--
virtualization experience starting Jan1968, online at home since Mar1970

Are Americans serious about dealing with money laundering and the drug cartels?

From: lynn@garlic.com (Lynn Wheeler)
Date: 23 Apr, 2011
Subject: Are Americans serious about dealing with money laundering and the drug cartels?
Blog: Financial Crime Risk, Fraud and Security
re:
https://www.garlic.com/~lynn/2011f.html#54 Are Americans serious about dealing with money laundering and the drug cartels?

double check who became ceo & chairman (of Citi) and brought in a lot of his people ... independent of what the company was called ... something similar happened to BofA (jokes that neither the original Citi nor the original BofA still exist).

The rhetoric on the floor of congress regarding GLBA was that its primary purpose was that institutions that were already banks got to remain banks ... but if you weren't already a bank, you didn't get to become a bank (aka bank modernization act) ... specifically calling out walmart and m'soft.

walmart then wanted to get one of the existing ILC charters ... similar to what amex got and a number of car companies (for making car loans). walmart's purpose was to become its own "acquiring" institution (to save that part of card acquiring transaction fees). Note that walmart accounts for 25-30% of all retail store transactions ... which would have been a big hit to walmart's acquiring institution (one of the too-big-to-fail operations) ... they managed to rally the community banks to lobby congress against walmart obtaining ILC (somehow that major hit to revenue of a too-big-to-fail institution would impact community banks).

roll forward to the bailouts ... the majority coming from the federal reserve behind the scenes (some 20+k pages recently released only because forced by court order) ... but only for depository institutions with bank charters. A few of the wall street too-big-to-fail institutions didn't already have bank charters ... so the federal reserve just gave them bank charters (which should have been in violation of glba).

we had been tangentially involved in the cal. state data breach notification legislation ... having been brought in to help wordsmith the cal. state electronic signature legislation. several of the participants in electronic signature were also heavily into privacy and consumer issues. they had done large, in-depth consumer surveys and found #1 issue was identity theft ... namely account fraud because of data breaches. At the time little or nothing was being done about the breaches (the resulting fraud was against the consumers and not the institutions having the breaches). There was some hope that the resulting publicity from the breach notifications would motivate institutions to start doing something about the fraud/breaches.

these organizations were also in the process of doing a cal. state personal information "opt-in" legislation (i.e. institutions can only use and share personal information when the person explicitly authorizes it). A federal pre-emption provision was also thrown into GLBA for "opt-out" ... i.e. institutions can use and share personal information as long as the person doesn't call up and complain.

A few years ago, I was at a national annual privacy conference in WASH DC which had a panel discussion with the FTC commissioners. somebody got up in the back of the room and asked if any of them were going to do anything about GLBA "opt-out". He said he was involved in operations of call centers used by many of the financial institutions and "knew" that the people answering the "opt-out" phone lines weren't provided any means of recording who called (and wanted to "opt-out" of privacy sharing) ... aka with "opt-out" they need recorded proof that the person objected ... while with "opt-in" they need recorded proof the person agreed. The "opt-out/opt-in" issue has recently also come up with regard to some of the social networking sites (an instance of both regulatory "change" as well as failing to enforce regulation)

--
virtualization experience starting Jan1968, online at home since Mar1970

Drum Memory with small Core Memory?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Drum Memory with small Core Memory?
Newsgroups: alt.folklore.computers, comp.arch
Date: Sat, 23 Apr 2011 15:01:46 -0400
"Dave Wade" <dave.g4ugm@gmail.com> writes:
Many computers had a paging drum, but as others have said reading and writing core takes a lot of electronics. Take a look at this:-
http://history.cs.ncl.ac.uk/anniversaries/40th/images/ibm360_672/21.html


360/67 was supposedly for tss/360 ... but tss/360 had lots of problems. as a result there were several other systems that utilized the 360/67 virtual memory ... orvyl at stanford, mts at michigan, cp67 at the science center ... misc. past posts mentioning the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

the science center ... had started out by doing hardware modifications to a 360/40 to support virtual memory (they had originally tried to get a 360/50 ... but those were in scarce supply because so many were going to the FAA for the ATC system) ... and did (virtual machine) cp40. when the 360/67 became available, they replaced the 360/40 and cp40 morphed into cp67.

360/67 came with 2301 "paging drum" ... 4mbyte capacity. it was similar to the 2303 drum ... but read/wrote four heads in parallel ... so had four times the transfer rate.

three people came out to the univ in jan68 to install cp67 (our 360/67 ran a little tss/360 on weekends ... but the rest of the time operated with os/360 ... not making use of virtual memory). they used the tss/360 drum format ... which had nine 4kbyte pages across two tracks (one of the pages spanning the end of one track and the start of the next). however, the i/o queue was managed FIFO with a single request at a time ... and the 2301 peaked at 80 page transfers/sec. I redid the i/o stuff ... ordered seek queuing (for moveable-arm disk) and ordered chained requests (chaining queued requests at the same position ... applicable to queued requests for the same disk cyl and/or the whole drum). The change resulted in nearly doubling effective disk activity, "graceful degradation" as load increased ... and allowed the 2301 to peak at nearly 300 4k transfers/sec.
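
A minimal sketch of the queuing change (made-up C structures, not the cp67 code itself): sort the pending page requests into rotational order so that one chained operation can transfer several pages per revolution instead of one page per I/O:

#include <stdio.h>
#include <stdlib.h>

struct page_req {
    unsigned slot;    /* rotational position (0..8 in the nine-page format) */
    unsigned frame;   /* real-storage frame for the transfer                */
};

/* order requests by rotational position */
static int by_slot(const void *a, const void *b)
{
    const struct page_req *x = a, *y = b;
    return (int)x->slot - (int)y->slot;
}

int main(void)
{
    /* requests as they arrived (FIFO order) */
    struct page_req q[] = { {7, 10}, {2, 11}, {5, 12}, {0, 13}, {8, 14} };
    size_t n = sizeof q / sizeof q[0];

    /* sort into rotational order; the real code would then chain these
       into a single channel program instead of one I/O per request      */
    qsort(q, n, sizeof q[0], by_slot);

    printf("one chained operation, slots in rotational order:");
    for (size_t i = 0; i < n; i++)
        printf(" %u", q[i].slot);
    printf("\n");
    return 0;
}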

univ. originally had 512kbyte "core" 360/67 ... but it got upgraded to 768kbyte 360/67 fairly early because tss/360 was becoming so bloated (and then they still didn't use tss/360).

there was a tss/360 benchmark involving a single-processor 360/67 with 1mbyte of memory and a two-processor 360/67 with 2mbytes of memory, where the 2-processor version had 3.8 times the throughput of the 1-processor. Both benchmarks were still quite dismal ... but somebody in the TSS camp tried to hype pixie dust that tss/360 could magically increase thruput (nearly four times the thruput with only twice the resources). The actual situation was that tss/360 kernel real storage requirements were so bloated that in 1mbyte of real memory ... there was very little real memory left for applications. In the 2-processor, 2mbyte configuration, kernel real storage didn't double ... so there was possibly four times the real storage left for applications, resulting in 3.8 times the thruput (because of the significant reduction in real storage contention and page thrashing).

The SE at the univ. responsible for tss/360 and I did a benchmark script ... fortran source, input/edit, compile and execute the same program (with emulated "think" & human time). I did it for cms ... and with 35 simulated cms users running the script got better response and thruput than he did running the same script with just four TSS/360 users (same hardware and configuration)

misc. recent posts mentioning 2301 (and/or its follow-on, the 2305 fixed head disk):
https://www.garlic.com/~lynn/2011b.html#69 Boeing Plant 2 ... End of an Era
https://www.garlic.com/~lynn/2011b.html#72 IBM Future System
https://www.garlic.com/~lynn/2011e.html#1 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#54 Downloading PoOps?
https://www.garlic.com/~lynn/2011e.html#75 I'd forgotten what a 2305 looked like
https://www.garlic.com/~lynn/2011e.html#79 I'd forgotten what a 2305 looked like
https://www.garlic.com/~lynn/2011f.html#46 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened

--
virtualization experience starting Jan1968, online at home since Mar1970

Are Tablets a Passing Fad?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Are Tablets a Passing Fad?
Newsgroups: alt.folklore.computers
Date: Sun, 24 Apr 2011 09:29:07 -0400
Morten Reistad <first@last.name> writes:
A plethora of gadgets, doing one or a few things well.

Not a desktop replacement.

And Microsoft can safely milk their product line for another decade before hitting serious resistance,


at the '96 MDC held at moscone ... behind the scenes the m'soft people were saying it was a turning point ... up until that point, people were always buying the latest version because they needed the new function. the claim in '96 was that 99% of people already had 95% of the function they needed/used. From then on, new version uptake was going to be because of momentum and/or a change-over to the '60s motivation to buy a new car every year ... because that was the thing to do (not because it was something needed).

m'soft revenue was pegged to everybody always buying the new version (and some growth because worldwide market hadn't completely saturated) ... if that changes to every other new version ... revenue could be cut in half ... but it doesn't necessarily/yet mean that majority of the people have stopped using m'soft software.

in the late 90s, there was motivation to play with fee-per-use revenue (somewhat like the oldtime mainframe leasing model or oldtime commercial timesharing) ... which is now being associated with "cloud computing".

some of the tablet hype ... is that with relatively small market penetration ... any growth is seen as significant growth ... and some of the folks that are constantly buying new toys ... have something else to buy (besides or instead of the latest PC). Lots of the hype is 1st & 2nd order (change) in sales numbers.

--
virtualization experience starting Jan1968, online at home since Mar1970

Are Tablets a Passing Fad?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Are Tablets a Passing Fad?
Newsgroups: alt.folklore.computers
Date: Sun, 24 Apr 2011 11:43:45 -0400
Stan Barr <plan.b@dsl.pipex.com> writes:
I've been trying other window managers on a machine that normally runs Gnome - no problems everything still works, even Gnome apps if you leave Gnome running. Xubuntu uses XFCE rather than Gnome/KDE for a lighter-weight installation, and there are other light-weight distributions.

I've been using XFCE for a number of years on Fedora ... I had used KDE but I found it getting more & more unusable. On F14, XFCE, KDE, and GNOME seem to co-exist w/o problems (I can even get both gnome&kde systemsettings/controlcenter running under XFCE). On F15, I've run into problem with KDE co-existing ... but since I rarely use it ... it isn't worth diagnosing.

--
virtualization experience starting Jan1968, online at home since Mar1970

Drum Memory with small Core Memory?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Drum Memory with small Core Memory?
Newsgroups: alt.folklore.computers, comp.arch
Date: Mon, 25 Apr 2011 16:41:11 -0400
John Levine <johnl@iecc.com> writes:
Swapping, then. When you had to swap a whole job, the speed of a drum rather than a disk made a big difference.

re:
https://www.garlic.com/~lynn/2011f.html#56 Drum Memory with small Core Memory?

with transfer rates being approx. equal ... a fixed-head drum/disk makes a lot more difference for demand page operations ... since there is an access latency for a possibly large number of operations. if the swap transfers were set up appropriately, then there was just a single access latency for a lengthy transfer.

on the mainframe, going from 3330 to 3380 ... there was only a modest improvement in access latency but a factor of ten change in transfer rate. "big pages" were created in MVS & VM to leverage the change ... where on outbound ... a full track of (10) pages was collected for the write ... and a page fault for any of the ten pages would transfer all ten pages in one operation.
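
A minimal sketch of the grouping idea (made-up C structures and stubbed I/O, not the actual MVS/VM implementation):

#include <stdio.h>
#include <stddef.h>

#define PAGES_PER_TRACK 10            /* 3380: ten 4K pages per track */

struct big_page {
    unsigned long track;              /* slot on the paging device       */
    unsigned vpage[PAGES_PER_TRACK];  /* virtual pages grouped together  */
    size_t count;
};

/* stubs standing in for the real paging I/O and page-table update */
static void read_track(unsigned long track) { printf("read track %lu\n", track); }
static void map_page(unsigned vpage)        { printf("  page %u now resident\n", vpage); }

/* page-out side: collect evicted pages until a full track is ready   */
static int add_to_big_page(struct big_page *bp, unsigned vpage)
{
    bp->vpage[bp->count++] = vpage;
    return bp->count == PAGES_PER_TRACK;   /* nonzero: write the track */
}

/* page-in side: a fault on any member reads the whole track, so all
   ten pages come in for a single access latency                      */
static void fault_in(const struct big_page *bp)
{
    read_track(bp->track);
    for (size_t i = 0; i < bp->count; i++)
        map_page(bp->vpage[i]);
}

int main(void)
{
    struct big_page bp = { .track = 42, .count = 0 };
    for (unsigned v = 100; v < 110; v++)
        if (add_to_big_page(&bp, v))
            printf("track %lu full -- one write for %d pages\n",
                   bp.track, PAGES_PER_TRACK);
    fault_in(&bp);                    /* fault on any of the ten       */
    return 0;
}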

big pages tended to increase real storage requirements and inflate working sets ... as a trade-off against a drastically reduced number of separate accesses (sort of part way between "small" demand paging and all-out swapping).

the issue was that the early implementations tended to totally trash the page replacement accuracy ... degrading the trade-off benefits.

old email about trying to correct a whole bunch of implementation problems in the "big page" implementation (that had existed for several years)
https://www.garlic.com/~lynn/2011e.html#email870320
in this post
https://www.garlic.com/~lynn/2011e.html#27 Multiple Virtual Memory

the above email mentions the term "SWAP" ... which actually is referring to the "big page" implementation. the reference to "STC drum" is an "electronic" device simulating a fixed-head device with no rotational latency (where, defined for "big pages", the system ran at 70% utilization, but running w/o "big pages", the system ran at 100% utilization ... effectively all of it higher thruput ... since paging operations had very small pathlength). the above also has a long list of past posts mentioning "big pages"

past posts mentioning page replacement work
https://www.garlic.com/~lynn/subtopic.html#wsclock

--
virtualization experience starting Jan1968, online at home since Mar1970

Dyadic vs AP: Was "CPU utilization/forecasting"

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Dyadic vs AP: Was "CPU utilization/forecasting"
Newsgroups: bit.listserv.ibm-main
Date: 25 Apr 2011 14:50:31 -0700
In <005a01cbff6c$b63dded0$22b99c70$@hawkins1960@sbcglobal.net>, on
04/20/2011 at 08:07 AM, Ron Hawkins <ron.hawkins1960@SBCGLOBAL.NET> said:
I remember spending some time playing with CPU affinity trying to keep the CPU bound jobs away from the AP

re:
https://www.garlic.com/~lynn/2011f.html#49 Dyadic vs AP: Was "CPU utilization/forecasting"
https://www.garlic.com/~lynn/2011f.html#50 Dyadic vs AP: Was "CPU utilization/forecasting"

360&370 had two-processor shared-memory multiprocessors and, although they had dedicated channels ... tended to try to simulate shared channels by configuring the same channel numbers on the two processors so they connected to the same controllers (at the same addresses, for controllers that supported multiple channel attachments) ... allowing I/O to be done to the same controller/device by both processors.

370-APs only had channels on one of the processors ... the other processor was purely for dedicated execution. I/O requests that originated on the attached processor (the one w/o channels ... or, in a multiprocessor, for a device only connected to a channel on the other processor) ... resulted in an internal kernel operation that handed off the i/o request to the processor with the appropriately connected channel.

one of the issues in cache machines ... was that high interrupt rates tended to have a very deleterious effect on cache-hit ratios (which translates to effective MIP rate) ... where cache entries for the running application got replaced with cache entries for interrupt & device i/o handling ... and then possibly replaced again when switching back to the running application.

A gimmick I used in the early/mid 70s for cache machines ... was, when the observed I/O interrupt rate exceeded a threshold ... to start running disabled for I/O interrupts ... but with a periodic timer interrupt. At the timer interrupt, all pending interrupts would be "drained" under software control (using SSM to enable for i/o interrupts). The increased delay in taking the i/o interrupts was more than offset by both the increased cache hit ratio (aka MIP rate) of the application running w/o interrupts, and the increased cache hit ratio (aka MIP rate) of effectively batch-processing multiple I/O interrupts.
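
A minimal sketch of the mode switch (hypothetical C names and a stubbed simulation; on 370 the enable/disable would have been done with SSM):

#include <stdio.h>

/* hypothetical platform hooks -- stubbed here so the sketch runs */
static int pending = 0;
static void enable_io_interrupts(void)   { /* SSM enabling I/O  */ }
static void disable_io_interrupts(void)  { /* SSM disabling I/O */ }
static int  io_interrupt_pending(void)   { return pending > 0; }
static void handle_one_io_interrupt(void){ pending--; }

#define RATE_THRESHOLD 500              /* interrupts/sec; assumed value */
static int batching = 0;                /* 1 = running disabled for I/O  */

/* called on every timer interrupt */
static void timer_tick(unsigned observed_rate)
{
    if (batching) {
        /* drain everything queued since the last tick in one batch    */
        enable_io_interrupts();
        while (io_interrupt_pending())
            handle_one_io_interrupt();
        disable_io_interrupts();
    }
    /* switch modes according to the measured interrupt rate           */
    if (observed_rate > RATE_THRESHOLD && !batching) {
        disable_io_interrupts();
        batching = 1;
    } else if (observed_rate <= RATE_THRESHOLD && batching) {
        enable_io_interrupts();
        batching = 0;
    }
}

int main(void)
{
    pending = 7;                        /* pretend 7 interrupts queued   */
    timer_tick(800);                    /* high rate: switch to batching */
    timer_tick(800);                    /* drains the 7 pending in batch */
    printf("pending after drain: %d\n", pending);
    return 0;
}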

For AP support, I also had a gimmick that tended to keep the CPU-intensive operations on the processor w/o channels (a sort of natural CPU affinity). two-processor 370 cache machine operation slowed the processor machine cycle by 10% (to account for cross-cache communication in multiprocessor operation) ... resulting in a two-processor machine having base hardware of 1.8 times a single processor ... multiprocessor software overhead then tended to result in a multiprocessor having 1.4-1.5 times the throughput of a uniprocessor.

For HONE 370APs, sometimes I could get 2-processor throughput more than twice single processor thruput. HONE was heavily compute intensive APL applications ... although some would periodically do lots of I/O. The natural processor/cache affinity (improved MIP rate) increased thruput (along with having extremely short multiprocessor support pathlengths) ... keeping the compute intensive (non I/O) application execution on the AP processor w/o channels. misc. past posts mentioning (virtual machine based) HONE (US HONE & HONE clones around the world provided world-wide sales & marketing support)
https://www.garlic.com/~lynn/subtopic.html#hone

This got messed up in the early 3081 dyadic time-frame. Since ACP/TPF didn't have multiprocessor support (and it hadn't yet been decided to do the 3083) ... VM was "enhanced" to try to improve ACP/TPF 3081 virtual machine thruput. A lot of VM pathlength that used to be serialized with virtual machine execution ... was made asynchronous ... running on the 2nd, presumably idle, processor ... with lots of request queuing and processor "shoulder tapping" (the increase in overhead theoretically offset by the reduction in ACP/TPF elapsed time). However, for customers that had been running fully loaded (non-ACP/TPF) multiprocessor operation ... the transition to this new release represented a significant degradation (the increased request queuing and shoulder tapping taking 10-15% of both processors).

Then there were a number of onsite visits at various large customers ... attempting other kinds of tuning to mask the enormously increased multiprocessor overhead in the new release (all the shoulder tapping also messed up the natural affinity characteristics ... motivating a large increase in explicitly specified processor affinity).

old email mentioning large gov. TLA ... trying to provide a variety of performance improvements to offset the multiprocessor overhead increase:
https://www.garlic.com/~lynn/2001f.html#email830420

--
virtualization experience starting Jan1968, online at home since Mar1970

Drum Memory with small Core Memory?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Drum Memory with small Core Memory?
Newsgroups: alt.folklore.computers, comp.arch
Date: Tue, 26 Apr 2011 09:21:55 -0400
John Levine <johnl@iecc.com> writes:
But it wasn't. The 2303 drum did transfers twice as fast as a 2311, I think because the drum used several heads in parallel.

The 2303 was only 4MB, so it was soon obsolete, but it was pretty fast. Not as fast as the 650's drum, but still pretty fast.


re:
https://www.garlic.com/~lynn/2011f.html#56 Drum Memory with small Core Memory?
https://www.garlic.com/~lynn/2011f.html#59 Drum Memory with small Core Memory?

the 2303 had about the same transfer rate as the 2314 .... it was the 2301 fixed-head drum ... which looked very much like the 2303 and had about the same capacity ... that transferred with four heads in parallel (and had four times the transfer rate).

an earlier post in the thread mentions more about the 2301 as a paging drum. it still had rotational latency ... and the cp67 delivered to the univ in jan1968 did FIFO queuing with a single transfer per operation (peaking at about 80 4kbyte page transfers per second). I redid it for rotational order and multiple requests per i/o operation ... and could get nearly 300 4kbyte page transfers per second (over a mbyte/sec).

2311 was 156kbytes/sec, 2314 was 310kbytes/sec
https://en.wikipedia.org/wiki/IBM_magnetic_disk_drives

recent thread about 2305 (fixed-head disk ... follow-on to 2301)
https://www.garlic.com/~lynn/2011e.html#75 I'd forgotten what a 2305 looked like
https://www.garlic.com/~lynn/2011e.html#79 I'd forgotten what a 2305 looked like

and the possibility of a 2305 attached to 360s (other than the 360/85). this post about spook base & igloo white .... mentions 2305s and 360/65s ... but it is much more likely he meant 2301 (the 2305 controller supported rotational position sensing and multiple exposures ... requiring the 2880 block-mux channel ... first available on the 360/85):
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html

--
virtualization experience starting Jan1968, online at home since Mar1970

Mixing Auth and Non-Auth Modules

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Mixing Auth and Non-Auth Modules
Newsgroups: bit.listserv.ibm-main
Date: 26 Apr 2011 07:28:14 -0700
jeff.holst@FISERV.COM (Jeff Holst) writes:
I think that when I was later in an MVS shop, our auditors used that same playbook, but I also think that they read slowly, as they seemed to find one new thing in the book each year.

when corporate came in for an audit of the SJR datacenter in the early 80s ... there was a big dustup with the auditors over demo programs (aka "games") ... which they wanted eliminated from every system ... as not being a "business use". Corporate had gone thru a cycle where the 3270 logon screen had "For Business Use Only" added. We managed to have that changed to "For Management Approved Use Only" ... arguing that games actually served a very useful purpose ... giving people exposure to a significantly better human interface experience ... which was hardly common in the period. We also used the argument that eliminating public games ... would just drive them underground, with each person having private disguised versions.

6670s (Copier III with a computer interface added) were appearing in every departmental area for distributed computer output. the 6670 driver had been modified to include a randomly selected quote on the separator pages. part of the audit was an off-hours sweep of all the distributed printers ... looking for sensitive output that was left out/unattended. In one of the areas, the auditors found an output separator page with the following quote:
[Business Maxims:] Signs, real and imagined, which belong on the walls of the nation's offices:
1) Never Try to Teach a Pig to Sing; It Wastes Your Time and It Annoys the Pig.
2) Sometimes the Crowd IS Right.
3) Auditors Are the People Who Go in After the War Is Lost and Bayonet the Wounded.
4) To Err Is Human -- To Forgive Is Not Company Policy.


... snip ...

the next day, the auditors tried to escalate, claiming that we were purposefully ridiculing them.

In the wake of Enron, congress passed sarbanes-oxley that significantly increased audit requirements and penalties. A few years ago I was at a financial conference in europe of european corporate CEOs and exchange presidents ... major theme was that the (significant) SOX audit requirements and costs were leaking out into the rest of the world. There was semi-humorous reference to the country hosting the conference on sunday cnn gps program
http://globalpublicsquare.blogs.cnn.com/2011/04/24/rent-the-country-of-liechtenstein-for-70k-a-night/

My position was that the increased audit requirements wouldn't make any significant dent in fraud (it was more likely just a full-employment favor to the audit industry by congress) and that possibly the only significant part of SOX was the section on informants. It turns out that apparently GAO also thot something similar and was doing reports reviewing public company financial filings, showing an uptick in fraudulent filings after SOX (a problem with both the audits and SEC enforcement).

In congressional testimony by the person that had tried for a decade to get SEC to do something about Madoff, there was mention that tips turn up 13 times more fraud than audits and SEC didn't have a tip hotline, but had a 1-800 line for companies to complain about audits.

--
virtualization experience starting Jan1968, online at home since Mar1970

The IBM Selective Sequence Electronic Calculator

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The IBM Selective Sequence Electronic Calculator
Newsgroups: alt.folklore.computers
Date: Tue, 26 Apr 2011 11:10:10 -0400
Quadibloc <jsavard@ecn.ab.ca> writes:
I know the university I went to used a PDP-11 as a front-end communications processor on their IBM 360/67, allowing ASCII terminals to be connected to it, leading to cost savings and other advantages over using IBM terminals only.

They used MTS as an operating system as well, which had advantages over IBM's timesharing software, which was lacklustre... however, the API wasn't the same as that of OS/360, and so while we used IBM's Fortran and PL/I compilers, they had to be slightly modified to run under MTS.

I had wondered why, given that you could do timesharing even on a PDP 8/I system with OS/8, why I don't recall hearing of anyone simply using computers like the PDP 8/I, or larger timesharing systems like the SDS 940, for timesharing... but with the ability to submit batch jobs to an IBM mainframe (or, for that matter, a Control Data machine) that did the heavy computational lifting.


as previously mentioned ... 360/67 was supposedly for tss/360 ... but because of significant tss/360 issues ... several places developed their own implementations to utilize 360/67 virtual memory (like michigan did with MTS). reference to pdp8 as mts terminal controller:
http://www.eecis.udel.edu/~mills/gallery/gallery7.html
some more mts
http://www.eecis.udel.edu/~mills/gallery/gallery8.html

recent referencs to cp67 ... virtual machine/virtual memory system developed for 360/67 ...
https://www.garlic.com/~lynn/2011e.html#63 Collection of APL documents
https://www.garlic.com/~lynn/2011e.html#72 Collection of APL documents
https://www.garlic.com/~lynn/2011e.html#79 I'd forgotten what a 2305 looked like
https://www.garlic.com/~lynn/2011e.html#83 History of APL -- Software Preservation Group
https://www.garlic.com/~lynn/2011e.html#84 New job for mainframes: Cloud platform
https://www.garlic.com/~lynn/2011f.html#3 History of APL -- Software Preservation Group
https://www.garlic.com/~lynn/2011f.html#6 New job for mainframes: Cloud platform
https://www.garlic.com/~lynn/2011f.html#12 New job for mainframes: Cloud platform
https://www.garlic.com/~lynn/2011f.html#36 Early mainframe tcp/ip support (from ibm-main mailing list)
https://www.garlic.com/~lynn/2011f.html#39 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011f.html#51 US HONE Datacenter consolidation
https://www.garlic.com/~lynn/2011f.html#56 Drum Memory with small Core Memory?
https://www.garlic.com/~lynn/2011f.html#61 Drum Memory with small Core Memory?

done at the science center ... misc. past posts
https://www.garlic.com/~lynn/subtopic.html#545tech

the cp67 installed at the univ. in jan1968 had terminal support for 1052 and 2741. the univ. had some number of tty/ascii terminals and I undertook to add tty/ascii support to cp67 ... consistent with the existing 1052&2741 support, which leveraged the 2702 "SAD" command as part of doing dynamic terminal identification. I had hoped to have a single "dial-in" number ("hunt group") for all dialup terminals. My dynamic terminal-type support worked just fine for direct/leased lines ... but there was a problem with dynamic dial-up lines. While it was possible to dynamically change the line-scanner associated with each port/line using the SAD command ... a short-cut in the 2702 hardwired the line speed for each port.

this was somewhat the motivation for the univ. to start a clone controller (that would be able to do both dynamic terminal type and dynamic line speed) using an Interdata/3. The mainframe channel was reverse engineered and a channel interface board built for the Interdata/3 ... programmed to simulate a 2702 controller. Later, four of us were written up as being responsible for some part of the clone controller business ... misc. past posts
https://www.garlic.com/~lynn/submain.html#360pcm

past posts mentioning MTS
https://www.garlic.com/~lynn/93.html#15 unit record & other controllers
https://www.garlic.com/~lynn/93.html#23 MTS & LLMPS?
https://www.garlic.com/~lynn/93.html#25 MTS & LLMPS?
https://www.garlic.com/~lynn/93.html#26 MTS & LLMPS?
https://www.garlic.com/~lynn/98.html#15 S/360 operating systems geneaology
https://www.garlic.com/~lynn/2000.html#89 Ux's good points.
https://www.garlic.com/~lynn/2000.html#91 Ux's good points.
https://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000c.html#44 WHAT IS A MAINFRAME???
https://www.garlic.com/~lynn/2000f.html#52 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#0 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001m.html#55 TSS/360
https://www.garlic.com/~lynn/2001n.html#45 Valid reference on lunar mission data being unreadable?
https://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002n.html#54 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002n.html#64 PLX
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#10 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003f.html#41 SLAC 370 Pascal compiler found
https://www.garlic.com/~lynn/2003k.html#5 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003k.html#9 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003l.html#30 Secure OS Thoughts
https://www.garlic.com/~lynn/2003l.html#41 Secure OS Thoughts
https://www.garlic.com/~lynn/2003m.html#32 SR 15,15 was: IEFBR14 Problems
https://www.garlic.com/~lynn/2004.html#46 DE-skilling was Re: ServerPak Install via QuickLoad Product
https://www.garlic.com/~lynn/2004.html#47 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
https://www.garlic.com/~lynn/2004c.html#47 IBM 360 memory
https://www.garlic.com/~lynn/2004g.html#4 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004l.html#16 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004n.html#4 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#25 Shipwrecks
https://www.garlic.com/~lynn/2004n.html#34 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004o.html#20 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004o.html#21 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005.html#5 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#18 IBM, UNIVAC/SPERRY, BURROUGHS, and friends. Compare?
https://www.garlic.com/~lynn/2005g.html#56 Software for IBM 360/30
https://www.garlic.com/~lynn/2005k.html#20 IBM/Watson autobiography--thoughts on?
https://www.garlic.com/~lynn/2005s.html#17 winscape?
https://www.garlic.com/~lynn/2006c.html#18 Change in computers as a hobbiest
https://www.garlic.com/~lynn/2006d.html#6 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006e.html#31 MCTS
https://www.garlic.com/~lynn/2006f.html#19 Over my head in a JES exit
https://www.garlic.com/~lynn/2006i.html#4 Mainframe vs. xSeries
https://www.garlic.com/~lynn/2006i.html#22 virtual memory
https://www.garlic.com/~lynn/2006k.html#41 PDP-1
https://www.garlic.com/~lynn/2006k.html#42 Arpa address
https://www.garlic.com/~lynn/2006m.html#42 Why Didn't The Cent Sign or the Exclamation Mark Print?
https://www.garlic.com/~lynn/2006m.html#47 The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2006n.html#43 MTS, Emacs, and... WYLBUR?
https://www.garlic.com/~lynn/2006o.html#3 MTS, Emacs, and... WYLBUR?
https://www.garlic.com/~lynn/2006o.html#36 Metroliner telephone article
https://www.garlic.com/~lynn/2007f.html#7 IBM S/360 series operating systems history
https://www.garlic.com/~lynn/2007f.html#62 What happened to the Teletype Corporation?
https://www.garlic.com/~lynn/2007j.html#6 MTS *FS tape format?
https://www.garlic.com/~lynn/2007m.html#60 Scholars needed to build a computer history bibliography
https://www.garlic.com/~lynn/2007q.html#15 The SLT Search LisT instruction - Maybe another one for the Wheelers
https://www.garlic.com/~lynn/2007t.html#54 new 40+ yr old, disruptive technology
https://www.garlic.com/~lynn/2007u.html#18 Folklore references to CP67 at Lincoln Labs
https://www.garlic.com/~lynn/2007u.html#23 T3 Sues IBM To Break its Mainframe Monopoly
https://www.garlic.com/~lynn/2007u.html#84 IBM Floating-point myths
https://www.garlic.com/~lynn/2007u.html#85 IBM Floating-point myths
https://www.garlic.com/~lynn/2007v.html#11 IBM mainframe history, was Floating-point myths
https://www.garlic.com/~lynn/2007v.html#32 MTS memories
https://www.garlic.com/~lynn/2007v.html#47 MTS memories
https://www.garlic.com/~lynn/2008d.html#94 The Economic Impact of Stimulating Broadband Nationally
https://www.garlic.com/~lynn/2008h.html#44 Two views of Microkernels (Re: Kernels
https://www.garlic.com/~lynn/2008h.html#78 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008q.html#65 APL
https://www.garlic.com/~lynn/2009j.html#77 More named/shared systems
https://www.garlic.com/~lynn/2009k.html#1 A Complete History Of Mainframe Computing
https://www.garlic.com/~lynn/2009k.html#70 An inComplete History Of Mainframe Computing
https://www.garlic.com/~lynn/2009l.html#34 August 7, 1944: today is the 65th Anniversary of the Birth of the Computer
https://www.garlic.com/~lynn/2009p.html#76 The 50th Anniversary of the Legendary IBM 1401
https://www.garlic.com/~lynn/2010g.html#34 someone smarter than Dave Cutler
https://www.garlic.com/~lynn/2010i.html#44 someone smarter than Dave Cutler
https://www.garlic.com/~lynn/2010j.html#37 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010j.html#67 Article says mainframe most cost-efficient platform
https://www.garlic.com/~lynn/2010k.html#11 TSO region size
https://www.garlic.com/~lynn/2010p.html#42 Which non-IBM software products (from ISVs) have been most significant to the mainframe's success?
https://www.garlic.com/~lynn/2011.html#6 IBM 360 display and Stanford Big Iron
https://www.garlic.com/~lynn/2011.html#73 Speed of Old Hard Disks - adcons
https://www.garlic.com/~lynn/2011.html#86 Utility of find single set bit instruction?
https://www.garlic.com/~lynn/2011b.html#44 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2011e.html#8 Multiple Virtual Memory

--
virtualization experience starting Jan1968, online at home since Mar1970

Are Americans serious about dealing with money laundering and the drug cartels?

From: lynn@garlic.com (Lynn Wheeler)
Date: 26 Apr, 2011
Subject: Are Americans serious about dealing with money laundering and the drug cartels?
Blog: Financial Crime Risk, Fraud and Security
re:
https://www.garlic.com/~lynn/2011f.html#54 Are Americans serious about dealing with money laundering and the drug cartels?

others besides Gramm ... note the treasury sec. ... formerly of goldman ... helped with the citi deal and then went on to citi ... also served a stint as chairman of citi.
https://en.wikipedia.org/wiki/Citigroup
http://www.nytimes.com/2008/04/27/business/27rubin.html
http://www.npr.org/templates/story/story.php?storyId=15995005
http://in.reuters.com/article/2009/01/09/us-chronology-rubin-sb-idINTRE5086XS20090109
http://politifi.com/news/Why-Robert-Rubin-and-Citibank-Execs-Should-Be-in-Deep-Trouble-1330381.html
http://www.reuters.com/article/2009/01/09/chronology-rubin-idUSN0931351920090109?pageNumber=1

also mentioned above as being part of the group blocking Born regarding commodity trading.
http://www.pbs.org/wgbh/pages/frontline/warning/
and while at citi tried to intervene on behalf of enron
http://www.truthdig.com/report/item/20080729_sucking_up_to_the_bankers/

bring it a little back to money laundering

Japan closes Citigroup branches
http://news.bbc.co.uk/2/hi/business/3666828.stm
Japan raps Citi for lax money laundering controls
http://www.reuters.com/article/2009/06/26/citigroup-japan-idUSN2528049820090626
Money laundering allegations hit Citibank Indonesia
http://www.asianewsnet.net/home/news.php?id=18570&sec=2

slightly humorous indirect money laundering from CNN GPS yesterday
http://globalpublicsquare.blogs.cnn.com/2011/04/24/rent-the-country-of-liechtenstein-for-70k-a-night/

the country had been on money laundering "black list" ... possibly the swiss banks had been outsourcing the actual money laundering transactions to the country next door.
http://www.indianexpress.com/Storyold/154519/

I was there a few years ago for a financial conference of European corporate and exchange CEOs (the conference theme focused on how the substantial SOX audit costs were leaking out into the rest of the world). The innkeeper made jokes about getting lots of guests with business cards that read "dept. of money laundering" (implying that they were in charge of doing money laundering).

On the regulatory enforcement theme ... as mentioned upthread, GAO reports have shown that there was even an uptick in fraudulent public company financial reports after SOX ... so other than being a full-employment favor for the audit companies, SOX appears to have had little effect.

if one was cynical ... then regarding anti-money laundering outside the country ... there might be some comment about leveraging the gov. to eliminate competition

related to the above about lobbying directed at preventing walmart (and others) from cutting down on interchange fees:

Swiped: Banks, Merchants And Why Washington Doesn't Work For You
http://www.huffingtonpost.com/2011/04/28/swipe-fees-interchange-banks-merchants_n_853574.html

--
virtualization experience starting Jan1968, online at home since Mar1970

The IBM Selective Sequence Electronic Calculator

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The IBM Selective Sequence Electronic Calculator
Newsgroups: alt.folklore.computers
Date: Wed, 27 Apr 2011 08:37:20 -0400
Charles Richmond <frizzle@tx.rr.com> writes:
It's a shame that anyone would be mocked for making stupid mistakes with their FORTRAN program. IME most mistakes in the beginning of a program's life... *are* stupid mistakes, especially if you really understand how to program in FORTRAN. There are just *so* many details that it is easy to forget one of them and screw up.

In the olden days at college, if I checked over someone's malfunctioning FORTRAN program, I always looked for "punched past column 72" errors first. One needs to rule out the "stupid mistakes" before one begins to bang one's head against the wall trying to analyze the algorithms. ;-)


re:
https://www.garlic.com/~lynn/2011f.html#63 The IBM Selective Sequence Electronic Calculator

then there is the opposite ... a graduate student from some other dept ... who was constantly coming by and demanding that somebody fix their programs so they ran correctly.

--
virtualization experience starting Jan1968, online at home since Mar1970

Bank email archives thrown open in financial crash report

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 27 Apr, 2011
Subject: Bank email archives thrown open in financial crash report
Blog: Financial Crime Risk, Fraud and Security
Bank email archives thrown open in financial crash report
http://www.computerworlduk.com/news/it-business/3274277/bank-email-archives-thrown-open-in-devastating-financial-crash-report/1

from above:
The internal email archives of ratings agencies and banks have been thrown open as part of a major government investigation, demonstrating the risk appetite of large Wall Street institutions before the global economic crash.

... snip ...

A lot of this was in testimony at the fall2008 congressional hearings into the rating agencies.

During the fall2008 congressional hearings into the rating agencies ... the comment was made that the rating agencies might blackmail the gov. into not prosecuting with the threat of downgrading the gov. credit rating .... the gov. credit rating has been in the news today.

There had been some securitized mortgages (CDOs) in the S&L crisis with doctored supporting documents for fraud. In the late 90s, we were asked to look at what could be done for integrity/assurance of securitized mortgages (CDOs) supporting documents.

In the first decade of this century, loan originators found they could pay rating agencies for triple-A ratings and immediately unload every loan (without regard to quality or borrower's qualifications). Speculators found that no-down, no-documentation, 1% interest-payment-only ARMs could make 2000% ROI buying&flipping properties in parts of the country with 20%-30% inflation.

Buyers of triple-A rated toxic CDOs didn't care about supporting documents, they were buying purely based on the triple-A rating. Rating agencies no longer cared about supporting documentation because they were being paid to give triple-A ratings. Supporting documentation just slowed down the loan originators' process of issuing loans. Since nobody cared about supporting documentation, it became superfluous ... which also meant there was no longer any issue about supporting documentation integrity/assurance.

Lending money used to be about making profit on the loan payments over the life of the loan. With triple-A rated toxic CDOs, for many, it became purely a matter of the fees & commissions on the transactions and doing as many as possible. There were reportedly $27T in triple-A rated toxic CDO transactions done during the bubble ... with trillions in fees & commissions disappearing into various pockets.

Possibly an aggregate 15%-20% take on the $27T (up to $5.4T) as the various transactions wander through the infrastructure (starting with the original real estate sale). Reports were that wall street tripled in size (as percent of GDP) during the bubble. Also the NY state comptroller had a report that aggregate wall street bonuses spiked over 400% during the bubble.

and as has been referenced in other discussions
http://blogs.forbes.com/neilweinberg/2011/04/14/corrupt-bank-oversight-is-creating-new-immoral-hazard/
and
http://www.nytimes.com/2011/04/14/business/14prosecute.html?_r=2&hp

there are various players with other motivations for not wanting to see prosecution (like many members of congress possibly having taken significant favors).

much of this came up in the fall2008 congressional hearings into the rating agencies (reference that rating agencies might blackmail the gov into not prosecuting with the threat of a credit downgrade):

S&P's Credibility Under Fire As Agency Issues US Debt Warning; Bipartisan Senate Report Cited S&P for Enabling US Mortgage Meltdown
http://abcnews.go.com/Politics/standard-poors-credibility-fire-us-debt-warning/story?id=13407823

Blog reference suggesting that a lot of the news this month
http://www.winningprogressive.org/culprits-of-2008-financial-collapse-identified-world-ignores-5

has been to distract from this release:

"Wall Street and the Financial Crisis: Anatomy of a Financial Collapse"
http://levin.senate.gov/newsroom/supporting/2011/PSI_WallStreetCrisis_041311.pdf

Part of the overly prescriptive legislation may be show/obfuscation because it isn't addressing splitting apart risky investment banking from the safety&soundness of depository institutions (where the combination resulted in misaligned business processes and people motivated to do the wrong thing ... a side-effect of the repeal of Glass-Steagall). Furthermore, to be effective, legislation and regulation require enforcement ... and while a lot of regulation was eliminated leading up to the financial mess ... there was still a whole lot that just wasn't being enforced.

--
virtualization experience starting Jan1968, online at home since Mar1970

Old email from spring 1985

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Old email from spring 1985
Newsgroups: alt.folklore.computers
Date: Thu, 28 Apr 2011 09:50:30 -0400
Date: 04/09/85 16:23:16
From: wheeler
To: somebody at SJR

re: talk; will have abstract shortly ...

referenced note:

Lynn,

I am running a seminar on distributed systems here in the CS department and I would very much like to have you present the multi-micro-370 project/machine to us. This might create some enthuziasm among local researchers and managers for your project, and you might get some usefull feedback. There is a free slot on April 16 at 10 a.m. Could you make it?


... snip ... top of post, old email index

old reference to packaging large number of 370 & 801 chips in rack
https://www.garlic.com/~lynn/2004m.html#17 mainframe and microprocessor
more recent reference:
https://www.garlic.com/~lynn/2011.html#19 zLinux OR Linux on zEnterprise Blade Extension???

somewhat back-to-the-future with latest mainframe announcements.
http://www-03.ibm.com/systems/z/hardware/zenterprise/

Date: 04/09/85 16:31:14
From: wheeler
To: kingston

re: replies; didn't bother to reply since most of my comments about global LRU and >16meg are detailed in VMPERF SCRIPT. Extra corswap bits is easy fall-out of the 1.5bit algorithm change since it scavenges the high-byte of CORPGPNT for (max) eight extra psuedo reference bits already.


... snip ... top of post, old email index

past email mentioning global LRU
https://www.garlic.com/~lynn/lhwemail.html#globalru

Date: 04/09/85 17:44:01
From: wheeler
To: raleigh

fyi ... HSDT was presented to NSF ... and they are planning on using it for backbone to connect all the super-computer centers. Over the next 2-6 weeks will be presenting it to numerous other government & research organizations.

... forgot, using SDLC chip for serialization/deserialization ... the fastest one on the market only goes 2 megabits. HSDT won't use SDLC because it will have to run at least 6 megabits ... and going to higher rates later. Hardware people tell me that because of the complexity of SDLC, chips for SDLC will continuelly lag most of the various other alternatives ... either being much slower &/or more expensive.


... snip ... top of post, old email index, NSFNET email

other old email mentioning hsdt
https://www.garlic.com/~lynn/lhwemail.html#hsdt
and nsfnet
https://www.garlic.com/~lynn/lhwemail.html#nsfnet

Date: 3 May 1985, 11:20:56 PST
To: wheeler
From: somebody at SLAC

I very much enjoyed the talk you gave on PAM in Toronto. I have done some work at SLAC trying to make CMS do I/O faster, but PAM sounds like a much better general solution. I would like to read the two Research reports you mentioned. Could you tell me how I might get a copy?


... snip ... top of post, old email index

misc. past posts mentioning paged mapped filesystem
https://www.garlic.com/~lynn/submain.html#mmap

post from earlier this year
https://www.garlic.com/~lynn/2011.html#4 Is email dead? What do you think?

with old email referencing using the low-level API to redo CP's spool file implementation:
https://www.garlic.com/~lynn/2011.html#email870306

... and getting ready for the move up the hill (from bldg. 28) to Almaden:

Date: 3 May 1985, 16:13:25 PDT
Subject: Friday

Busy week at 028 - Almaden planning swings into high gear, with the computer lab being occupiable in less than 5 weeks. How about going to Emperor Norton's and discussing some OTHER topics? (Or maybe just eating and drinking ourselves into a frenzy?)


... snip ... top of post, old email index

--
virtualization experience starting Jan1968, online at home since Mar1970

program coding pads

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: program coding pads
Newsgroups: alt.folklore.computers
Date: Thu, 28 Apr 2011 11:03:16 -0400
Lawrence Statton <lawrence@cluon.com> writes:
I learned to type quite early -- around 11 years old. I could never master writing cursive -- well I could WRITE IT, but nobody but me could read what I had written. I could not slow down enough to make the letterforms distinct. I also never really got the fine-motor control to draw little loopy things.

that was about the time I learned ... but I taught myself ... some old stuff found in dump ... past ref
https://www.garlic.com/~lynn/2002d.html#34 Jeez, garlic.com -
https://www.garlic.com/~lynn/2005e.html#63 Mozilla v Firefox
https://www.garlic.com/~lynn/2005f.html#2 Mozilla v Firefox

--
virtualization experience starting Jan1968, online at home since Mar1970

how to get a command result without writing it to a file

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: how to get a command result without writing it to a file
Newsgroups: comp.lang.rexx
Date: Thu, 28 Apr 2011 13:30:44 -0400
"Dave Saville" <dave@invalid.invalid> writes:
Not really :-) IBM Mainframe MVS, and I guess its current incarnation, used to have page tables that were themselves pageable. Seemed to work then.

for pagetables of segments that have no corresponding virtual pages in real memory ... it was possible to deallocate the actual pagetable (no information that needs to be preserved) and turn on the invalid bit in the segment table entry (i.e. when none of a segment's virtual pages are in real storage, the information reduces to "all pages invalid" ... which can be summarized by turning on the invalid bit in the segment table entry and deallocating the corresponding pagetable storage).
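
A minimal sketch in C (invented names, nothing like the actual CP data structures) of that summarization trick: when the last resident page of a segment is paged out, the pagetable carries no information beyond "all invalid", so it can be freed and replaced by the segment-table invalid bit (per-page disk locations and similar control information would be kept elsewhere).

#include <stdlib.h>
#include <assert.h>

#define PAGES_PER_SEG 16

/* hypothetical page table entry; disk locations etc. live in other
   (separately paged) control blocks, so nothing here needs preserving */
struct pte { int valid; long frame; };

struct segment_entry {
    int invalid;              /* segment table invalid bit */
    struct pte *pagetable;    /* NULL while the segment is summarized as "all invalid" */
    int resident;             /* count of this segment's pages in real storage */
};

/* page-out one page; if it was the segment's last resident page, the whole
   pagetable is redundant and its storage can be deallocated */
static void page_out(struct segment_entry *seg, int pageno)
{
    assert(seg->pagetable && seg->pagetable[pageno].valid);
    seg->pagetable[pageno].valid = 0;
    if (--seg->resident == 0) {
        free(seg->pagetable);     /* no information that needs to be preserved */
        seg->pagetable = NULL;
        seg->invalid = 1;         /* the segment table entry now says it all */
    }
}

/* on the next reference, rebuild an all-invalid pagetable on demand */
static void touch_segment(struct segment_entry *seg)
{
    if (seg->invalid) {
        seg->pagetable = calloc(PAGES_PER_SEG, sizeof(struct pte));
        seg->resident = 0;
        seg->invalid = 0;
    }
}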

I had earlier "paged" other storage associated with the virtual memory infrastructure ... as had other virtual memory implementations ... especially single-level-store and paged-mapped architectures ... where some of these other tables were on par with file directory information (a little like VTOC ttr). I had also made the change to "page" less-critical portions of the fixed kernel (both changes reducing fixed storage requirements). Just had to have the dependency graph ... so there wasn't a deadly embrace ... where something that had been paged out was required in order to perform the restore/page-in.

note that the 3090 put something like a pagefile in RAM .... called expanded store. basically the processor memory technology packaging couldn't get all of the storage they wanted within the latency required for a processor cache miss. there had been "electronic" (paging) disks (prior to 3090) where additional memory was used to simulate fixed-head disks.

the 3090 expanded store was an extra wide bus with a synchronous move instruction (push pages out to expanded store or pull pages back in from expanded store) ... the elapsed time for the synchronous operation was significantly shorter than the pathlength and elapsed time for performing an asynchronous i/o operation.
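
A back-of-envelope sketch of that trade-off with purely illustrative numbers (assumptions, not measured 3090 figures): the synchronous wide-bus move wins whenever its cost is below just the CPU pathlength of building, starting, and completing an asynchronous paging I/O, before even counting device latency or task-switch overhead.

#include <stdio.h>

int main(void)
{
    /* all figures are illustrative assumptions, not 3090 measurements */
    double sync_move_us  = 75.0;    /* synchronous page move over the wide bus */
    double async_path_us = 300.0;   /* CPU pathlength to build/start/complete async I/O */
    double async_wait_us = 5000.0;  /* device latency to be hidden by task-switching */

    printf("synchronous move: %6.0f us of CPU, task keeps control throughout\n",
           sync_move_us);
    printf("async paging I/O: %6.0f us of CPU pathlength + %.0f us device latency\n",
           async_path_us, async_wait_us);
    if (sync_move_us < async_path_us)
        printf("-> synchronous move wins before even counting latency/task-switch\n");
    return 0;
}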

for later machines, memory technology packaging enabled as much memory as you wanted for processor use ... but "LPAR" configuration (still) allowed some of it to be allocated for "expanded store" simulation. all other things being equal ... directly being able to always access a page is more efficient than moving it back&forth. However, there were some system idiosyncrasies (and/or bugs) ... that allowed configurations to perform better with some "expanded store" ... rather than everything directly addressable.

A simple scenario was system limitations of 2gbyte addressability and real memory much larger than 2gbyte.

The 3090 expanded store bus was also used for attaching HIPPI channels/devices (100mbyte/sec) ... since standard 3090 i/o facilities couldn't operate that fast. HIPPI was cut into the side of the expanded store bus and channel I/O was performed with peek/poke paradigm (using "reserved" expanded store addresses).

misc. past posts mentioning expanded store:
https://www.garlic.com/~lynn/2003p.html#41 comp.arch classic: the 10-bit byte
https://www.garlic.com/~lynn/2006.html#17 {SPAM?} DCSS as SWAP disk for z/Linux
https://www.garlic.com/~lynn/2006k.html#57 virtual memory
https://www.garlic.com/~lynn/2006l.html#43 One or two CPUs - the pros & cons
https://www.garlic.com/~lynn/2006r.html#36 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006s.html#16 memory, 360 lcs, 3090 expanded store, etc
https://www.garlic.com/~lynn/2006s.html#17 bandwidth of a swallow (was: Real core)
https://www.garlic.com/~lynn/2006s.html#20 real core
https://www.garlic.com/~lynn/2007c.html#23 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2007f.html#18 What to do with extra storage on new z9
https://www.garlic.com/~lynn/2007g.html#67 Unusual Floating-Point Format Remembered?
https://www.garlic.com/~lynn/2007o.html#26 Tom's Hdw review of SSDs
https://www.garlic.com/~lynn/2007p.html#11 what does xp do when system is copying
https://www.garlic.com/~lynn/2007s.html#9 Poster of computer hardware events?
https://www.garlic.com/~lynn/2008.html#49 IBM LCS
https://www.garlic.com/~lynn/2008b.html#15 Flash memory arrays
https://www.garlic.com/~lynn/2008f.html#6 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2008f.html#8 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2010.html#86 locate mode, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2011e.html#62 3090 ... announce 12Feb85

--
virtualization experience starting Jan1968, online at home since Mar1970

Z chip at ISSCC

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Z chip at ISSCC
Newsgroups: comp.arch
Date: Thu, 28 Apr 2011 16:31:55 -0400
Robert Myers <rbmyersusa@gmail.com> writes:
Misunderstandings abound. In the absence of better information, I blame the outsized influence of IBM for the fact that (with the exception of the CDC/Cray offshoot), machines have been designed for commercial transaction processing, where the misery of cache coherency seems inevitable. Technical computing has always been an afterthought (except when it comes to bragging rights), and I have a hard time imagining circumstances where cache coherency helps technical computing, with the possible exception of the operating system.

john did 801/risc in the mid-70s ... which I claim went to the opposite extreme in hardware complexity from "Future System" ... including no cache consistency ... even between I and D caches ... which requires a little bit of help from a loader ... where some of the cache lines for a program (being loaded) may have been modified and still be in the D-cache (requiring a flush to memory and an invalidate, on the off-chance they might be in the I-cache). This can show up with JIT also ... since stuff that had been generated data (modified in the D-cache) ... has to now show up in the I-cache.
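
A minimal sketch of the loader/JIT obligation on such a machine, using the GCC/Clang __builtin___clear_cache builtin and an x86-64-only stub so it stays self-contained: newly written instructions go through the data side, so they have to be flushed from the D-cache and invalidated in the I-cache range before being executed (on x86 the call is effectively a no-op, but on machines without I/D coherence it is what makes the generated code visible to instruction fetch).

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* grab a page we can both write and execute (a real JIT would use W^X phases) */
    unsigned char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) return 1;

#if defined(__x86_64__)
    /* mov eax,42 ; ret -- written through the data side, so it lands in the D-cache */
    static const unsigned char code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };
    memcpy(buf, code, sizeof code);

    /* the loader/JIT step described above: flush the modified lines to memory and
       invalidate any stale copies in the I-cache for this address range */
    __builtin___clear_cache((char *)buf, (char *)(buf + sizeof code));

    int (*fn)(void) = (int (*)(void))buf;
    printf("generated code returned %d\n", fn());
#else
    (void)buf;
    puts("machine-code stub here is x86-64 only; the clear_cache pattern is the point");
#endif
    munmap(buf, 4096);
    return 0;
}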

It was one of the reasons that we did cluster scale-up with large number of processors ... misc. old email mentioning cluster scale-up
https://www.garlic.com/~lynn/lhwemail.html#medusa

later there was somerset/aim ... with apple & motorola (the executive we reported to when doing cluster scale-up ... went over to head up somerset) ... which did some work on cache consistency. misc. old email 801, risc, romp, rios, somerset, etc
https://www.garlic.com/~lynn/lhwemail.html#801

misc. past posts mentioning future system
https://www.garlic.com/~lynn/submain.html#futuresys

misc. past posts mentioning cluster stuff
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

how to get a command result without writing it to a file

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: how to get a command result without writing it to a file
Newsgroups: comp.lang.rexx
Date: Thu, 28 Apr 2011 18:12:24 -0400
"Dave Saville" <dave@invalid.invalid> writes:
This sounds interesting history. What did you do if you don't mind me asking?

re:
https://www.garlic.com/~lynn/2011f.html#69 how to get a command result without writing it to a file

the science center had done a virtual machine operating system called cp40 on a specially modified 360/40 with virtual memory hardware. when the standard 360/67 (w/virtual memory) became available, the 360/40 was replaced with a 360/67 and cp40 morphed into cp67.

3 people from the science center came out to the univ (where i was an undergraduate) in jan68 to install cp67 (the univ. had gotten a 360/67 for running tss/360 .... which wasn't going well, so the 360/67 was running in real address mode with os/360). misc. past posts mentioning science center
https://www.garlic.com/~lynn/subtopic.html#545tech

while at the univ I rewrote significant sections of cp67 ... lots of enhancements to pathlength, resource management, page replacement, i/o operations, etc. old post with part of the presentation I gave at the '68 share meeting later that year (mostly about pathlength changes):
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

i re-arranged the kernel so that all the routines to be paged were located contiguously at the end of the kernel and organized into 4kbyte page chunks. I created a dummy virtual memory table that represented the kernel image, drew the line at the start of the "pageable" kernel ... and allowed all of those pages to be "paged". I then modified the routine that handled kernel calls/returns ... so that calls to addresses above the "line" would have the target page fetched & locked (using the convention for doing simulated virtual machine I/O) ... and returns would decrement the lock count.
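
A toy model in C (invented names and structure, nothing like the actual cp67 code) of that call/return convention: a call to a routine above the "pageable line" fetches the target page if needed and bumps a lock count so it can't be stolen while the call is outstanding; the return decrements the count, making the page eligible for replacement again.

#include <stdio.h>

#define KERNEL_PAGES  64
#define PAGEABLE_LINE 48            /* pages at or above this index are pageable kernel */

struct kpage { int resident; int lockcount; };
static struct kpage kernel[KERNEL_PAGES];

static void fetch_page(int p)       /* stand-in for the page-in I/O */
{
    kernel[p].resident = 1;
}

/* wrapper around the kernel call linkage for a target routine in page 'p' */
static void kernel_call(int p, void (*routine)(void))
{
    if (p >= PAGEABLE_LINE) {
        if (!kernel[p].resident)
            fetch_page(p);          /* page-fetch on demand */
        kernel[p].lockcount++;      /* pinned while the call is outstanding */
    }
    routine();
    if (p >= PAGEABLE_LINE)
        kernel[p].lockcount--;      /* once the count hits zero the page may be stolen */
}

static void demo_routine(void) { puts("running a pageable kernel routine"); }

int main(void)
{
    kernel_call(50, demo_routine);
    printf("page 50: resident=%d lockcount=%d\n",
           kernel[50].resident, kernel[50].lockcount);
    return 0;
}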

for each virtual machine ... there were two types of tables describing its virtual pages. I created another dummy virtual memory (similar to what I did for the paging kernel) for each virtual machine ... where I would page out virtual machine control information. Each virtual machine virtual segment had a pagetable and a "swaptable" ... the "swaptable" contained control information about each virtual page ... the location of the virtual page on disk, a shadow image of the virtual page storage keys, and some misc. other stuff. when all virtual pages for a segment were no longer in storage, i could dissolve the pagetable (using the segment invalid flag), and page out the corresponding swaptable.

some misc. old email about page replacement algorithm
https://www.garlic.com/~lynn/lhwemail.html#globallru

some old email about porting changes from cp67 to vm370
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

part of the support mentioned above ... was moving cms paged-mapped filesystem implementation from cp67 to vm370. misc. past posts mentioning cms paged-mapped filesystem support
https://www.garlic.com/~lynn/submain.html#mmap

at various points, some of the stuff would periodically leak out into various product releases.

some recent posts about paging, fixed-head devices, electronic disks
https://www.garlic.com/~lynn/2011f.html#56 Drum Memory with small Core Memory?
https://www.garlic.com/~lynn/2011f.html#59 Drum Memory with small Core Memory?
https://www.garlic.com/~lynn/2011f.html#61 Drum Memory with small Core Memory?

slightly bringing it back to rexx ... very early days of rexx (when it was still called rex) ... i wanted to demonstrate that rex wasn't just another pretty scripting language. For the demo, I would rewrite the problem-determination/dump-analyzer application implemented in assembler (there was a whole dept. in endicott supporting it) in rex ... doing it half-time over a period of 3 months, and it would have ten times the function and run ten times faster. misc. past posts mentioning dumprx
https://www.garlic.com/~lynn/submain.html#dumprx

for other random ... from old dmsrex h-assembler listing
DMSREX assembled from REXPAK (2099 records, 04/15/83 17:19:55)

--
virtualization experience starting Jan1968, online at home since Mar1970

program coding pads

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: program coding pads
Newsgroups: alt.folklore.computers
Date: Fri, 29 Apr 2011 18:27:10 -0400
Charles Richmond <frizzle@tx.rr.com> writes:
At a PPoE, I had to write a program that manipulated a hardware device. We were doing the work for a large company; this company had purchased a device for us to test with and docs were sent with it. You had to put certain values in certain registers on the device and then the right things would happen. I coded to the documentation, and it did *not* work!

Eventually, I called up the company and asked to speak to the firmware guy that created the device. He told me the *right* things to do over the phone, and I penciled up my copy of the docs and wrote the code. Otherwise, I would *never* have gotten it to work!


they let me wander around bldgs. 14&15 and play disk engineer. the disk development environment was using stand-alone, dedicated, scheduled mainframe testing time for the different testcells (7x24 scheduling). At one point, they had tried to use MVS to be able to do multiple, concurrent testing in an operating system environment ... but found that even with a single testcell ... MVS had a 15min MTBF (requiring reboot). I offered to rewrite the I/O supervisor to be bullet proof and never fail ... so they could do anytime, on-demand, concurrent testing (significantly improving productivity). I not only had to figure out how things worked correctly ... but also all the ways that they might work incorrectly. An eventual side-effect was that I frequently got called in to diagnose development issues when things weren't working right (sometimes starting out with the claim that it was actually a software issue and my fault).

misc. past posts getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

Wylbur, Orvyl, Milton, CRBE/CRJE were all used (and sometimes liked) in the past

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 30 Apr, 2011
Subject: Wylbur, Orvyl, Milton, CRBE/CRJE were all used (and sometimes liked) in the past.
Blog: Old Geek Registry
re:
http://lnkd.in/sauZ4H

do you mean this one:
http://www.stanford.edu/dept/its/support/wylorv/

this has little ORVYL history ... apparently having been done on 360/67 (360/65 with virtual memory hardware) at stanford in the late 60s (similar to what Univ. of Mich. did for MTS on their 360/67) ... see section 1.3:
http://www.slac.stanford.edu/spires/explain/manuals/ORVMAN.HTML

above mentions Richard Carr did substantial work on ORVYL-II (possibly migration from 360/67 to 370 w/virtual memory support).

In the early 80s, Carr was at Tandem and also finishing up his PhD at stanford on global LRU page replacement algorithms (and encountering significant academic resistance from some corners regarding global LRU vis-a-vis local LRU).

This is old mail
https://www.garlic.com/~lynn/2006w.html#email821019

trying to provide supporting assistance in this post
https://www.garlic.com/~lynn/2006w.html#46

Jim Gray had approached me at SIGOPS (Asilomar, dec81) about providing supporting evidence ... since I had done something similar as an undergraduate in the 60s on cp67. Even tho the reference was to work I had done as an undergraduate ... it took me nearly a year to get corporate approval to send the reply (hopefully it wasn't because the corporation was taking sides in the academic dispute).

additional global LRU topic drift ... old email
https://www.garlic.com/~lynn/lhwemail.html#globallru

recent MTS post in a.f.c. with some URLs to old (MTS) pictures/lore
https://www.garlic.com/~lynn/2011f.html#63

another recent post in a.f.c. mentioning having hacked 2741&tty terminal support into HASP with a conversational editor that implemented CMS editor syntax (rewrite from scratch) ... also references other HASP, Future System, SVS & MVS history
https://www.garlic.com/~lynn/2011d.html#73

wiki reference (for orvyl, wylbur and milton):
https://en.wikipedia.org/wiki/ORVYL_and_WYLBUR

refers to SLAC & CERN continuing to use them into the 90s. SLAC & CERN were sister organizations sharing lots of software. SLAC was also host of the monthly (vm/cms user group) BAYBUNCH meetings.

for other topic drift ... reference to (vm/cms) SLAC being 1st webserver outside CERN
http://www.slac.stanford.edu/history/earlyweb/history.shtml

story of SGML evolving into HTML at CERN
http://infomesh.net/html/history/early/

GML was invented in 1969 ... misc. past posts
https://www.garlic.com/~lynn/submain.html#sgml

at the science center ... misc. past posts
https://www.garlic.com/~lynn/subtopic.html#545tech

before morphing into the ISO standard SGML in the mid-80s (and later morphing into HTML).

The original Orvyl was an operating system for the "bare machine" (the 360/67 had virtual memory hardware support) ... later Orvyl appears to have been redone for hosting under MVS (more like CICS handling all its own subtasks?). Various flavors of Wylbur were also done that could run w/o Orvyl.

This is a recent post about Adventure ... which appeared to have come over from Stanford's PDP machine to Stanford's Orvyl (& redone from PDP fortran to PLI), and which could run in a standard TSO environment (as well as under CMS OS/360 simulation) with the asm code (converted tread/twrite to tget/tput):
https://www.garlic.com/~lynn/2011b.html#41

In the 70s, I had gotten a CMS Fortran version ... which appeared to have come over from Stanford's PDP machine to TYMSHARE's PDP machine and then to TYMSHARE vm370/cms.

--
virtualization experience starting Jan1968, online at home since Mar1970

The IBM Selective Sequence Electronic Calculator

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The IBM Selective Sequence Electronic Calculator
Newsgroups: alt.folklore.computers
Date: Sun, 01 May 2011 09:35:57 -0400
jmfbahciv <See.above@aol.com> writes:
OTOH, you no longer have to work for the 7 sisters to get access to stand-alone gear. In the 60s and early 70s, if you wanted to play with computers, you had to find a job which allowed you to get in that locked machine room or go to work for a manufacturer who had brand new gear never touched by human coders.

when I was an undergraduate in the 60s ... the univ. turned over responsibility for operating system support to me ... at the time they would shut down the datacenter at 8am sat. morning and wouldn't start it again until 8am mon ... I got everything in the machine room to myself for 48hrs straight on the weekends ... 48hrs w/o sleep made monday classes difficult.

--
virtualization experience starting Jan1968, online at home since Mar1970

Wylbur, Orvyl, Milton, CRBE/CRJE were all used (and sometimes liked) in the past

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 2 May, 2011
Subject: Wylbur, Orvyl, Milton, CRBE/CRJE were all used (and sometimes liked) in the past.
Blog: Old Geek Registry
re:
http://lnkd.in/sauZ4H
and
https://www.garlic.com/~lynn/2011f.html#73 Wylbur, Orvyl, Milton, CRBE/CRJE were all used (and sometimes liked) in the past.

HONE (HONE provided online worldwide sales&marketing support; the US datacenters had been consolidated in Palo Alto in the mid-70s, in a bldg. that is now next door to the Facebook bldg), SLAC, Tymshare, and Stanford were all in close physical proximity. I would go by some number of them ... especially on the day of the monthly Baybunch meetings at SLAC.

Tymshare had started hosting the online VMSHARE computer conferencing Aug76 ... archives here:
http://vm.marist.edu/~vmshare/

This is old email about dropping by Tymshare and getting a demo of Adventure ... and then making arrangements for source. However, somebody in the UK walked a copy from a univ. machine over to a corporate machine and sent me a copy over the internal network
https://www.garlic.com/~lynn/2006y.html#email780405
https://www.garlic.com/~lynn/2006y.html#email780405b

The internal network was larger than the arpanet/internet from just about the beginning until late '85 or possibly early '86 ... misc. past posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

I had also made provisions with Tymshare to get monthly copies of the VMSHARE (and later PCSHARE) files and made them available on the internal network ... misc. old email mentioning VMSHARE
https://www.garlic.com/~lynn/lhwemail.html#vmshare

I also got blamed for online computer conferencing on the internal network in the late 70s and early 80s.

--
virtualization experience starting Jan1968, online at home since Mar1970

PIC code, RISC versus CISC

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PIC code, RISC versus CISC
Newsgroups: comp.arch
Date: Mon, 02 May 2011 11:04:58 -0400
Michael S <already5chosen@yahoo.com> writes:
I think, you have narrower definition of PIC than the one used by George Neuner (and certainly narrower than what Andy Glew was talking about, but he didn't used the term PIC). Andy and George were talking about the case when not only code but the data segment as well (or, may be, only data segment) starts at virtual address that is unknown at the link time. On x386 position-independent data access is not easy and have significant cost in code size. Depending on specifics of the code it could also incur a measurable impact in execution speed although I'd guess that more often it does not. In theory it all should be easier on [i]AMD64, how much difference it makes in practice I don't know.

tss/360 supported "PIC" for 360/67 ... but os/360 conventions didn't.

I had done page-mapped filesystem for cp67/cms (and ported to vm370/cms) some past posts
https://www.garlic.com/~lynn/submain.html#mmap

with semantics that allowed for "PIC" (arbitrary location for both code & data) ... however, much of cms adopted os/360 applications and conventions (using some amount of simulation of os/360 functions). as a result, it prevented effective use of "PIC" w/o a lot of hacking ... misc. past posts mentioning painful hacking to achieve "PIC" with code that was predominately os/360 oriented
https://www.garlic.com/~lynn/submain.html#adcon

--
virtualization experience starting Jan1968, online at home since Mar1970

Overloaded acronyms

Refed: **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Overloaded acronyms
Newsgroups: bit.listserv.ibm-main
Date: 2 May 2011 08:40:19 -0700
at one point there was almost PCO (personal computing option) ... sort of TSO for VS/1 ... however it was eventually pointed out that PCO was also initials for political party in europe ... and PCO morphed into VS/PC.

there was one plan to have VS/1 machines already preloaded with vm/cms (sort of like an early flavor of LPARs) ... using CMS as the interactive component. PCO was being positioned as an alternative. The PCO group had a "simulator" showing PCO performance something like ten times that of vm/cms ... their simulation group would "run" some number of "benchmarks" ... and then the vm/cms group was asked to perform similar (real) benchmarks (taking up a significant percentage of all vm/cms resources on the benchmarks ... taken away from doing actual development). When PCO was finally operational ... it turned out to be slower than vm/cms (but they had managed to waste a significant percentage of vm/cms development resources on the fictitious benchmarks).

various internal politics blocked the strategy to preload vm/cms on every mid-range machine. then in the wake of the death of the Future System effort ... the MVS/XA effort managed to convince corporate to completely kill off vm/cms (shutting down the development group and moving everybody to POK to support MVS/XA ... with the claim that they wouldn't otherwise make the MVS/XA ship schedule). Endicott eventually managed to save the vm/cms product mission ... but had to reconstitute a development group from scratch. misc. past posts mentioning future system
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

Wylbur, Orvyl, Milton, CRBE/CRJE were all used (and sometimes liked) in the past

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 2 May, 2011
Subject: Wylbur, Orvyl, Milton, CRBE/CRJE were all used (and sometimes liked) in the past.
Blog: Old Geek Registry
re:
http://lnkd.in/sauZ4H
and
https://www.garlic.com/~lynn/2011f.html#73 Wylbur, Orvyl, Milton, CRBE/CRJE were all used (and sometimes liked) in the past.
https://www.garlic.com/~lynn/2011f.html#75 Wylbur, Orvyl, Milton, CRBE/CRJE were all used (and sometimes liked) in the past.

??? ibm-main mailing list started on bitnet in the 80s ... post of mine from a couple yrs ago
http://www.mail-archive.com/ibm-main@bama.ua.edu/msg79154.html
also archived here:
https://www.garlic.com/~lynn/2008k.html#85

bitnet wiki page
https://en.wikipedia.org/wiki/BITNET

using listserv software
http://www.lsoft.com/corporate/history_listserv.asp

started on EARN (version of bitnet in europe). old email about getting EARN going
https://www.garlic.com/~lynn/2001h.html#email840320

past email mentioning bitnet &/or earn
https://www.garlic.com/~lynn/subnetwork.html#bitnet

listserv was subset of function of the (earlier) toolrun facility developed internally for the internal network ... misc. past posts mentioning internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

I had been blamed for online computer conferencing on the internal network in the late 70s and early 80s. Folklore is that when the executive committee (chairman, ceo, pres, etc) was informed of online computer conferencing (and the internal network), 5 out of 6 wanted to fire me.

From IBM Jargon
Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

DCSS ... when shared segments were implemented in VM

From: lynn@garlic.com (Lynn Wheeler)
Date: 3 May, 2011
Subject: DCSS ... when shared segments were implemented in VM
Blog: z/VM
re:
http://lnkd.in/XmFjAv

recent thread in comp.arch with topic drift on "PIC code" (aka "position independent") ... the base implementation (from which DCSS took a small subset) allowed for position-independent code:
http://groups.google.com/group/comp.arch/browse_thread/thread/45305e22397b01de/8a4fc64796459cd?q=group:comp.arch+insubject:PIC#08a4fc64796459cd

my post in thread:

tss/360 supported "PIC" for 360/67 ... but os/360 conventions didn't.

I had done page-mapped filesystem for cp67/cms (and ported to vm370/cms) some past posts
https://www.garlic.com/~lynn/submain.html#mmap

with semantics that allowed for "PIC" (arbitrary location for both code & data) ... however, much of cms adopted os/360 applications and conventions (using some amount of simulation of os/360 functions). as a result, it prevented effective use of "PIC" w/o a lot of hacking ... misc. past posts mentioning painful hacking to achieve "PIC" with code that was predominately os/360 oriented
https://www.garlic.com/~lynn/submain.html#adcon

....

also archived here
https://www.garlic.com/~lynn/2011f.html#76

--
virtualization experience starting Jan1968, online at home since Mar1970

TSO Profile NUM and PACK

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: TSO Profile NUM and PACK
Newsgroups: bit.listserv.ibm-main
Date: 3 May 2011 10:14:48 -0700
PaulGBoulder@AIM.COM (Paul Gilmartin) writes:
I've long wondered, if sequence numbers are so valuable, why haven't they spread outside the progeny of unit record systems?

cms multi-level source update infrastructure relied on sequence numbers ... it started out with cp67/cms and the cms "update" command, which applied a single update file ... using control statements that inserted, replaced, or deleted based on the sequence numbers of the source file ... the output file typically treated as temporary input for compile/assemble.

at the univ., I was making so many cp67/cms source changes that I created a pre-processor for update ... that added an extra field to the insert&replace control statements (aka "$") that would generate sequence numbers for the new lines (otherwise they had to be manually entered/typed).
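
A toy illustration in C of the general mechanism (deliberately not the exact CMS UPDATE control-statement syntax): the update directives name sequence numbers in the frozen source, and one pass over the sequenced source applies deletes and inserts to produce the temporary output used for assembly; new lines come out unsequenced unless something (like the pre-processor described above) generates numbers for them.

#include <stdio.h>

/* a sequenced source file: text plus the sequence number (cols 73-80 style) */
struct srcline { int seq; const char *text; };

/* toy directives keyed by sequence number: 'D' deletes that line,
   'I' inserts new text after it (a delete+insert acts as a replace) */
struct directive { char op; int seq; const char *text; };

int main(void)
{
    struct srcline source[] = {
        { 100, "         BALR  12,0" },
        { 200, "         USING *,12" },
        { 300, "         LA    1,PARMS" },
        { 400, "         SVC   202" },
    };
    struct directive update[] = {
        { 'D', 300, NULL },
        { 'I', 300, "         LA    1,NEWPARMS" },
        { 'I', 400, "         DC    AL4(1)" },
    };
    size_t nsrc = sizeof source / sizeof source[0];
    size_t nupd = sizeof update / sizeof update[0];

    for (size_t i = 0; i < nsrc; i++) {
        int deleted = 0;
        for (size_t j = 0; j < nupd; j++)
            if (update[j].op == 'D' && update[j].seq == source[i].seq)
                deleted = 1;
        if (!deleted)
            printf("%-64s%08d\n", source[i].text, source[i].seq);
        /* "********" marks an unsequenced new line -- the pre-processor
           described above generated real sequence numbers automatically */
        for (size_t j = 0; j < nupd; j++)
            if (update[j].op == 'I' && update[j].seq == source[i].seq)
                printf("%-64s%s\n", update[j].text, "********");
    }
    return 0;
}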

later in the early 70s ... there was a large "exec" wrapper that supported multiple updates applied in a specified sequence. there was a joint development effort with endicott that added 370 virtual machine simulation to cp67 (which ran on 360/67) ... including new instructions and virtual memory support that had several differences from the 360/67.

There was "base" set of local enhancements to cp67 ... referred to as the "L" updates ... then could apply the "H" updates to provide option for 370 virtual machines (in addition to 360 virtual machines), and then could apply the "I" updates which modified cp67 to run on 370 machine (rather than 360/67).

"cp67i" was running regularly in 370 virtual machine for a year before the first 370 engineering machine with virtual memory hardware support was operational (a 370/145 in endicott) ... in fact, booting "cp67i" on the engineering machine was part of early validation test for the hardware. turns out boot failed ... because of "errors" in the hardware implementation (cp67i was quickly patched to correspond with incorrect hardware ... and then booted successfully).

By the time of vm370/cms, the multi-level update conventions ... had been incorporated into the CMS update command (drastically reducing the exec wrapper) and various editors. Editors were also modified with an option to save edit-session changes as an incremental update file (as opposed to replacing the original file with the changes).

There is folklore that the HASP/JES2 group had moved to the cms source development process ... which resulted in various kinds of problems exporting into the standard POK product release environment.

In the mid-80s, Melinda had asked whether anybody had a copy of the original cp67/cms multi-level update implementation. It turns out that I had a complete set on archived tapes in the Almaden datacenter tape library. Her request was timely, since a couple months later the Almaden datacenter had an operations problem with mounting random tapes as scratch (destroying a large number of tapes, including ones with my archived info from the 70s ... in some cases multiple tapes with replicated copies ... including those with a large amount of cp67/cms files).

old email exchange with Melinda
https://www.garlic.com/~lynn/2006w.html#email850906
https://www.garlic.com/~lynn/2006w.html#email850906b
https://www.garlic.com/~lynn/2006w.html#email850908

Melinda's home page has moved:
http://www.leeandmelindavarian.com/Melinda#VMHist

I had done kindle conversion of her history ... which she now has up:
http://www.leeandmelindavarian.com/Melinda/neuvm.azw

cms update command reference:
http://publib.boulder.ibm.com/infocenter/zos/v1r10/topic/com.ibm.zos.r10.asmk200/ap5cms8.htm

xedit cms command reference (including mention of update option support)
http://publib.boulder.ibm.com/infocenter/zvm/v5r4/topic/com.ibm.zvm.v54.dmsb6/xco.htm

note that unix source update uses "down-dates" ... i.e. the "current" source file includes all changes ... but there are history files that allow changes to be "regressed" to earlier versions. the cms "up-dates" process would freeze the original source (for some period of time) and have a sequence of incremental source updates applied in order to arrive at the most up-to-date file to be compiled/assembled.

misc. past posts mentioning cms update command:
https://www.garlic.com/~lynn/2002g.html#67 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002n.html#39 CMS update
https://www.garlic.com/~lynn/2002p.html#2 IBM OS source code
https://www.garlic.com/~lynn/2003e.html#38 editors/termcap
https://www.garlic.com/~lynn/2003e.html#66 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003f.html#1 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003k.html#47 Slashdot: O'Reilly On The Importance Of The Mainframe Heritage
https://www.garlic.com/~lynn/2004b.html#59 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004g.html#43 Sequence Numbbers in Location 73-80
https://www.garlic.com/~lynn/2005f.html#44 DNS Name Caching
https://www.garlic.com/~lynn/2005i.html#30 Status of Software Reuse?
https://www.garlic.com/~lynn/2005p.html#45 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2006n.html#45 sorting
https://www.garlic.com/~lynn/2006o.html#19 Source maintenance was Re: SEQUENCE NUMBERS
https://www.garlic.com/~lynn/2006o.html#21 Source maintenance was Re: SEQUENCE NUMBERS
https://www.garlic.com/~lynn/2006u.html#26 Assembler question
https://www.garlic.com/~lynn/2006w.html#42 vmshare
https://www.garlic.com/~lynn/2007b.html#7 information utility
https://www.garlic.com/~lynn/2009h.html#48 Book on Poughkeepsie
https://www.garlic.com/~lynn/2009s.html#17 old email
https://www.garlic.com/~lynn/2009s.html#37 DEC-10 SOS Editor Intra-Line Editing
https://www.garlic.com/~lynn/2010k.html#13 Idiotic programming style edicts
https://www.garlic.com/~lynn/2011c.html#3 If IBM Hadn't Bet the Company

--
virtualization experience starting Jan1968, online at home since Mar1970

TSO Profile NUM and PACK

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: TSO Profile NUM and PACK
Newsgroups: bit.listserv.ibm-main
Date: 3 May 2011 11:45:35 -0700
re:
https://www.garlic.com/~lynn/2011f.html#80 TSO Profile NUM and PACK

Note: UNIX traces some of its history back to CTSS by way of MULTICS done on 5th flr of 545 tech sq. VM370/CMS also traces history back to CTSS by way of CP67/CMS and CP40/CMS done (at science center) on 4th flr of 545 tech sq. misc. past posts mentioning science center
https://www.garlic.com/~lynn/subtopic.html#545tech

an example is that UNIX (document formatting) runoff/roff looks very much like CTSS "runoff". the original CMS (document formatting) "script" also looked very much like CTSS "runoff" ... this was before GML was invented at the science center in 1969 and GML tag processing was added to CMS "script". misc. past posts mentioning GML &/or SGML
https://www.garlic.com/~lynn/submain.html#sgml

CTSS reference:
http://www.multicians.org/thvv/7094.html

Discusses some of CTSS relationship to CP/CMS, MULTICS, and UNIX (mentions that TSO is in no way related):
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System

Discusses some of CP/CMS relationship to CTSS
https://en.wikipedia.org/wiki/History_of_CP/CMS

Multics reference
http://www.multicians.org/general.html

Unix and Multics reference
http://www.multicians.org/unix.html

--
virtualization experience starting Jan1968, online at home since Mar1970

Bank email archives thrown open in financial crash report

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 03 May, 2011
Subject: Bank email archives thrown open in financial crash report
Blog: Financial Crime Risk, Fraud and Security
re:
https://www.garlic.com/~lynn/2011f.html#66 Bank email archives thrown open in financial crash report

The visible hand; In MIT talk, Eliot Spitzer defends role of government in regulating markets, claims economy still 'on the precipice' of deep problems.
http://web.mit.edu/newsoffice/2011/spitzer-talk-0428.html

Did Deutsche Bank Really Hide Its Bad Audits In A Closet?
http://blogs.forbes.com/halahtouryalai/2011/05/03/did-deutsche-bank-really-hide-its-bad-audits-in-a-closet/

Justice Department, SEC Probing Senate Findings on Goldman
http://www.bloomberg.com/news/2011-05-03/levin-report-accusing-goldman-of-deception-referred-to-u-s-justice-sec.html

TV business news was hitting the Goldman item this morning ... somewhat about whether the SEC will actually follow through with anything significant about Goldman (in contrast to doing very little for the past decade).

There are also some references that the house may be able to significantly hinder the SEC from doing anything by cutting its budget (regardless of what the senate is trying to accomplish)
http://dealbook.nytimes.com/2011/05/03/u-s-regulators-face-budget-pinch-as-mandates-widen/

In the Madoff hearings, the person who had tried unsuccessfully for a decade to get the SEC to do anything about Madoff mentioned that part of the SEC's problem was that they are all lawyers with little training in financial forensics (no skill in building a case). He was also asked about the need for new regulation. The comment was that while new regulation may be needed, much more important was transparency and visibility (regulation and enforcement become much more difficult ... and costs go through the roof ... when there isn't transparency and visibility). This is similar to other testimony about costs/regulation going sky-high when business processes are misaligned and people are motivated to do the wrong thing. Changing the environment and providing transparency and visibility drastically reduce the cost and amount of regulation.

--
virtualization experience starting Jan1968, online at home since Mar1970

program coding pads

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: program coding pads
Newsgroups: alt.folklore.computers
Date: Wed, 04 May 2011 10:20:18 -0400
ArarghMail105NOSPAM writes:
You can't get a leased line T1/E1 connection? Those usually have some sort of QOS enforcement, or at least they used to. Haven't looked at it in a few years.

If I could afford it, I would choose a T1 over any sort of DSL, any day.


decade old post about T1 only costing $1200/month
https://www.garlic.com/~lynn/2000f.html#49 Al Gore and the Internet (Part 2 of 2)

reference to after a move (not long after above post) ... and not being able to get either cable or dsl ... being told that they would be happy to install T1 frame-relay at $1200/month.
https://www.garlic.com/~lynn/2008d.html#94 The Economic Impact of Stimulating Broadband Nationally

--
virtualization experience starting Jan1968, online at home since Mar1970

program coding pads

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: program coding pads
Newsgroups: alt.folklore.computers
Date: Wed, 04 May 2011 11:08:56 -0400
scott@slp53.sl.home (Scott Lurndal) writes:
Why? Even bonding two T1's only yields 3Mbits/sec. My DSL yields 6Mbits/sec down. Sure, T1 is symmetric, but the 33% boost over my 750Kbits/sec uplink isn't worth the reduced downlink speed. My DSL has been rock solid (no outages) since 2004.

re:
https://www.garlic.com/~lynn/2011f.html#83 program coding pads

I frequently see a mbyte/sec download ... although 600kbyte/sec is more normal (but it is hard to tell whether it is constrained at my end ... or the server is hitting load) ... all for $33/month ... compared to $1200/month for a 150kbyte/sec download. T1 works out to a little less than $1000/month per 100kbyte/sec ... compared to cable at about $5/month per 100kbyte/sec.
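
working the arithmetic out (using the approximate figures above):

#include <stdio.h>

int main(void)
{
    /* approximate figures from the post */
    double t1_cost = 1200.0, t1_kbytes = 150.0;     /* T1 frame relay: $/month, kbyte/sec */
    double cable_cost = 33.0, cable_kbytes = 600.0; /* typical cable download */

    printf("T1:    $%.0f / %.0f kbyte/sec = $%.0f per month per 100kbyte/sec\n",
           t1_cost, t1_kbytes, 100.0 * t1_cost / t1_kbytes);
    printf("cable: $%.0f / %.0f kbyte/sec = $%.1f per month per 100kbyte/sec\n",
           cable_cost, cable_kbytes, 100.0 * cable_cost / cable_kbytes);
    return 0;
}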

--
virtualization experience starting Jan1968, online at home since Mar1970

SV: USS vs USS

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: SV: USS vs USS
Newsgroups: bit.listserv.ibm-main
Date: 4 May 2011 15:01:28 -0700
mike.a.schwab@GMAIL.COM (Mike Schwab) writes:

https://en.wikipedia.org/wiki/IBM_AIX
IBM wrote TSS/370 in 1980 then VM/IX then AIX/370 in 1988 then AIX/ESA until 1999 when it merged into MVS/ESA Open Edition.


tss/360 was done in the 60s (the official system for the 360/67) ... it was decommitted and lived on as a small special project. some of the single-level-store (paged-mapped filesystem) ideas were picked up for the (failed) future system effort ... misc. past posts mentioning future system
https://www.garlic.com/~lynn/submain.html#futuresys

folklore is that after the demise of future system, some of the participants retreated to rochester and did the s/38 ... which then morphs into the as/400

in the 80s, tss/370 got something of a new life ... as the base for a special-bid mainframe unix for AT&T ... a stripped-down tss/370 kernel (SSUP) with AT&T layering unix interfaces on top of the SSUP kernel interface (in some sense somewhat analogous to USS for MVS). this was competing with Amdahl's GOLD/UTS unix internally inside AT&T.

AIX/370 (in conjunction with AIX/386) was done by the palo alto group using the unix-like LOCUS done at UCLA. This was similar to, but different from, the unix-like MACH done at CMU ... which was used by a number of vendors including NeXT and morphs into the current Apple operating system after Jobs returns to Apple. AIX/370 morphs into AIX/ESA.

The "argument" for (Amdahl) UTS under vm370, aix/370 under vm370, tss/370 ssup, and vm/ix (on vm370) was that the cost to add mainframe RAS&erep to unix was several times larger than the base, direct, straight-forward unix port (running under vm370 &/or tss/370 leveraged the already existing ras&erep support w/o having to re-implement directly in unix). This was aggravated by field service stand that it wouldn't service/support machines that lacked mainframe RAS&erep.

I ran an internal advanced technology conference in '82 ... and some of the presentations were about the VM/IX implementation ... old post reference:
https://www.garlic.com/~lynn/96.html#4a

The Palo Alto group had also been working with Berkeley to port their unix-like BSD to the mainframe ... but they got redirected to instead do a PC/RT port ... released from ACIS as "AOS" ... as an alternative UNIX to the "official" AIXV2.

The wiki page says much of the AIX v2 kernel was written in PL/I. The issue was that the original "displaywriter" was based on ROMP, cp.r, and PL.8 (a sort of pli subset). Being redirected to the unix workstation market required unix&C (all being done by the company that had done pc/ix and had been involved in vm/ix). For the internal people, a project called VRM was devised ... a sort of abstract virtual machine layer ... to be done by the internal employees trained in pl.8. The claim was that the combination of VRM plus a unix port to VRM ... could be done in a shorter time and with less resources than a unix port directly to ROMP hardware. The exact opposite was shown when the palo alto group did the BSD port directly to ROMP hardware (for "AOS"). VRM+unix drastically increased original/total development costs and life-cycle support costs and complicated things like new device drivers (since both a non-standard unix/c device driver to the VRM interface as well as a VRM/pl.8 device driver had to be developed & supported). misc. past posts mentioning 801, romp, rios, pc/rt, aixv2, aixv3, power, rs/6000, etc
https://www.garlic.com/~lynn/subtopic.html#801
misc. old email mentioning 801
https://www.garlic.com/~lynn/lhwemail.html#801

Besides various other issues, the AIX wiki page skips over the whole generation of OSF
https://en.wikipedia.org/wiki/Open_Software_Foundation
and the "unix wars"
https://en.wikipedia.org/wiki/UNIX_wars

Project Monterey
https://en.wikipedia.org/wiki/Project_Monterey

It also skips over the whole cluster scale-up effort after IBM bought Sequent, and support for Sequent's 256-way SCI-based Numa-Q. Recent posts in the (linkedin) "Greater IBM" (current & former IBMers) discussion
https://www.garlic.com/~lynn/2011d.html#7 IBM Watson's Ancestors: A Look at Supercomputers of the Past

the sequent wiki ... mentioned in the above post ... used to be somewhat more caustic about sequent being dropped shortly after the sponsoring executive retired:
https://en.wikipedia.org/wiki/Sequent_Computer_Systems

as noted in the "Greater IBM" post ... at one time, IBM had been providing quite a bit of funding for Chen's Supercomputer ... Sequent later acquires Chen Supercomputer and Chen becomes CTO at Sequent ... we do some consulting for Chen (before Sequent purchase by IBM).

Part of the speculation for IBM's purchase of Sequent was that Sequent was major platform for some of the IBM mainframe simulator products.

much of the "posix" (aka unix) support in MVS during the first half of the 90s was sponsored by the head of the disk division software group. in the late 80s, a senior disk engineer got a talk scheduled at the internal, annual, world-wide communication group conference ... and opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division (because the strangle-hold that the communication group had on datacenters). Large amounts of data was fleeing datacenters to more distributed computing friendly platforms. The disk division had attempted to come out with traditional products to address the problem ... but they were constantly blocked by the communication group. As a result, there were doing all sorts of things "outside-the-box" to try and work around the communication group's roadblocks. the head of the disk division software group would periodically ask us to consult on some of the efforts.

for other drift, a recent thread in comp.arch about tss/360 supporting "position independent code" (i.e. it was possible to directly map a disk image into virtual memory at any arbitrary virtual address w/o having to perform any link-edit modifications to the contents of that image) ... and the horrendous problems attempting to do anything similar using anything from the os/360 genre:
https://www.garlic.com/~lynn/2011f.html#76 PIC code, RISC versus CISC
also referenced:
https://www.garlic.com/~lynn/2011f.html#79 DCSS ... when shared segments were implemented in VM

misc. past posts mentioning (tss/370) ssup:
https://www.garlic.com/~lynn/2004q.html#37 A Glimpse into PC Development Philosophy
https://www.garlic.com/~lynn/2005b.html#13 Relocating application architecture and compiler support
https://www.garlic.com/~lynn/2005c.html#20 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#61 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005s.html#34 Power5 and Cell, new issue of IBM Journal of R&D
https://www.garlic.com/~lynn/2006f.html#26 Old PCs--environmental hazard
https://www.garlic.com/~lynn/2006g.html#2 The Pankian Metaphor
https://www.garlic.com/~lynn/2006m.html#30 Old Hashing Routine
https://www.garlic.com/~lynn/2006p.html#22 Admired designs / designs to study
https://www.garlic.com/~lynn/2006t.html#17 old Gold/UTS reference
https://www.garlic.com/~lynn/2007.html#38 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2007b.html#3 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2007k.html#43 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007m.html#69 Operating systems are old and busted
https://www.garlic.com/~lynn/2008e.html#1 Migration from Mainframe to othre platforms - the othe bell?
https://www.garlic.com/~lynn/2008e.html#49 Any benefit to programming a RISC processor by hand?
https://www.garlic.com/~lynn/2008l.html#82 Yet another squirrel question - Results (very very long post)
https://www.garlic.com/~lynn/2008r.html#21 What if the computers went back to the '70s too?
https://www.garlic.com/~lynn/2010c.html#43 PC history, was search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010e.html#17 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
https://www.garlic.com/~lynn/2010e.html#72 Entry point for a Mainframe?
https://www.garlic.com/~lynn/2010i.html#28 someone smarter than Dave Cutler
https://www.garlic.com/~lynn/2010o.html#0 Hashing for DISTINCT or GROUP BY in SQL

--
virtualization experience starting Jan1968, online at home since Mar1970

Bank email archives thrown open in financial crash report

From: lynn@garlic.com (Lynn Wheeler)
Date: 04 May, 2011
Subject: Bank email archives thrown open in financial crash report
Blog: Financial Crime Risk, Fraud and Security
re:
https://www.garlic.com/~lynn/2011f.html#66 Bank email archives thrown open in financial crash report
https://www.garlic.com/~lynn/2011f.html#82 Bank email archives thrown open in financial crash report

also in the area of transparency and visibility: The Dodd-Frank Act will demand data transparency:
http://searchdatamanagement.techtarget.com/news/2240035386/The-Dodd-Frank-Act-could-mean-a-data-management-mess-for-some

one of the comments in the above ... is that for some institutions the change to transparency is going to make Y2K look like a walk in the park ... some rhetoric that it is effectively an onerous penalty for these institutions (akin to comments about how onerous the sarbanes-oxley audits were supposed to be ... but it turned out to be more like a reward to the audit industry for having done enron ... since SEC didn't do anything ... apparently prompting GAO to do reports finding an uptick in fraudulent public company financial filings). with regard to Y2K ... there is folklore that one of the too-big-to-fail institutions had outsourced its Y2K work to the lowest bidder ... later finding it was a front for an ethnic organized crime operation ... and that the code contained all sorts of clandestine wire transfers.

--
virtualization experience starting Jan1968, online at home since Mar1970

Gee... I wonder if I qualify for "old geek"?

From: lynn@garlic.com (Lynn Wheeler)
Date: 6 May, 2011
Subject: Gee... I wonder if I qualify for "old geek"?
Blog: Old Geek Registry
re:
http://lnkd.in/XRx4Ec

They let me wander around the san jose plant site ... even though I was in bldg. 28. One of the things I noticed was that the development dasd "testcells" were running scheduled "stand-alone" testing on various mainframes. They had tried putting up MVS on those mainframes to allow multiple, concurrent, on-demand testing ... but found MVS had a 15min MTBF in that environment (system hung and/or failed, requiring reboot).

I offered to rewrite the I/O supervisor to make it bullet-proof and never fail ... so they could have on-demand, concurrent testing at any time, significantly improving productivity. A side effect was that the mainframes in bldg. 14&15 became available for other uses also ... since the testcell testing consumed very little processor resource.

Now, bldg. 28 had a 370/195 running under MVT ... but even high priority work could have several-week turn-around. One of the applications was "air-bearing simulation" for the design of disk "floating heads". Bldg. 15 got one of the first engineering 3033s for disk testing. The 370/195 had peak thruput around 10mips ... but normal codes ran closer to 5mips. By comparison, the 3033 ran around 4.5mips (basically 1.5 times a 168-3). All the disk testing consumed only a percent or two of the 3033 ... so there were lots of idle cycles to do other things. One of the things set up was to get the air-bearing simulation that ran on the 370/195 MVT system (with several-week turn-around) executing on the 3033 in bldg. 15 (where they could possibly get several turn-arounds a day).
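A back-of-the-envelope sketch of that comparison, using only the figures in this post (the 168-3 rating is inferred from the "1.5 times a 168-3" remark and is an assumption, not a measured number):

/* rough throughput comparison from the figures above; not a benchmark */
#include <stdio.h>

int main(void)
{
    double mips_195_peak   = 10.0;             /* 370/195 peak, carefully pipelined code */
    double mips_195_normal =  5.0;             /* more typical codes on the 195 */
    double mips_168_3      =  3.0;             /* assumed 168-3 rating implied by "1.5 times" */
    double mips_3033       = 1.5 * mips_168_3; /* ~4.5 mips, per the post */

    printf("3033 vs 195 (normal code): %.0f%%\n", 100.0 * mips_3033 / mips_195_normal);
    printf("3033 vs 195 (peak):        %.0f%%\n", 100.0 * mips_3033 / mips_195_peak);

    /* per-run speed is roughly comparable for ordinary code, while turn-around
       drops from weeks (queued behind the 195's MVT workload) to possibly
       several runs a day on a 3033 that is otherwise only a few percent busy */
    return 0;
}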

Later, after STL was getting production 3033s ... there was investigation into also running the air-bearing simulation on their offshift (effectively idle) 3033s ... old email from long ago and far away (although the stl datacenter had a traditional "billing" program which had to be handled):

Date: 05/29/80 17:28:21
To: wheeler

xxxxxx says he sees no reason why the air bearing problem couldn't run on third shift STLVM1 or 3 ... both 3033s and both now up on third shift according to him. If true, guess that leaves the settling on the bill....


... snip ... top of post, old email index

The bldg. 14&15 mainframe use was totally off any datacenter "accounting" grid ... since they were theoretically "stand-alone" testing machines. misc. past posts mentioning getting to play disk engineer in bldgs. 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

Date: 06/04/80 18:06:51
From: wheeler

I've been asked to look at the Air Bearing program tomorrow to see if I can identify anything for performance improvements.


... snip ... top of post, old email index

Date: 06/05/80 07:53:36
From: wheeler

re: air bearing; Checking pli manual, ENV(TOTAL) can't be used with VBS records, almost anything else but VBS. I forget, but what is O/S close option/equivalent to CMS TCLOSE, i.e. checkpoint file, but leave it open'ed. The only other two that come to mind 'quickly' is REORDER in PLI, and specifying FORTRAN and redo some of the program as FORTQ subroutine.


... snip ... top of post, old email index

--
virtualization experience starting Jan1968, online at home since Mar1970

Court OKs Firing of Boeing Computer-Security Whistleblowers

From: lynn@garlic.com (Lynn Wheeler)
Date: 06 May, 2011
Subject: Court OKs Firing of Boeing Computer-Security Whistleblowers
Blog: Financial Crime Risk, Fraud and Security
Court OKs Firing of Boeing Computer-Security Whistleblowers
http://www.wired.com/threatlevel/2011/05/whistleblower-firings/

from above:
Two Boeing auditors were legally fired after they exposed to the press internal documents suggesting the aerospace and military contractor lacked computer-security safeguards, a federal appeals court ruled Tuesday.

... snip ...

also

"Members of the media are not included."
https://financialcryptography.com/mt/archives/001313.html

In the past I had commented that possibly the only significant part of SOX was about informants. In the congressional Madoff hearings, the person who had unsuccessfully tried for a decade to get the SEC to do something about Madoff commented that tips turn up 13 times more fraud than audits (and the SEC didn't have a tip line ... but did have a 1-800 line to complain about audits).

misc. past posts mentioning tips & audits:
https://www.garlic.com/~lynn/2011.html#46 What do you think about fraud prevention in the governments?
https://www.garlic.com/~lynn/2011.html#48 What do you think about fraud prevention in the governments?
https://www.garlic.com/~lynn/2011e.html#56 In your opinon, what is the highest risk of financial fraud for a corporation ?
https://www.garlic.com/~lynn/2011f.html#62 Mixing Auth and Non-Auth Modules

--
virtualization experience starting Jan1968, online at home since Mar1970

The first personal computer (PC)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The first personal computer (PC)
Newsgroups: alt.folklore.computers
Date: Fri, 06 May 2011 13:06:11 -0400
3yrs ago, I got a deskside database server machine, latest 4core @3ghz, 8gbyte memory, a couple tbyte disks, etc. This week, I got a deskside database server machine, latest 4core @3.4ghz (also has hyperthreading), 16gbyte memory, a couple tbyte disks, etc. The new machine is less than 1/3rd the price of the old machine.

Was at the dentist yesterday; he said he recently had his office PCs (a few yrs old, "professional XP") looked at, and was told that PCs had become disposable technology ... less expensive to replace all the old machines than to try to upgrade them.

--
virtualization experience starting Jan1968, online at home since Mar1970

CFTC Limits on Commodity Speculation May Wait Until Early 2012

From: lynn@garlic.com (Lynn Wheeler)
Date: 06 May, 2011
Subject: CFTC Limits on Commodity Speculation May Wait Until Early 2012
Blog: Financial Crime Risk, Fraud and Security
CFTC Limits on Commodity Speculation May Wait Until Early 2012
http://www.bloomberg.com/news/2011-01-18/cftc-limits-on-commodity-speculation-may-wait-until-early-2012.html

from above:
The Commodity Futures Trading Commission may not complete limits on commodity speculation until the first quarter of next year, according to a filing on the agency's website.

... snip ...

"Griftopia" covered that the previous spike in oil prices over $100 was result of "19" secret letters allowing entities w/o substantial positions to play. Previously there was rule that excluded entities w/o substantial positions because (their speculation) resulted in wild, irrational price swings. Today, TV business news had quite a bit on commodity speculation wasn't something for individuals to play in because of the wild, irrational price swings. There is temptation to draw analogy with pump&dump tactics.

misc. past posts/reference to griftopia &/or secret letters
https://www.garlic.com/~lynn/2010o.html#59 They always think we don't understand
https://www.garlic.com/~lynn/2010p.html#6 What banking is. (Essential for predicting the end of finance as we know it.)
https://www.garlic.com/~lynn/2010p.html#7 What banking is. (Essential for predicting the end of finance as we know it.)
https://www.garlic.com/~lynn/2010p.html#50 TCM's Moguls documentary series
https://www.garlic.com/~lynn/2010p.html#57 TCM's Moguls documentary series
https://www.garlic.com/~lynn/2011.html#53 What do you think about fraud prevention in the governments?
https://www.garlic.com/~lynn/2011.html#55 America's Defense Meltdown
https://www.garlic.com/~lynn/2011b.html#59 Productivity And Bubbles
https://www.garlic.com/~lynn/2011d.html#21 The first personal computer (PC)

--
virtualization experience starting Jan1968, online at home since Mar1970



