List of Archived Posts

2008 Newsgroup Postings (04/12 - 05/17)

independent appraisers
subprime write-down sweepstakes
The original telnet specification?
America's Prophet of Fiscal Doom
You won't guess who's the bad guy of ID theft
TRANSLATE inst with DAT on
The Return of Ada
Xephon, are they still in business?
Xephon, are they still in business?
Using Military Philosophy to Drive High Value Sales
3277 terminals and emulators
What would be a future of technical blogs ? I am wondering what kind of services readers expect to get from a technical blog in the next 10 years
The Return of Ada
independent appraisers
How fast is XCF
Two views of Microkernels (Re: Kernels
The Return of Ada
handling the SPAM on this group
The Return of Ada
IT full of 'ducks'? Declare open season
The Return of Ada
handling the SPAM on this group
To the horror of some in the Air Force
Toyota takes 1Q world sales lead from General Motors
IBM's Webbie World
The Return of Ada
Two views of Microkernels (Re: Kernels
The Return of Ada
Two views of Microkernels (Re: Kernels
subprime write-down sweepstakes
DB2 & z/OS Dissertation Research
Stanford University Network (SUN) 3M workstation
VTAM R.I.P. -- SNATAM anyone?
subprime write-down sweepstakes
Two views of Microkernels (Re: Kernels
Two views of Microkernels (Re: Kernels
Two views of Microkernels (Re: Kernels
Two views of Microkernels (Re: Kernels
Fixed-Point and Scientific Notation
Boyd again
IT vet Gordon Bell talks about the most influential computers
3277 terminals and emulators
IT vet Gordon Bell talks about the most influential computers
The Return of Ada
handling the SPAM on this group
Two views of Microkernels (Re: Kernels
How can companies decrease power consumption of their IT infrastructure?
Whitehouse Emails Were Lost Due to "Upgrade"
Microsoft versus Digital Equipment Corporation
subprime write-down sweepstakes
subprime write-down sweepstakes
Microsoft versus Digital Equipment Corporation
subprime write-down sweepstakes
Microsoft versus Digital Equipment Corporation
Why 'pop' and not 'pull' the complementary action to 'push' for a stack
Microsoft versus Digital Equipment Corporation
independent appraisers
Long running Batch programs keep IMS databases offline
our Barb: WWII
Microsoft versus Digital Equipment Corporation
Microsoft versus Digital Equipment Corporation
Two views of Microkernels (Re: Kernels
Up, Up, ... and Gone?
Microsoft versus Digital Equipment Corporation
how can a hierarchical mindset really facilitate inclusive and empowered organization
New test attempt
Is a military model of leadership adequate to any company, as far as it based most on authority and discipline?
Microsoft versus Digital Equipment Corporation
New test attempt
New test attempt
New test attempt
New test attempt
Mainframe programming vs the Web
SSL certificates - from a customer's point of view (trust)
Microsoft versus Digital Equipment Corporation
Microsoft versus Digital Equipment Corporation
Microsoft versus Digital Equipment Corporation
Microsoft versus Digital Equipment Corporation
Microsoft versus Digital Equipment Corporation
Microsoft versus Digital Equipment Corporation
Microsoft versus Digital Equipment Corporation
Microsoft versus Digital Equipment Corporation
Microsoft versus Digital Equipment Corporation
New test attempt
Java; a POX
Microsoft versus Digital Equipment Corporation
Mainframe programming vs the Web
What mode of payment you could think of with the advent of time?
New test attempt
Annoying Processor Pricing
Credit Crisis Timeline
subprime write-down sweepstakes
Microsoft versus Digital Equipment Corporation
Annoying Processor Pricing
Annoying Processor Pricing
Microsoft versus Digital Equipment Corporation
Old hardware
Old hardware
Is virtualization diminishing the importance of OS?

independent appraisers

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: independent appraisers
Newsgroups: alt.folklore.computers
Date: Sat, 12 Apr 2008 19:44:45
Anne & Lynn Wheeler <lynn@garlic.com> writes:
Bank Write-Downs: No End Yet
http://www.time.com/time/business/article/0,8599,1727462,00.html

from above:
UBS, of course, is hardly alone. It may be the current banking leader of the write-down scorecard, but the implosion of the U.S. subprime mortgage market and general deflating of home prices has hit Wall Street all around. Merrill Lynch, which ousted CEO Stan O'Neal in October, has written down some $25 billion worth of assets. Citigroup, which booted CEO Chuck Prince in November, is approaching $24 billion. On April 1, the same day as UBS, Deutsche Bank declared another $4 billion write-down. Across the board, banks are out some $200 billion since the beginning of 2007.


... snip ...

... also:
Then there is the Federal Reserve, which has started lending directly to investment banks (which have very happily borrowed) in order to instill confidence in the system. And if the federal government will take mortgage-backed paper as collateral, how bad could it be, really?

The answer, it seems, is worse.


... snip ...

re:
https://www.garlic.com/~lynn/2008g.html#65 independent appraisers
https://www.garlic.com/~lynn/2008g.html#66 independent appraisers
https://www.garlic.com/~lynn/2008g.html#67 independent appraisers

Central Bankers Say Crisis Not Over, Urge Regulation
http://www.bloomberg.com/apps/news?pid=20601087&sid=aXL58O.8xf1M&refer=home

from above:
Capital markets have seized up in the aftermath of $245 billion in asset writedowns and credit losses tied to the collapse of the U.S. subprime mortgage market. Finance ministers and central bankers from the Group of Seven nations yesterday endorsed a series of proposals from the Financial Stability Forum including a 100-day action plan to strengthen market regulation.

... snip ...

G-7 Signals Concern on Dollar's Slide, Weaker Growth
http://www.bloomberg.com/apps/news?pid=20601087&sid=a7Yh8jULL1W8&refer=worldwide

from above:
The officials met after the International Monetary Fund this week estimated a 25 percent chance of a global recession this year. A collapse in the market for U.S. subprime mortgages has pushed the U.S. toward its first contraction in seven years and prompted banks to shun lending after $245 billion of asset writedowns and credit losses since the start of 2007.

... snip ...

and ...
https://www.garlic.com/~lynn/2008g.html#57 Credit crisis could cost nearly $1 trillion, IMF predicts
https://www.garlic.com/~lynn/2008g.html#59 Credit crisis could cost nearly $1 trillion, IMF predicts
https://www.garlic.com/~lynn/2008g.html#62 Credit crisis could cost nearly $1 trillion, IMF predicts

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

subprime write-down sweepstakes

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: subprime write-down sweepstakes
Newsgroups: alt.folklore.computers
Date: Sun, 13 Apr 2008 10:11:53
Anne & Lynn Wheeler <lynn@garlic.com> writes:
recent posts mentioning business tv shows ridiculing both UBS and Citigroup
https://www.garlic.com/~lynn/2008g.html#12 independent appraisers
https://www.garlic.com/~lynn/2008g.html#32 independent appraisers
https://www.garlic.com/~lynn/2008g.html#36 Lehman sees banks, others writing down $400 bln
https://www.garlic.com/~lynn/2008g.html#51 IBM CEO's remuneration last year ?


re:
https://www.garlic.com/~lynn/2008g.html#64 independent appraisers
https://www.garlic.com/~lynn/2008g.html#66 independent appraisers
https://www.garlic.com/~lynn/2008g.html#67 independent appraisers
https://www.garlic.com/~lynn/2008h.html#0 independent appraisers

Citigroup, Merrill May Post $15 Billion Writedowns, Times Says
http://www.bloomberg.com/apps/news?pid=20601087&sid=a14SC3UVha.4&refer=home

from above:
Citigroup will have $10 billion of writedowns, taking its first-quarter loss to about $3 billion, the newspaper said. Some analysts say the Citigroup writedowns may stretch to $12 billion, it said. Merrill may have a $5 billion writedown, taking it to a $2.7 billion loss, the report said.

... snip ...

which will put Citigroup ahead ($35b?) in the write-down sweepstakes

U.S., Europe Warn of Further 'Bad News'; Strengthen Regulation
http://www.bloomberg.com/apps/news?pid=20601087&sid=ac5LB3Jb7nHk&refer=home

from above:
The collapse of the U.S. subprime-mortgage market led to a seizing up in capital markets and has triggered $245 billion in asset writedowns and losses since the start of 2007. Finance ministers and central bankers from the Group of Seven are trying to strengthen market regulation and want banks to speed disclosure of losses and improve the way they value assets.

... snip ...

decade old post mentioning S&L crisis, issues with valuation of mortgage-backed securities, & citibank, two decades ago, needing infusion of private equity to stay afloat (because of problems with variable rate mortgages)
https://www.garlic.com/~lynn/aepay3.htm#riskm

past posts mentioning toxic CDOs designed to obfuscate value of subprime mortgages and other credit-backed instruments.
https://www.garlic.com/~lynn/2008f.html#71 Bush - place in history
https://www.garlic.com/~lynn/2008g.html#2 Bush - place in history
https://www.garlic.com/~lynn/2008g.html#4 CDOs subverting Boyd's OODA-loop
https://www.garlic.com/~lynn/2008g.html#16 independent appraisers
https://www.garlic.com/~lynn/2008g.html#32 independent appraisers
https://www.garlic.com/~lynn/2008g.html#44 Fixing finance
https://www.garlic.com/~lynn/2008g.html#51 IBM CEO's remuneration last year ?
https://www.garlic.com/~lynn/2008g.html#57 Credit crisis could cost nearly $1 trillion, IMF predicts
https://www.garlic.com/~lynn/2008g.html#59 Credit crisis could cost nearly $1 trillion, IMF predicts
https://www.garlic.com/~lynn/2008g.html#64 independent appraisers

other posts referencing repeal of Glass-Steagall allowing unregulated investment banking activities to contaminate safety&soundness of regulated banking
https://www.garlic.com/~lynn/2008b.html#12 Computer Science Education: Where Are the Software Engineers of Tomorrow?
https://www.garlic.com/~lynn/2008c.html#11 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#87 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008d.html#85 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008e.html#42 Banks failing to manage IT risk - study
https://www.garlic.com/~lynn/2008e.html#59 independent appraisers
https://www.garlic.com/~lynn/2008f.html#1 independent appraisers
https://www.garlic.com/~lynn/2008f.html#13 independent appraisers
https://www.garlic.com/~lynn/2008f.html#17 independent appraisers
https://www.garlic.com/~lynn/2008f.html#43 independent appraisers
https://www.garlic.com/~lynn/2008f.html#46 independent appraisers
https://www.garlic.com/~lynn/2008f.html#53 independent appraisers
https://www.garlic.com/~lynn/2008f.html#73 Bush - place in history
https://www.garlic.com/~lynn/2008f.html#75 Bush - place in history
https://www.garlic.com/~lynn/2008f.html#79 Bush - place in history
https://www.garlic.com/~lynn/2008f.html#94 Bush - place in history
https://www.garlic.com/~lynn/2008f.html#96 Bush - place in history
https://www.garlic.com/~lynn/2008f.html#97 Bush - place in history
https://www.garlic.com/~lynn/2008g.html#52 IBM CEO's remuneration last year ?

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

The original telnet specification?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The original telnet specification?
Newsgroups: comp.protocols.tcp-ip
Date: Sun, 13 Apr 2008 16:07:08
Andrew Smallshaw <andrews@sdf.lonestar.org> writes:
The earliest specification is probably RFC 318/NIC 9348 but as that notes it is simply a description of the existing protocol which up until that point had not been officially documented. More recent RFCs build on it, in particular RFC 854, but that is in itself not complete.


97 First cut at a proposed Telnet Protocol, Melvin J., Watson R., 1971/02/15 (10pp) (.pdf=403375) (Ref'ed By 3675, 5198)

my rfc index:
https://www.garlic.com/~lynn/rfcietff.htm

in the RFCs listed by section, click on Term (term->RFC#) and then scroll down to "telnet"

the "oldest" listed is:
15
Network subsystem for time sharing hosts, Carr C., 1969/09/25 (8pp) (.txt=10807)


as always, clicking on the ".txt=nnn" (or ".pdf=nnn") field retrieves the actual rfc. from above:
In addition to user program access, a convenient means for direct network access from the terminal is desirable. A sub-system called "Telnet" is proposed which is a shell program around the network system primitives, allowing a teletype or similar terminal at a remote host to function as a teletype at the serving host.

... snip ...

as noted, RFC97 is now referenced by RFC3675 and RFC5198 (when I generate summaries, I'm now doing both forward & backward refs). RFC5198 has an Appendix A, History & Context
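
deriving the backward ("Ref'ed By") links from the forward references is a simple index inversion; a minimal sketch (the RFC numbers are just the sample from this post, not the full index):

```python
from collections import defaultdict

# forward refs taken from this post: RFC 97 is referenced by 3675 and 5198
refs = {
    "3675": ["97"],
    "5198": ["97", "854"],
}

# invert to get the backward ("Ref'ed By") index
refed_by = defaultdict(list)
for rfc, targets in refs.items():
    for target in targets:
        refed_by[target].append(rfc)
```

refed_by["97"] is then ["3675", "5198"], matching the "Ref'ed By" field in the RFC 97 index entry above.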

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

America's Prophet of Fiscal Doom

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: America's Prophet of Fiscal Doom
Newsgroups: alt.folklore.computers
Date: Mon, 14 Apr 2008 06:41:44
interview with the (US Federal) comptroller general (who recently stepped down)

America's Prophet of Fiscal Doom
http://www.usnews.com/articles/business/economy/2008/04/11/americas-prophet-of-fiscal-doom.html

from above:
Second, in the current subprime situation, there was a lack of adequate transparency as to the magnitude of these transactions and the nature of the risk.... You have the exact same thing with regard to the federal government's off-balance-sheet obligations. The problem is not current deficits and debt levels. The problem is where we're headed in the $44 trillion-plus in unfunded obligations for Social Security and Medicare that's growing $2 trillion plus a year.... Cash is key. We are already negative cash flow for Medicare. We're going to go negative cash flow for Social Security within the next 10 years...though Social Security is not the real problem. It's healthcare that's going to bankrupt the country.

... snip ...

decade old post also mentioning off-balance-sheet obligations
https://www.garlic.com/~lynn/aepay3.htm#riskm

some related x-over response to this thread
https://www.garlic.com/~lynn/2008f.html#99 The Workplace War for Age and Talent
https://www.garlic.com/~lynn/2008g.html#1 The Workplace War for Age and Talent
https://www.garlic.com/~lynn/2008g.html#5 The Workplace War for Age and Talent

in a different discussion group:
> One of those unknowns is the incipient collapse of the health care > system for baby boomers. The number of geriatric physicians being > trained is currently decreasing (at least in my State), and based on > the experience my family just had with a long illness, it's not > going to be pretty.

whether baby boomers have a much longer life-span ... and effectively spend a lot longer as retirees, driving up the avg. number of retirees ... is somewhat a 2nd order effect.

The 1st order effect is that baby boomers represent a large population bubble. As workers ... they provided a large revenue base to support the much smaller prior generation of retirees. Given the existing financial funding for retirees ... they represent an enormous drain on the following, smaller generation.

There are some numbers showing that for baby boomers 50 or older ... the avg. life expectancy for both men and women is now 78 ... up from 72 for the prior generation. That contributes to the avg. number of retirees ... over and above their absolute numbers. Some of the projected medical expenses have to do with extending past 78.

This is coming at a time when the country is feeling increasing effects of global competition. There has been all sorts of quibbling about numbers showing a decline in education & skill level over the last 40-50 yrs. From the standpoint of current global competitiveness ... those statistics can be completely ignored ... just look at the country's education level currently ranking 29 out of 30 industrial countries (unrelated to whether or not SAT scores have risen or fallen over the last 50yrs).

Back to the original article on the size of the worker base ... if the overall number of workers is being cut nearly in half (compared to the big baby boom worker bubble) ... then it can be expected that, on avg., all categories of workers are going to see a decline of 50 percent ... which would extend also to geriatric physicians.

There have been articles about the retiring baby boomers starting to affect nearly all economic areas. One article was that oil field development projects take an avg. of seven years and the number of such projects is about 50 percent of what might be expected ... directly attributed to the expected retirement of baby boomers and not having enough experienced workers to finish a larger number of such projects.

During congressional hearings on H1B visas ... one of the congressmen raised the question of whether or not there should be an educational-level requirement placed on general immigrants (the person giving testimony responded that it was totally outside the issue of H1B visas ... the numbers of which aren't even a tiny blip on the total number of immigrants).


... snip ...

comptroller general was appointed in the 90s for a 15yr term; he stepped down in jan. ... past posts mentioning the comptroller general (some quotes that nobody in congress for the last 50 yrs has been capable of middle school arithmetic)
https://www.garlic.com/~lynn/2006f.html#41 The Pankian Metaphor
https://www.garlic.com/~lynn/2006f.html#44 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#9 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#14 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#27 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#2 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#3 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#4 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#17 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#19 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#33 The Pankian Metaphor
https://www.garlic.com/~lynn/2006o.html#61 Health Care
https://www.garlic.com/~lynn/2006p.html#17 Health Care
https://www.garlic.com/~lynn/2006r.html#0 Cray-1 Anniversary Event - September 21st
https://www.garlic.com/~lynn/2006t.html#26 Universal constants
https://www.garlic.com/~lynn/2007j.html#20 IBM Unionization
https://www.garlic.com/~lynn/2007j.html#91 IBM Unionization
https://www.garlic.com/~lynn/2007k.html#19 Another "migration" from the mainframe
https://www.garlic.com/~lynn/2007o.html#74 Horrid thought about Politics, President Bush, and Democrats
https://www.garlic.com/~lynn/2007p.html#22 U.S. Cedes Top Spot in Global IT Competitiveness
https://www.garlic.com/~lynn/2007q.html#7 what does xp do when system is copying
https://www.garlic.com/~lynn/2007s.html#1 Translation of IBM Basic Assembler to C?
https://www.garlic.com/~lynn/2007t.html#13 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/2007t.html#14 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/2007t.html#15 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/2007t.html#24 Translation of IBM Basic Assembler to C?
https://www.garlic.com/~lynn/2007t.html#25 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/2007t.html#33 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/2007t.html#35 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/2007v.html#26 2007 Year in Review on Mainframes - Interesting
https://www.garlic.com/~lynn/2008.html#57 Computer Science Education: Where Are the Software Engineers of Tomorrow?
https://www.garlic.com/~lynn/2008d.html#40 Computer Science Education: Where Are the Software Engineers of Tomorrow?
https://www.garlic.com/~lynn/2008e.html#50 fraying infrastructure
https://www.garlic.com/~lynn/2008f.html#86 Banks failing to manage IT risk - study

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

You won't guess who's the bad guy of ID theft

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: You won't guess who's the bad guy of ID theft
Newsgroups: alt.folklore.computers
Date: Mon, 14 Apr 2008 07:06:59
Anne & Lynn Wheeler <lynn@garlic.com> writes:
the other metaphor that we've used is "naked transactions" ... lots of related posts
https://www.garlic.com/~lynn/subintegrity.html#payments


re:
https://www.garlic.com/~lynn/2008g.html#17 Hannaford breach illustrates dangerous compliance mentality

You won't guess who's the bad guy of ID theft
http://news.yahoo.com/s/usatoday/20080414/tc_usatoday/youwontguesswhosthebadguyofidtheft
http://www.usatoday.com/money/books/reviews/2008-04-13-zero-day-threat_N.htm

from above:
Despite the currency of the subject, nobody has written a book about identity theft quite the way Byron Acohido and Jon Swartz have done. Both technology reporters for USA TODAY, Acohido and Swartz have ferreted out scandal within the identity-theft realm that is bound to lead to reader outrage. Whether the revelations will lead to meaningful reform by Congress and federal regulatory agencies remains to be seen.

... snip ...

this is somewhat related to "naked transaction" metaphor threads
https://www.garlic.com/~lynn/subintegrity.html#payments

in the mid-90s, the x9a10 financial standard working group had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments. after some detailed, end-to-end vulnerability and threat analysis, the x9.59 financial standard was generated
https://www.garlic.com/~lynn/x959.html#x959

one of the features of x9.59 financial standard was that rather than attempting to prevent the large variety and myriad number of data breaches & security breaches ... it instead focused on nullifying the threat of such breaches (i.e. making the information obtained from such breaches useless to the attackers for account fraud transactions).
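
the idea can be illustrated with a minimal sketch (hypothetical, not the actual x9.59 message format, which is based on digital signatures rather than the shared-key MAC used here for brevity): each transaction carries an authentication value bound to the full transaction details, so an account number harvested in a breach is useless without the key, and a captured value can't be replayed against different details.

```python
import hashlib
import hmac

def sign_txn(account_key: bytes, account: str, amount: str, payee: str) -> str:
    # bind the authentication value to the full transaction details
    msg = "|".join((account, amount, payee)).encode()
    return hmac.new(account_key, msg, hashlib.sha256).hexdigest()

def verify_txn(account_key: bytes, account: str, amount: str, payee: str,
               tag: str) -> bool:
    # a breached account number alone can't produce a valid tag,
    # and a replayed tag fails as soon as any detail changes
    return hmac.compare_digest(sign_txn(account_key, account, amount, payee), tag)
```

with this structure, evesdropping or breaching a merchant's transaction file yields nothing usable for new fraudulent transactions ... the breach threat is nullified rather than prevented.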

other recent posts:
https://www.garlic.com/~lynn/2008g.html#8 Hannaford case exposes holes in law, some say
https://www.garlic.com/~lynn/2008g.html#10 Hannaford case exposes holes in law, some say
https://www.garlic.com/~lynn/2008g.html#11 Hannaford case exposes holes in law, some say
https://www.garlic.com/~lynn/2008g.html#27 Hannaford case exposes holes in law, some say
https://www.garlic.com/~lynn/2008g.html#28 Hannaford case exposes holes in law, some say

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

TRANSLATE inst with DAT on

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TRANSLATE inst with DAT on
Newsgroups: bit.listserv.ibm-main
Date: Mon, 14 Apr 2008 09:34:39
steve writes:
The Prin of Operations, programming notes on using the TR with DAT on, state that there will be a performance hit if the second operand actually crosses the 4096 line. This is because it will do a 'mock' execution first.

Assuming DAT on, is the performance hit related to the possibility that the following 4096 page is not in virtual memory?


way back on 360/67 ... (actually all 360s) TR used to test the start & start+255 (end) address of the table ... which meant that if it crossed a 4k page ... it would catch both ... aka page fault both pages ... before starting instruction execution.

somewhere along the way ... the issue was raised that TR only uses as much of the table as the input data-stream might use ... for instance, if the translation input stream only had values 0-9 ... and the table was within 256 bytes of the end of an addressable region ... then the instruction might fail (with the start+255 precheck) ... even tho it otherwise could successfully execute. so the TR instruction was "fixed" ... if the table start is within 256 bytes of the end of an addressable boundary ... it "pre-executes" the instruction to see if any input stream bytes would index the table across the boundary.

this would also theoretically have been a problem with 2k key fetch protect ... if the table was within 256 bytes of a 2k boundary (with the next 2k fetch protected) and the input data stream never indexed anything (in the table) across the addressable boundary.
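
the precheck and mock-execution logic described above can be sketched roughly as follows (a hypothetical illustration, not actual hardware behavior; the flat addresses and single 4k boundary are simplifying assumptions):

```python
PAGE = 4096  # 4k page / addressability boundary

def tr_needs_preexecution(table_addr: int, boundary: int = PAGE) -> bool:
    # the simple precheck: does the 256-byte table (start .. start+255)
    # cross a boundary? if so, the "fixed" TR mock-executes first
    return (table_addr // boundary) != ((table_addr + 255) // boundary)

def tr_would_fault(table_addr: int, data: bytes, region_end: int) -> bool:
    # mock execution: only table bytes actually indexed by the input
    # stream matter; input values that index below region_end are harmless
    return any(table_addr + b >= region_end for b in data)
```

e.g. a table at 4000 with input values 0-9 indexes at most byte 4009, so it never touches the next page ... even though the naive start+255 test would have faulted it.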

a past thread that also got into this subject:
https://www.garlic.com/~lynn/2005j.html#36 A second look at memory access alignment
https://www.garlic.com/~lynn/2005j.html#37 A second look at memory access alignment
https://www.garlic.com/~lynn/2005j.html#39 A second look at memory access alignment
https://www.garlic.com/~lynn/2005j.html#40 A second look at memory access alignment
https://www.garlic.com/~lynn/2005j.html#43 A second look at memory access alignment
https://www.garlic.com/~lynn/2005j.html#44 A second look at memory access alignment

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

The Return of Ada

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: The Return of Ada
Newsgroups: alt.folklore.computers
Date: Tue, 15 Apr 2008 20:06:27
The return of Ada
http://www.gcn.com/print/27_8/46116-1.html

mentioned in the above:

En Route Automation Modernization
http://www.faa.gov/airports_airtraffic/technology/eram/

2001 GAO report on ATC modernization
http://www.gao.gov/cgi-bin/getrpt?GAO-01-725T

from above:
ATC Modernization Is an Ambitious Undertaking

ATC modernization, which was announced in 1981 as a 10-year, $12 billion program, has expanded and is now expected to cost more than $44 billion through fiscal year 2005. Of this amount, the Congress appropriated over $32 billion for fiscal years 1982 through 2001. The agency expects that approximately $12 billion will be provided for fiscal years 2002 through 2005.


... snip ...

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Xephon, are they still in business?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xephon, are they still in business?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 15 Apr 2008 22:39:15
pcs305@GMAIL.COM (Ian) writes:
I think it's time for us (old mainframers) to jump on the "new age" technologies like blogging, forums and wikis to preserve our knowledge and pass it on to the next mainframe generation.

the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

developed cp40 for a 360/40 with special modified hardware that supported virtual memory. cp40 morphed into cp67 when 360/67 with standard virtual memory support became available. 3 people came out from the science center to the univ. to install it the last week in jan68. It was "officially" announced at the spring 68 SHARE meeting in houston.

besides traditional customer dataprocessing installations ... there were some number of commercial online timesharing services built on cp67 and the later vm370 available on 370s
https://www.garlic.com/~lynn/submain.html#timeshare

one of the services providing commercial online timesharing with vm370 was Tymshare. Tymshare opened a version of their online conferencing system to SHARE as VMSHARE in aug76. Archives are here:
http://vm.marist.edu/~vmshare/

The science center was also responsible for the networking technology used for the majority of the internal network ... which was larger than the internet/arpanet from just about the beginning until approx. mid-85
https://www.garlic.com/~lynn/subnetwork.html#internalnet

various old email mentioning the internal network
https://www.garlic.com/~lynn/lhwemail.html#vnet

The same technology was used for the educational bitnet (& earn in europe) ... which in the early 80s was approx. the same size as arpanet/internet
https://www.garlic.com/~lynn/subnetwork.html#bitnet

One of the largest (virtual machine) online commercial timesharing services was the internal HONE system.
https://www.garlic.com/~lynn/subtopic.html#hone

initially after the 23jun69 unbundling announcement
https://www.garlic.com/~lynn/submain.html#unbundle

there was concern that new system engineers had lost much of their learning avenue. prior to the unbundling announcement, new system engineers gained much of their experience somewhat as apprentices on vendor teams onsite at customer locations. after unbundling, system engineering time at customer locations was charged for ... and charging for "apprentice" system engineers wasn't justified.

HONE (Hands-On Network Environment) systems were initially set up for branch office system engineers to gain experience using operating systems running in (initially cp67) virtual machines.

The science center had also ported apl\360 to cp67 for cms\apl ... and a lot of cms\apl tools were developed. Internally, a large number of sales and marketing tools were developed and were also starting to be deployed on HONE systems. Eventually this use came to dominate all HONE activity ... and running guest operating systems in virtual machines pretty much disappeared. Eventually customer orders couldn't even be processed w/o first having been processed by HONE applications ... and HONE systems were replicated around the world.

From very early HONE days, until approx. the mid-80s ... i provided highly modified cp67 kernels ... and later vm370 kernels for numerous internal locations ... including HONE operations. some old email mentioning transition from cp67 to vm370
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

I also did some amount of early computer conferencing on the internal network as well as working with external customers ... including Tymshare. At one point, a procedure was established where i would obtain monthly copies from tymshare of all the vmshare information ... which i would make available internally ... some old email mentioning vmshare
https://www.garlic.com/~lynn/lhwemail.html#vmshare

including making copies available on hone systems ... some old email mentioning HONE
https://www.garlic.com/~lynn/lhwemail.html#hone

for other topic drift ... recent post mentioning internal computer conferencing like activity from over 25yrs ago
https://www.garlic.com/~lynn/2008g.html#47 My last post in this forum

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Xephon, are they still in business?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xephon, are they still in business?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 15 Apr 2008 23:57:56
Tom.Schmidt@OASSOFTWARE.COM (Tom Schmidt) writes:
The mainframe community also supported city, area and regional user groups for quite a few of its subcomponents for many, many years -- up until the advent of the internet, which brought communication without travel requirements.

re:
https://www.garlic.com/~lynn/2008h.html#7 Xephon, are they still in business?

70s use of the internal network (mostly vm370) included rexx evolution

REXX Symposium, May 1995
http://www.rexxla.org/Symposium/1995/report.html

... from above:
Mike discussed his ideas for a new scripting language with colleagues at Hursley and with other IBMers over IBM's VNET network, which then had 300 nodes in Europe and North America. He sent out the first language specification and began incorporating the feedback. He typically wrote and circulated the documentation for each new feature to get feedback on the desirability of the new function before doing the implementation. He also typically first wrote a few programs to exercise the new feature and see whether it was right.

The first implementation was distributed via VNET on May 21, 1979. "From then on, the good ideas came from the users." For example, David N. Smith, the father of VMSHARE, insisted upon being able to nest comments


... snip ...

at the time of the arpanet/internet great switch-over to tcp/ip on 1jan83, depending on how they were counted, there were somewhere between 100 and 250 nodes ... old post with reference
https://www.garlic.com/~lynn/2006r.html#7 Was FORTRAN buggy?

by comparison, in 1983, the internal network exceeded 1000 nodes (again mostly vm370 machines) ... prepping for the announcement
https://www.garlic.com/~lynn/2006k.html#email830422
in this post
https://www.garlic.com/~lynn/2006k.html#43 Arpa address

the actual announcement included in this post
https://www.garlic.com/~lynn/99.html#112 OS/360 names and error codes (was: Humorous and/or Interesting Opcodes)

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Using Military Philosophy to Drive High Value Sales

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Using Military Philosophy to Drive High Value Sales
Date: 16 Apr 2008, 10:20 am
Blog: The Greater IBM Connection
A much more widely used philosophy is Boyd's OODA-loop. I sponsored Boyd's all-day briefing a number of times at IBM in the early 80s. You are now starting to find Boyd's OODA-loop showing up in a large number of different areas ... including marketing and sales ... especially in competitive situations. Part of the OODA-loop metaphor is being highly agile.

Misc. URLs from around the web mentioning Boyd and/or OODA-loop
https://www.garlic.com/~lynn/subboyd.html#boyd2

Boyd used several Guderian examples ... one was prior to the blitzkrieg ... verbal orders only. This is related to the definition of auditors as those that go around the battlefield after the war, stabbing the wounded (i.e. the man on the spot is free to make the decision w/o having to worry about the monday morning quarterbacks second-guessing what is going on ... or the auditors).

About the time I was sponsoring Boyd's briefings in IBM ... we had an "auditor" incident at (IBM) san jose research. We had done the driver for using 6670s as computer output and they were deployed in various departmental rooms around bldg. 28. Part of the driver was printing a cover/separator page that was a different color. Since there was a lot of blank space ... random quotations (taken from a large quotations file) were added to the driver. One of the quotations was the above mentioned definition of auditors. So what did the corporate hdqtrs audit people find on top of a departmental 6670 ... output with a cover page giving the definition of auditors. They complained ... apparently believing somebody had done it on purpose.

Totally unrelated ... for a time while on assignment to the austin group ... where we started the ha/cmp product
https://www.garlic.com/~lynn/subtopic.html#hacmp

lived next door to Guderian's nephew for a time ... who was a retired us air force col. when he put on a german ww2 officer's uniform ... he looked exactly like his uncle.

Boyd had started out with a single briefing titled Patterns of Conflict ... that ran nearly all day. During the period that I was sponsoring him, he added a shorter 2nd briefing titled Organic Design for Command and Control (doing both briefings made for a long day). Organic Design for Command and Control used an example that US corporations were starting (early 80s) to feel the effects of army officer training from WW2. The problem going into the war was mobilizing a very large number of men with little or no experience. This led to falling back on a very rigid, top-down command & control structure to leverage the little experience that was available. Then during the late 70s and early 80s, these officers were starting to come of age in corporate management and falling back on their WW2 training of extremely rigid, top/down command & control structure.

lots of (my) posts mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html#boyd

earlier we took enormous barbs from the rest of the organization (mostly SAA and t/r crowd) when we had come up with 3-tier architecture (now most frequently referred to as middleware) and were out calling on senior customer executives.
https://www.garlic.com/~lynn/subnetwork.html#3tier

the 3-tier architecture effort somewhat then morphed into our ha/cmp activity and we were doing worldwide marketing tours for our ha/cmp product

we did European tours with nearly a different city each day ... and each of us (two) making four (different) sales calls a day. of course we had to keep a somewhat low profile below the mainframe radar. even at that we were eventually told we had to stop work on anything with more than four processors. account of a vendor meeting just before the restrictions were applied
https://www.garlic.com/~lynn/95.html#13

ha/cmp could sort of be considered loosely-coupled for the rs/6000 product line ... and a follow-on to what my wife had been doing when she had been con'ed into going to POK to be in charge of (mainframe) loosely-coupled architecture. some past references
https://www.garlic.com/~lynn/submain.html#shareddata

here are two posts drawing the relationship between Boyd's OODA-loop and CDOs subverting "observe" (in OODA-loop).
https://www.garlic.com/~lynn/aadsm28.htm#58
https://www.garlic.com/~lynn/2008g.html#4

CDOs had been used two decades ago during the S&L crisis to obfuscate the underlying values.

This is a long-winded, decade-old post that (also) touches on the need to have visibility into the underlying values of CDO-like instruments
https://www.garlic.com/~lynn/aepay3.htm#riskm

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

3277 terminals and emulators

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 3277 terminals and emulators
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 19 Apr 2008 09:25:40
mike@CORESTORE.ORG (Michael Ross) writes:
So far, the only leads I have are that the 3270 card in the XT/370 desktop mainframe machine did 3277 emulation - but I don't know if it supported Model 1 mode. Ditto for the 'Appleline' external 3270 box for early Mac & Lisa machines; again I've heard that supported 3277, but don't know about Model 1 specifically.

the signals on the cable change between 3272/3277/ANR and 3274/3278/DCA (although 3274 supported the attachment of 3277)

part of the difference was reducing the manufacturing costs of the terminal; they moved a lot of the electronics that had been in the 3277 "head" back into the controller. there had been some amount of work on modifying the 3277 to improve its human factors ... which was no longer possible with the 3278 (since all the logic was now back in the controller). One of the issues was (because of the fundamental half-duplex operation) ... if you were typing when the system wrote to the head ... the keyboard would lock up and you needed to hit the reset key. A 3277 keystroke "fifo" was created that would handle the input/output sequencing and hold keystrokes in a buffer to avoid the keyboard lockup. Another modification allowed changing the repeat key delay/timing to significantly increase the repeat rate.

another aspect was that because so much processing had been moved back into the (3274) controller ... interactions that were nearly instantaneous on 3272/3277 would be around 1/2 second on 3274/3278 ... making .25 second interactive response impossible .... the joke at the time was that data entry applications were fairly insensitive to system response, and TSO, with a minimum of 1 second response, would never see the difference anyway.

misc. past posts
https://www.garlic.com/~lynn/2001m.html#17 3270 protocol
https://www.garlic.com/~lynn/2001m.html#19 3270 protocol
https://www.garlic.com/~lynn/2002k.html#6 IBM 327x terminals and controllers (was Re: Itanium2 power
https://www.garlic.com/~lynn/2004e.html#0 were dumb terminals actually so dumb???
https://www.garlic.com/~lynn/2007r.html#10 IBM System/3 & 3277-1
https://www.garlic.com/~lynn/2007t.html#40 Why isn't OMVS command integrated with ISPF?
https://www.garlic.com/~lynn/2007t.html#42 What do YOU call the # sign?

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

What would be a future of technical blogs ? I am wondering what kind of services readers expect to get from a technical blog in next 10 years

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@xxxxxxxx>
Date: April 20, 2008
Subject: What would be a future of technical blogs ? I am wondering what kind of services readers expect to get from a technical blog in next 10 years.
Blog: Database
Browser-based infrastructures are still clunky compared to the various non-browser interfaces that evolved over the years for usenet.

browser-based infrastructures are quite similar to a throwback to the various dumb terminal form-oriented infrastructures from the 70s&80s ... before local programming/tailoring was possible.

here are a couple of posts in a (mainframe) thread that appears on a listserv (mailing list) and is also gatewayed to usenet ... about online technical discussion groups
https://www.garlic.com/~lynn/2008h.html#7
https://www.garlic.com/~lynn/2008h.html#8

mentioning tymshare opening up its computer conferencing interface to the SHARE organization for "VMSHARE" technical discussions ("blog") starting in 1976.
http://vm.marist.edu/~vmshare/

This is a semi-related recent post about efforts to improve the human factors of dumb (3270) terminals
https://www.garlic.com/~lynn/2008h.html#9

what evolved in the 80s were various PC based programming facilities for improving the human factors of the emulated dumb terminal interfaces.

for other drift ... i've pontificated a bit about leveraging browser tab support to regularly have a couple hundred tabs open and move around in them .... w/o having to suffer the synchronous delays associated with standard URL clicking
https://www.garlic.com/~lynn/2008b.html#32
https://www.garlic.com/~lynn/2008b.html#35

for some database topic drift ... various past archived posts related to having worked on original relational/sql implementation
https://www.garlic.com/~lynn/submain.html#systemr

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

The Return of Ada

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Return of Ada
Newsgroups: alt.folklore.computers
Date: Sun, 20 Apr 2008 16:16:06
Morten Reistad <first@last.name> writes:
This all comes from peak oil. The event has been predicted for 40 years. It is drastic, but not apocalyptic. We are not running out of oil, we just can't produce it faster. And get used to the fact that what we have now IS cheap oil.

old posts about the value of gas possibly being $10-$15/gal (or more) ... and people filling an (environmental) economic niche, living off the difference between the value and what they charged. one of the possibilities is that if the difference between the cost and the value is large (aka "cheap") ... very profligate, inefficient use can evolve (resulting in difficult adjustments if the difference between the cost and the value narrows).
https://www.garlic.com/~lynn/2001f.html#4 some VLIW (IA-64) projections from January, 1999...
https://www.garlic.com/~lynn/2002q.html#7 Big Brother -- Re: National IDs
https://www.garlic.com/~lynn/2002q.html#9 Big Brother -- Re: National IDs

also mentioning environmental, economic niches:
https://www.garlic.com/~lynn/2001l.html#56 hammer
https://www.garlic.com/~lynn/2004c.html#20 Parallel programming again (Re: Intel announces "CT" aka
https://www.garlic.com/~lynn/2008f.html#65 China overtakes U.S. as top Web market

and recent threads mentioning that oil field development is significantly less than would otherwise be expected ... because so many baby boomers are retiring that there aren't enuf skilled resources around to handle a larger number of projects
https://www.garlic.com/~lynn/2007q.html#42 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/2007s.html#63 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/2007t.html#43 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/2008h.html#3 America's Prophet of Fiscal Doom

supply & demand scenario ... with large world-wide increase in demand and not a similar significant increase in supply

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

independent appraisers

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: independent appraisers
Newsgroups: alt.folklore.computers
Date: Mon, 21 Apr 2008 08:59:22
Anne & Lynn Wheeler <lynn@garlic.com> writes:
recent posts mentioning business tv shows ridiculing both UBS and Citigroup
https://www.garlic.com/~lynn/2008g.html#12 independent appraisers
https://www.garlic.com/~lynn/2008g.html#32 independent appraisers
https://www.garlic.com/~lynn/2008g.html#36 Lehman sees banks, others writing down $400 bln
https://www.garlic.com/~lynn/2008g.html#51 IBM CEO's remuneration last year ?


re:
https://www.garlic.com/~lynn/2008g.html#66 independent appraisers

ongoing ridiculing UBS and Citigroup

Kurer Pressured to Dismantle House Ospel Built at UBS
http://www.bloomberg.com/apps/news?pid=20601109&sid=aCH93fqMagMw&refer=home

... when they recently replaced the head of UBS with the general counsel ... they commented that Citigroup had already tried that ... and then still had to replace the general counsel (last fall)

somewhat related recent threads ...
https://www.garlic.com/~lynn/aadsm28.htm#61 Is Basel 2 out...Basel 3 in?
https://www.garlic.com/~lynn/aadsm28.htm#63 Is Basel 2 out...Basel 3 in?
https://www.garlic.com/~lynn/aadsm28.htm#65 Would the Basel Committee's announced enhancement of Basel II Framework and other steps have prevented the current global financial crisis had they been implemented years ago?
https://www.garlic.com/~lynn/aadsm28.htm#66 Would the Basel Committee's announced enhancement of Basel II Framework and other steps have prevented the current global financial crisis had they been implemented years ago?
https://www.garlic.com/~lynn/aadsm28.htm#67 Would the Basel Committee's announced enhancement of Basel II Framework and other steps have prevented the current global financial crisis had they been implemented years ago?

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

How fast is XCF

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How fast is XCF
Newsgroups: bit.listserv.ibm-main
Date: Mon, 21 Apr 2008 10:20:50
ibm-main@TPG.COM.AU (Shane) writes:
I guess RFC2549 would be no good either then ... ???

one of the april 1st RFCs

from my rfc index
https://www.garlic.com/~lynn/rfcietff.htm

click on Term (term->RFC#) in RFCs listed by section and scroll down to "April1"
April1
5242 5241 4824 4042 4041 3751 3514 3252 3251 3093 3092 3091 2795 2551 2550 2549 2325 2324 2323 2322 2321 2100 1927 1926 1925 1924 1776 1607 1606 1605 1437 1313 1217 1149 1097 852 748


clicking on the RFC # (in the index) brings up the RFC summary in the lower frame.
2549
IP over Avian Carriers with Quality of Service, Waitzman D., 1999/04/01 (6pp) (.txt=9519) (Updates 1149) (Refs 1149) (Ref'ed By 3117)


as always ... clicking on the ".txt=nnn" field (in the summary), fetches the actual RFC.

as noted, 2549 references 1149:
1149 E
A Standard for the Transmission of IP Datagrams on Avian Carriers, Waitzman D., 1990/04/01 (2pp) (.txt=3215) (Updated by 2549) (Ref'ed By 1543, 1818, 2321, 2549)


--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Two views of Microkernels (Re: Kernels

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Two views of Microkernels (Re: Kernels
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Mon, 21 Apr 2008 15:09:33
vandys writes:
I am not--at all--a big fan of Microsoft. But giving credit where credit is due, their folks working on the Singularity research OS are really doing a good job of rethinking the starting point for systems design:

http://research.microsoft.com/os/singularity/

Dangerous, pointer-ish languages which lean upon bulky and poorly granular address space mechanisms. It's so... 1960's. Hats off to them for drilling down into a new approach.


this can also be considered the original 801/risc from better than three decades ago
https://www.garlic.com/~lynn/subtopic.html#801

there were no (hardware) protection domains. the operating system (cp.r) would only load "correct" (pl.8) programs ... and pl.8 would only generate "correct" programs.

it was 32bit virtual addressing with 256mbyte segments ... 16 "segment" registers (and inverted tables). i once complained that the limitation of only 16 "segments" made it hard to implement various memory mapped abstractions. the explanation was that programs could change segment register values as easily as they could change general/address register values ... so an application needing access to an additional virtual memory object could switch a segment register value as easily as it could change a general register value.

in the early 80s, one of the 801/risc efforts was the romp chip, targeted for the opd displaywriter follow-on. when that project was canceled, some investigation came up with retargeting the hardware to the (emerging) unix workstation market. the company that had done the unix port to the pc (pc/ix) was hired to do one to romp. it was eventually announced as the pc/rt with aixv2. hardware protection domains had to be implemented in romp for the unix system paradigm.

another dependable microkernel effort is the eros, coyotos, capros activity
http://www.eros-os.org/
http://www.coyotos.org/
http://www.capros.org/

that traces directly to KeyKOS ("eros derivative of KeyKOS for Intel-family machines")
http://cap-lore.com/CapTheory/upenn/

which was a project started by Tymshare on 370 as GNOSIS. When M/D bought Tymshare ... GNOSIS was spun off as KeyKOS (disclaimer: I was brought in to review GNOSIS as part of the spin-off process)

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

The Return of Ada

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Return of Ada
Newsgroups: alt.folklore.computers
Date: Mon, 21 Apr 2008 16:12:52
re:
https://www.garlic.com/~lynn/2008h.html#11 The Return of Ada

Emerging Market Oil Use Exceeds U.S. as Prices Rise
http://www.bloomberg.com/apps/news?pid=20601109&sid=a_YCEx7do3LQ&refer=home

from above:
China, India, Russia and the Middle East for the first time will consume more crude oil than the U.S., burning 20.67 million barrels a day this year, an increase of 4.4 percent, according to the International Energy Agency in Paris. U.S. demand will contract 2 percent to 20.38 million barrels daily, the IEA says.

... snip ...

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

handling the SPAM on this group

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: handling the SPAM on this group
Newsgroups: alt.folklore.computers
Date: Mon, 21 Apr 2008 18:52:03
D.J. <solosam75@cableone.net> writes:
I noticed that sometimes the letter options for a command varies by version of Unix.

The university I worked at some years ago had 386 unix on a few computers we could telnet to up on the main campus, and we had an AIX system in the local computer room for email. The commands were, as you point out, identical. But the option letters for the AIX system were not the same as the option letters, for the same command, as used by the 386 unix. 386 in this instance being an Intel desk top computer.


re:
https://www.garlic.com/~lynn/2008h.html#14 Two views of Microkernels

AIX V2 for pc/rt (risc) was an at&t unix port by the company that had done the port to the pc for PC/IX.

other references
https://www.garlic.com/~lynn/subtopic.html#801

aix/386 (and aix/370) was a port of UCLA's Locus system (which supported BSD unix semantics).

a Unix History
http://www.levenez.com/unix/

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

The Return of Ada

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Return of Ada
Newsgroups: alt.folklore.computers
Date: Tue, 22 Apr 2008 09:14:55
"Rostyslaw J. Lewyckyj" <urjlew@bellsouth.net> writes:
For the employers and Politicians and the tax collectors illegal immigrants are most convenient. The employers get cheap labor with an implicit threat over the employees with regard to wages and benefits. The politician has a ready target and no worry about these people voting. The Social security and other tax people can collect their taxes but deny benefits.

GAO has done a study of that ... and found that illegal immigrants receive about 50 percent more in benefits than they contribute. Other organizations have done similar studies ... with similar findings ... but some have discounted various organizations/results as having political agendas ... while the GAO has had a fairly solid reputation of being non-biased (making it harder to discount)

a different interpretation of the numbers is that employers, paying substandard wages, pocket the difference between what they pay and what it costs the rest of society to provide the necessary care&feeding (in effect general society, govs, etc ... are providing tens/hundreds of billions in subsidies to employers hiring illegal aliens).

some recent posts/threads:
https://www.garlic.com/~lynn/2007i.html#70 illegal aliens
https://www.garlic.com/~lynn/2007i.html#79 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007i.html#81 illegal aliens
https://www.garlic.com/~lynn/2007o.html#22 U.S. Cedes Top Spot in Global IT Competitiveness
https://www.garlic.com/~lynn/2007q.html#61 Horrid thought about Politics, President Bush, and Democrats
https://www.garlic.com/~lynn/2007t.html#46 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/2008.html#39 competitiveness

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

IT full of 'ducks'? Declare open season

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: IT full of 'ducks'? Declare open season
Newsgroups: alt.folklore.computers
Date: Tue, 22 Apr 2008 09:40:48
IT full of 'ducks'? Declare open season
http://www.infoworld.com/news/feeds/08/04/21/IT-full-of-ducks-Declare-open-season.html

from above:
Every organization has some "ducks." Ducks are employees who have a detrimental effect on productivity. Their work is consistently substandard, they rarely meet deadlines, and their skills are out of date. They hate change, resist taking responsibility, and blame their failures on co-workers. They constantly complain about their projects, their teammates, their workloads and their managers. They stifle innovation by shooting down new proposals, claiming that changes "just can't be done."

... snip ...

there used to be a "wild duck" metaphor ... with the exact opposite of the characteristics listed above for "ducks" ... wild ducks constantly thought outside the box and provided much of the productivity for their organizations. However, institutional "open season" on "wild ducks" tended to be much more active than anything done about "ducks".

misc. past posts mentioning "wild duck"
https://www.garlic.com/~lynn/2007b.html#38 'Innovation' and other crimes
https://www.garlic.com/~lynn/2007h.html#25 sizeof() was: The Perfect Computer - 36 bits?

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

The Return of Ada

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Return of Ada
Newsgroups: alt.folklore.computers
Date: Tue, 22 Apr 2008 13:10:18
Anne & Lynn Wheeler <lynn@garlic.com> writes:
a different interpretation of the numbers is that employers, paying substandard wages, pocket the difference between what they pay and what it costs the rest of society to provide the necessary care&feeding (in effect general society, govs, etc ... are providing tens/hundreds of billions in subsidies to employers hiring illegal aliens).

it is possible to draw an analogy with cheap labor
https://www.garlic.com/~lynn/2008h.html#17 The Return of Ada

and cheap oil
https://www.garlic.com/~lynn/2008h.html#11 The Return of Ada

and cheap water as gov/public subsidies to special interests ... those that can make significant profit and/or take other benefit from the subsidy.

a few years ago there was an article about rice growers in the delta getting large amounts of water at five cents on the dollar (from the gov.) during periods of significant drought and rationing (growing rice in that area wouldn't have been remotely justified w/o the significant supply and subsidy).
https://www.garlic.com/~lynn/2001f.html#4 some VLIW (IA-64) projections from January, 1999...
https://www.garlic.com/~lynn/2003i.html#17 Spam Bomb
https://www.garlic.com/~lynn/2006g.html#15 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#24 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#41 The Pankian Metaphor

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

handling the SPAM on this group

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: handling the SPAM on this group
Newsgroups: alt.folklore.computers
Date: Tue, 22 Apr 2008 13:28:22
jmfbah <jmfbahciv@aol> writes:
Not at all. Shells exist and all shells are apparently different. And it takes me forever to circumnavigate through the BUI mess. It doesn't help that I have absolutely no idea what I'm doing. How in the world does the rest of the world manage? No wonder there are security problems.

recent comments about a significant percentage of the security problems all swirling around information leakage that represents one hundred times more value to the attacker than it does to the defender ... aka simple kindergarten security 101 ... security proportional to the risk.
https://www.garlic.com/~lynn/aadsm15.htm#39 FAQ: e-Signatures and Payments
https://www.garlic.com/~lynn/aadsm19.htm#40 massive data theft at MasterCard processor
https://www.garlic.com/~lynn/aadsm28.htm#60 Seeking expert on credit card fraud prevention - particularly CNP/online transactions
https://www.garlic.com/~lynn/aadsm28.htm#64 Seeking expert on credit card fraud prevention - particularly CNP/online transactions

since the attackers can afford to outspend the defenders 100-to-1, the only viable, practical, long-term solution is to change the paradigm, eliminating the value of the information to the attackers.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

To the horror of some in the Air Force

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: To the horror of some in the Air Force
Newsgroups: alt.folklore.computers
Date: Wed, 23 Apr 2008 14:47:51
for a little boyd topic drift ...

Why the Air Force Bugs Gates
http://www.time.com/time/nation/article/0,8599,1733747,00.html

from above:
To the horror of some in the Air Force, Gates cited the late John Boyd, who attained the rank of Air Force colonel, as an example young officers should emulate. Gates called him "a brilliant, eccentric and stubborn character" who had to bulldoze his way through the Air Force hierarchy to launch the F-16 fighter, now regarded as perhaps the best value in the skies.

... snip ...

other blogs:

The Ghost of Boyd Invoked
http://globalguerrillas.typepad.com/globalguerrillas/2008/04/journal-the-gho.html
SECDEF Gates honors John Boyd
http://www.d-n-i.net/dni/2008/04/21/secdef-gates-honors-john-boyd/
War, Chaos, and Business
http://www.chetrichards.com/

... other past posts mentioning boyd
https://www.garlic.com/~lynn/subboyd.html#boyd

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Toyota takes 1Q world sales lead from General Motors

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Toyota takes 1Q world sales lead from General Motors
Newsgroups: alt.folklore.computers
Date: Wed, 23 Apr 2008 15:47:43
GM barely held the sales lead last year while racking up an enormous ($38.7b) loss (compared to Toyota's approx $17b profit)

Toyota takes 1Q world sales lead from General Motors
http://biz.yahoo.com/ap/080423/gm_global_sales.html?.v=8

from above:
GM barely won the global sales race with Toyota last year, but Toyota overtook it as the world's top automaker as measured by global vehicle production in 2007.

... snip ...

recent posts:
https://www.garlic.com/~lynn/2008.html#80 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008.html#84 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008.html#85 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008.html#86 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008b.html#55 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008b.html#56 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008b.html#59 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008b.html#75 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008b.html#76 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#1 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#5 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#6 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#7 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#8 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#11 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#12 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#13 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#14 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#16 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#17 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#19 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#20 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#21 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#22 Toyota Beats GM in Global Production
https://www.garlic.com/~lynn/2008c.html#25 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#44 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#46 Toyota Beats GM in Global Production
https://www.garlic.com/~lynn/2008c.html#56 Toyota Beats GM in Global Production
https://www.garlic.com/~lynn/2008c.html#63 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#66 Toyota Beats GM in Global Production
https://www.garlic.com/~lynn/2008c.html#68 Toyota Beats GM in Global Production
https://www.garlic.com/~lynn/2008c.html#69 Toyota Beats GM in Global Production
https://www.garlic.com/~lynn/2008c.html#71 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#87 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#89 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#90 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#91 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008d.html#0 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008d.html#4 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008d.html#5 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008d.html#7 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008d.html#9 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008d.html#10 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008d.html#11 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008d.html#21 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008d.html#22 Toyota Beats GM in Global Production
https://www.garlic.com/~lynn/2008d.html#26 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008d.html#30 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008d.html#31 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008d.html#85 Toyota Sales for 2007 May Surpass GM

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

IBM's Webbie World

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: IBM's Webbie World
Newsgroups: alt.folklore.computers
Date: Wed, 23 Apr 2008 16:23:29
IBM's Webbie World
http://www.forbes.com/technology/2008/04/21/ibm-social-netwo

from above:
IBM's spokespeople claim it has 24,000 Facebook users and 155,000 LinkedIn users, giving it one of the biggest corporate representations on both sites.

... snip ...

when i got blamed for online computer conferencing on the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

in the early 80s (there was a short datamation article about it, i think nov81) .... I had a "names" file with only something like 15,000 entries.

various old email about the internal network
https://www.garlic.com/~lynn/lhwemail.html#vnet

somewhat as a result of various corp hdqtrs investigations into the phenomena ... I got an investigator who sat in the back of my office and took notes on how I communicated. They also got copies of all my incoming and outgoing email and logs of all instant messaging activity. The report also became a stanford phd thesis (joint between language and computer AI) as well as material for books and papers. some related posts on computer mediated communication
https://www.garlic.com/~lynn/subnetwork.html#cmc

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

The Return of Ada

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Return of Ada
Newsgroups: alt.folklore.computers
Date: Thu, 24 Apr 2008 07:14:39
greymaus <greymausg@mail.com> writes:
I have a box of floppies from work, say, 20 years ago. The thought of trying to read them is offputting, even if I had a machine with a floppy drive. Whole different world. Just thought, must be 25 years. Wow.

old thread (partially successful):
https://www.garlic.com/~lynn/2006s.html#35 Turbo C 1.5 (1987)
https://www.garlic.com/~lynn/2006s.html#36 Turbo C 1.5 (1987)
https://www.garlic.com/~lynn/2006s.html#37 Turbo C 1.5 (1987)
https://www.garlic.com/~lynn/2006s.html#56 Turbo C 1.5 (1987)
https://www.garlic.com/~lynn/2006s.html#57 Turbo C 1.5 (1987)

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Two views of Microkernels (Re: Kernels

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Two views of Microkernels (Re: Kernels
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Thu, 24 Apr 2008 20:16:58
Peter Flass <Peter_Flass@Yahoo.com> writes:
Yes, they did. As I recall the CPUs used asymmetric multiprocessing (master-slave), but otherwise it was comparable to the systems of today -- in 1968.

re:
https://www.garlic.com/~lynn/2008h.html#14 Two views of Microkernels (Re: Kernels

360/67 ... early '67 ... had virtual memory and segmentation. cp40 morphed into cp67 when it was moved from a 360/40 with custom virtual memory hardware to a 360/67 with standard virtual memory hardware.

a standard 360/65 could be used in both loosely-coupled as well as tightly-coupled (symmetric multiprocessor) configurations. however, a 360/65 multiprocessor was really independent 360/65 machines wired together so that they would address common memory ... but could also be configured to operate as independent uniprocessors. the issue with the 360/65 multiprocessor was that nothing was done about i/o. For a 360/65 to simulate multiprocessor i/o, the device controllers had to be configured with multi-channel interfaces and each processor had its own channel attachment to every controller (this was the same strategy used for loosely-coupled operation w/o common real memory addressing; symmetric multiprocessor i/o was simulated by configuring the processor private channels at the same addresses).

the 360/67 uniprocessor was pretty much a 360/65 with virtual memory hardware added as a standard feature. however, the 360/67 multiprocessor was something of a new beast ... since it had support for all processors being able to access all channels in the configuration.

As an aside, charlie invented compare&swap instruction
https://www.garlic.com/~lynn/subtopic.html#smp

when he was working on cp67 smp fine-grain locking at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

compare&swap, after some resistance, was included in 370.

while all processors in the 370 line eventually got virtual memory support ... the 370 smp support continued the 360 i/o smp implementation: all processors addressing common real storage ... but having their own private i/o channels.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

The Return of Ada

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Return of Ada
Newsgroups: alt.folklore.computers
Date: Thu, 24 Apr 2008 20:39:12
krw <krw@att.bizzzzzzzzzz> writes:
It's fine if your employer does this on its own. It's fine if you give 90% of your income to the "poor" too, but unfunded mandates are wrong.

unfunded mandates are worse; the analogy is to living off principal and/or a non-renewable resource ... when it's gone ... the infrastructure crashes and/or has to undergo radical change.

this has been the comptroller general's tirade for some time
https://www.garlic.com/~lynn/2008h.html#3 America's Prophet of Fiscal Doom

another consideration affecting many of the unfunded mandates:
https://www.garlic.com/~lynn/2008g.html#1 The Workplace War for Age and Talent
https://www.garlic.com/~lynn/2008g.html#5 The Workplace War for Age and Talent

the current infrastructure has much of the funding for the retired being funded by the dramatically larger number of workers in the baby boomer population bubble. as the baby boomers reach retirement ... there is an enormous increase in the number of retirees ... while the following generation (of workers) is only half as large. As a result the income revenue base per retiree is reduced to possibly only 1/16th of the current ratio. a large part of the unfunded mandates is supplying benefits to the enormous increase in (baby boomer) retirees from a much smaller revenue base.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Two views of Microkernels (Re: Kernels

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Two views of Microkernels (Re: Kernels
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Thu, 24 Apr 2008 23:40:17
peter@taronga.com (Peter da Silva) writes:
Process isolation is not the only purpose of VM.

re:
https://www.garlic.com/~lynn/2008h.html#14 Two views of Microkernels (Re: Kernels
https://www.garlic.com/~lynn/2008h.html#25 Two views of Microkernels (Re: Kernels

a la the original 801/risc from the 70s ... which led to romp being described as having 40bit (virtual) addressing.

the machine had 32bit addressing ... a 28bit segment displacement plus a 4bit segment register index.

however, since there was no protection domain, inline application code could change a segment register value (it was an inverted table architecture) as easily as an address could be changed in a general purpose register .... changing a segment register value was equivalent to changing addressing with a general purpose register. romp supported 12bit segment register values ... so a 28bit segment displacement plus a 12bit segment register value ... yields 40bit virtual addressing.

the convention still lingered on, with some descriptions of RIOS having 52-bit (virtual) addressing (still 32bit addressing, 4bits for segment register index ... but the segment register value doubled from 12bits to 24bits). the machine line (romp was originally for a displaywriter follow-on) had already been retargeted to unix workstations, requiring a privileged hardware domain for changing segment register values ... by the time RIOS was done for rs/6000 & power.

re:
https://www.garlic.com/~lynn/subtopic.html#801

801/risc also had database/transactional memory support ... the system could go behind the scenes figuring out which transaction storage "lines" had been changed and required logging ... w/o the application needing explicit calls to log transactional changes.

this aspect was used for the original implementation of JFS (journaled filesystem) in aix3 on rs/6000 (later versions of JFS were made "portable" by changing the paradigm to have explicit log calls when filesystem metadata was being changed).

some past posts mentioning database/transactional memory
https://www.garlic.com/~lynn/2005r.html#27 transactional memory question
https://www.garlic.com/~lynn/2005s.html#33 Power5 and Cell, new issue of IBM Journal of R&D
https://www.garlic.com/~lynn/2007b.html#44 Why so little parallelism?
https://www.garlic.com/~lynn/2007n.html#6 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007n.html#36 How to flush data most efficiently from memory to disk when db checkpoint?
https://www.garlic.com/~lynn/2007o.html#12 more transactional memory for mutlithread/multiprocessor operation
https://www.garlic.com/~lynn/2008e.html#10 Kernels

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

subprime write-down sweepstakes

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: subprime write-down sweepstakes
Newsgroups: alt.folklore.computers
Date: Fri, 25 Apr 2008 09:15:31
Anne & Lynn Wheeler <lynn@garlic.com> writes:
Citigroup, Merrill May Post $15 Billion Writedowns, Times Says
http://www.bloomberg.com/apps/news?pid=20601087
http://www.bloomberg.com/apps/news?pid=20601087&sid=a14SC3UVha.4&refer=home

from above:

Citigroup will have $10 billion of writedowns, taking its first-quarter loss to about $3 billion, the newspaper said. Some analysts say the Citigroup writedowns may stretch to $12 billion, it said. Merrill may have a $5 billion writedown, taking it to a $2.7 billion loss, the report said.

... snip ...


re:
https://www.garlic.com/~lynn/2008h.html#1 subprime write-down sweepstakes

just now one of the business shows had two recent nobel winners in economics answering questions ... one comment was that he hoped that when congress gets around to punishing the "investment bankers" for the current mess, that they also didn't punish the VCs.

one issue might be will Congress take any responsibility for repealing Glass-Steagall?

part of this is the analogy to CDOs being designed to defeat "observe" in Boyd's OODA-loop ... i.e. were used two decades ago in the S&L crisis to obfuscate the underlying value.
https://www.garlic.com/~lynn/2008f.html#4 CDOs subverting Boyd's OODA-loop

long-winded, decade old post including mention of needing visibility into underlying value for CDO-like instruments
https://www.garlic.com/~lynn/aepay3.htm#riskm

mortgage originators used to have to pay attention to mortgage quality since their subsequent revenue would depend on the performance of the loan. Being able to immediately unload mortgages as toxic CDOs (w/o regard to quality) meant that their revenue became how fast they could originate and unload mortgages. Subprime loans (w/o regard to quality, qualifications, etc) allowed them to expand their mortgage origination markets (people that wouldn't otherwise qualify, speculators that were looking to minimize their investment and maximize ROI on holding and then flipping the property).

investment bankers had been buying the (sub-prime) toxic CDOs ... and then borrowing full-value against the toxic CDO and using that to buy another CDO. Repeated 50-100 times, that meant they would only have 1-2 percent of the total value in actual capital. When the sub-prime mortgage values did eventually start to leak thru ... there were 20-40 percent (or more) write-downs in those ("crap" ... a technical term used by one of the nobel winners) toxic CDOs.

misc. past posts mentioning investment bankers:
https://www.garlic.com/~lynn/2008c.html#87 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008f.html#13 independent appraisers
https://www.garlic.com/~lynn/2008f.html#17 independent appraisers
https://www.garlic.com/~lynn/2008f.html#43 independent appraisers
https://www.garlic.com/~lynn/2008f.html#53 independent appraisers
https://www.garlic.com/~lynn/2008f.html#71 Bush - place in history
https://www.garlic.com/~lynn/2008f.html#73 Bush - place in history
https://www.garlic.com/~lynn/2008f.html#77 Bush - place in history
https://www.garlic.com/~lynn/2008f.html#94 Bush - place in history
https://www.garlic.com/~lynn/2008f.html#95 Bush - place in history
https://www.garlic.com/~lynn/2008g.html#2 Bush - place in history
https://www.garlic.com/~lynn/2008g.html#12 independent appraisers
https://www.garlic.com/~lynn/2008g.html#44 Fixing finance
https://www.garlic.com/~lynn/2008g.html#51 IBM CEO's remuneration last year ?
https://www.garlic.com/~lynn/2008g.html#52 IBM CEO's remuneration last year ?
https://www.garlic.com/~lynn/2008g.html#59 Credit crisis could cost nearly $1 trillion, IMF predicts
https://www.garlic.com/~lynn/2008g.html#66 independent appraisers
https://www.garlic.com/~lynn/2008g.html#67 independent appraisers
https://www.garlic.com/~lynn/2008h.html#0 independent appraisers

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

DB2 & z/OS Dissertation Research

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DB2 & z/OS Dissertation Research
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 25 Apr 2008 19:02:27
promos@BURCHWOODUSA.COM (Todd Burch) writes:
Going waaaaay back, look into the instigators for cross memory (AKA XA), and you'll find DB2's names at the top of the list.

x-memory/dual-address space for the 3033 ... was a Q&D solution to address the exploding size of the common segment in larger installations.

way before DB2.

relational dbms was system/r all done on vm370 at san jose research in bldg. 28 ... lots of past posts
https://www.garlic.com/~lynn/submain.html#systemr

there was system/r technology transfer from sjr to endicott for sql/ds about the timeframe of 3081.

there was some amount of competition between the "60s databases" in stl and system/r in sjr. STL pointed out that relational doubled the physical database size (additional space needed by the indexes) and significantly increased the physical disk accesses (mostly related to traversing the indexes). SJR pointed out that "60s databases" exposed direct pointers, which required a lot of system administrative overhead and increased application complexity. Going into the 80s, disk space became significantly cheaper (mitigating the relational increase in disk space requirements for indexes) and real storage sizes became significantly larger (allowing relational indexes to be cached ... eliminating a lot of the index physical disk reads). This allowed relational to move into a much broader market (decreasing hardware costs, increasing hardware resources, and needing much lower people skills and resources for database care & feeding).

one of the people mentioned in this meeting
https://www.garlic.com/~lynn/95.html#13

claimed to have handled much of the technology transfer from endicott back to stl/bldg90 for DB2.

for some other random topic drift ... old email from when jim was leaving for tandem and foisting off consulting/contacts to me ... including consulting to the IMS group:
https://www.garlic.com/~lynn/2007.html#email801006
https://www.garlic.com/~lynn/2007.html#email801016

recent posts discussing dual-address space (sort of a subset of the access registers that would show up with xa):
https://www.garlic.com/~lynn/2008c.html#33 New Opcodes
https://www.garlic.com/~lynn/2008c.html#35 New Opcodes
https://www.garlic.com/~lynn/2008d.html#69 Regarding the virtual machines
https://www.garlic.com/~lynn/2008e.html#14 Kernels
https://www.garlic.com/~lynn/2008e.html#33 IBM Preview of z/OS V1.10

and mentioning that one of the main itanium architects is also credited with dual-address space for 3033
https://www.garlic.com/~lynn/2008g.html#60 Different Implementations of VLIW

part of the issue was that the 370 product pipeline had gone dry during the future system project period (which was going to completely replace all 370). when FS got killed
https://www.garlic.com/~lynn/submain.html#futuresys

old post with some extracts from fergus/morris book discussing effects of FS effort:
https://www.garlic.com/~lynn/2001f.html#33

there was a mad rush to get stuff back into the 370 product pipeline ... overlapped with getting XA moving ... which was going to take 7-8 yrs. The interim stop-gap was 303x. The integrated channel microcode from the 370/158 was repackaged as the 303x "channel director". The 370/158 was repackaged as the 3031 (w/o the integrated channel microcode) with a (2nd 158 microengine) channel director. The 370/168 was repackaged as the 3032 (with 1-3 channel directors). The 3033 started out as the 168 wiring diagram mapped to faster chip technology.

there was "eagle" ... which wasn't relational. The databases that would have been under consideration at the time the XA architecture was being specified (i.e. referred to as "811") would have been IMS and possibly some misc. stuff related to eagle.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Stanford University Network (SUN) 3M workstation

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Stanford University Network (SUN) 3M workstation
Newsgroups: alt.folklore.computers
Date: Fri, 25 Apr 2008 21:24:44
harker writes:
I have now found Andy's "The SUN Workstation Architecture" paper on line at: ftp://reports.stanford.edu/pub/cstr/reports/csl/tr/82/229/CSL-TR-82-229.pdf If you have his SIGGRAPH '80 paper, I would love to get a copy of it.

old post mentioning people at the palo alto science center being approached about producing a sun workstation product:
https://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

VTAM R.I.P. -- SNATAM anyone?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: VTAM R.I.P. -- SNATAM anyone?
Date: Fri, 25 Apr 2008 22:52:21
Newsgroups: bit.listserv.vmesa-l
Jim Bohnsack wrote:
I remember the SNATAM name now. There was an Englishman, Graham Pursey, who used to attend the VNET Project Team meetings that were held once or twice a year. It seems to me that he was involved in some kind of VM based VTAM project. Was that it or was there something else? It seems to me that there was something besides SNATAM.

Getting old and memory is the second thing to go. Don't remember what the first was.


from the 26-28feb80 VMITE schedule:
Graham Pursey - SNATAM. This system is being perfected in Hursley to operate SNA devices from a CMS based system. The current direction is to make this into a product. 45 minutes to 1 hr

... snip ...

there were constant battles with the communication group ... I got into all sorts of problems with the hsdt (high speed data transport) project ...
https://www.garlic.com/~lynn/subnetwork.html#hsdt

to place things in better perspective ... SNA wasn't networking ... it was dumb terminal communication.

example of gap between the communication group and hsdt project; recent retelling
https://www.garlic.com/~lynn/2008e.html#45

of an announcement (one friday) by the communication group for a new internal conference. included in the announcement were these definitions (to be used for the conference):
low-speed: <9.6kbits
medium-speed: 19.2kbits
high-speed: 56kbits
very high-speed: 1.5mbits

the next monday, on a business trip to the far east, the definitions on the conference room wall were:
low-speed: <20mbits
medium-speed: 100mbits
high-speed: 200-300mbits
very high-speed: >600mbits

also working with various parties associated with getting NSFNET going.
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

eventually we weren't allowed to bid on the NSFNET backbone ... even tho an NSF audit of our high-speed backbone claimed that what we already had running (internally) was at least five years ahead of all NSFNET bid submissions. some related old email from the period
https://www.garlic.com/~lynn/lhwemail.html#nsfnet

including some stuff forwarded to us about communication group spreading FUD that sna & vtam could be used for NSFNET.
https://www.garlic.com/~lynn/2006w.html#email870109

for other topic drift, the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

(which wasn't SNA until the late 80s) was technology from the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

and was larger than the arpanet/internet from just about the beginning until sometime mid-85. this was about the time that serious efforts were made to try and get the internal network converted over to sna (which also contributed to the internet exceeding the internal network).

in this period there was a big explosion in internet nodes from workstations and PCs. SNA was still treating the internal network as something that was purely (mainframe) host-to-host ... and the exploding numbers of PCs were to continue to be served by terminal emulation. some past posts
https://www.garlic.com/~lynn/subnetwork.html#emulation

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

subprime write-down sweepstakes

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: subprime write-down sweepstakes
Newsgroups: alt.folklore.computers
Date: Sat, 26 Apr 2008 09:07:10
jmfbah <jmfbahciv@aol> writes:
One of the callers of this radio show asked for the definition of sub-prime. Nobody could answer. So the term has become the kleenix of all the US' economic problems. Congress critters will take advantage of this ignorance.

re:
https://www.garlic.com/~lynn/2008h.html#28 subprime write-down sweepstakes

just because it isn't explained on a radio show ... doesn't mean that it isn't known. that is somewhat independent of whether or not they want to explicitly lay blame.

in general, sub-prime loans are ARM/VRM (adjustable rate/variable rate) mortgages with a sub-prime "teaser" rate that adjusts upwards after an initial period. mortgage originators would use them in conjunction with other inducements (i.e. circumventing standard business practices that would represent difficulty in approving the loan) like no down payments, interest-only payments (during the teaser period), no documentation, etc. These were especially attractive to speculators ... who were anticipating flipping the property before the end of the teaser period. It was the increasing/alarming number of risk factors associated with typical subprime loan toxic CDOs (along with the lack of any visibility) that made them especially vulnerable.

CDOs were used for lots of credit-related activity ... other than sub-prime loans ... but the sub-prime loan toxic CDOs ... as a category ... carried the largest systemic risk (most likely to have problems at the end of the teaser period). sub-prime toxic CDOs represented both a large percentage of all toxic CDOs and especially a large percentage of the toxic CDOs that had built-in problems and risk.

A combination of the lack of visibility into the underlying value of all toxic CDOs (used two decades ago during the S&L crisis to obfuscate underlying value) and a large percentage of sub-prime toxic CDOs all experiencing problems at approx the same time (end of "teaser" period) precipitated a lack of confidence in all toxic CDOs ... leading to the enormous write-downs.

It wasn't necessarily that all toxic CDOs were having significant problems ... it was that toxic CDOs were designed to obfuscate the underlying value ... and when some toxic CDOs started to have significant problems ... then all toxic CDOs became suspect. This is somewhat related to the use of the term toxic for CDOs ... making an analogy to contaminated consumer food/drug products. All of it can get pulled off the shelves and dumped ... even if only an extremely small percentage is affected.

long-winded, decade old post referring to needing visibility into underlying value of CDO-like instruments
https://www.garlic.com/~lynn/aepay3.htm#riskm

past reference to an estimate that 1000 people are responsible for 80% of the current mess (and it could go a long way if the gov. could figure out how they would lose their jobs)
https://www.garlic.com/~lynn/2008g.html#32 independent appraisers
https://www.garlic.com/~lynn/2008g.html#44 Fixing finance
https://www.garlic.com/~lynn/2008g.html#52 IBM CEO's remuneration last year ?
https://www.garlic.com/~lynn/2008g.html#66 independent appraisers
https://www.garlic.com/~lynn/aadsm28.htm#57 Who do we have to blame for the mortgage crisis in America?

also reference to analogy of toxic CDOs to subverting "observe" in Boyd's OODA-loop
https://www.garlic.com/~lynn/2008g.html#4 CDOs subverting Boyd's OODA-loop

the rush to dump all toxic CDOs (analogous to dumping contaminated consumer food/drug products) caught off guard the investment bankers who were leveraged 40-50 times ... an estimate of possibly something like $45 trillion in such instruments. There may have been only one trillion of actual investment ... with 20-40% (or larger) writedowns from dumping toxic CDOs ... these investment bankers could be wiped out.

the mess was aggravated by the repeal of Glass-Steagall a decade ago. In the wake of crash of '29, Glass-Steagall was put in place to keep the unregulated risky activity of investment bankers from contaminating the safety&soundness of regulated banking.

other past posts mentioning Glass-Steagall:
https://www.garlic.com/~lynn/2008b.html#12 Computer Science Education: Where Are the Software Engineers of Tomorrow?
https://www.garlic.com/~lynn/2008c.html#11 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#87 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008d.html#85 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008e.html#42 Banks failing to manage IT risk - study
https://www.garlic.com/~lynn/2008e.html#59 independent appraisers
https://www.garlic.com/~lynn/2008f.html#1 independent appraisers
https://www.garlic.com/~lynn/2008f.html#13 independent appraisers
https://www.garlic.com/~lynn/2008f.html#17 independent appraisers
https://www.garlic.com/~lynn/2008f.html#43 independent appraisers
https://www.garlic.com/~lynn/2008f.html#46 independent appraisers
https://www.garlic.com/~lynn/2008f.html#53 independent appraisers
https://www.garlic.com/~lynn/2008f.html#71 Bush - place in history
https://www.garlic.com/~lynn/2008f.html#73 Bush - place in history
https://www.garlic.com/~lynn/2008f.html#75 Bush - place in history
https://www.garlic.com/~lynn/2008f.html#79 Bush - place in history
https://www.garlic.com/~lynn/2008f.html#94 Bush - place in history
https://www.garlic.com/~lynn/2008f.html#96 Bush - place in history
https://www.garlic.com/~lynn/2008f.html#97 Bush - place in history
https://www.garlic.com/~lynn/2008g.html#2 Bush - place in history
https://www.garlic.com/~lynn/2008g.html#16 independent appraisers
https://www.garlic.com/~lynn/2008g.html#51 IBM CEO's remuneration last year ?
https://www.garlic.com/~lynn/2008g.html#57 Credit crisis could cost nearly $1 trillion, IMF predicts
https://www.garlic.com/~lynn/2008g.html#59 Credit crisis could cost nearly $1 trillion, IMF predicts
https://www.garlic.com/~lynn/2008g.html#67 independent appraisers

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Two views of Microkernels (Re: Kernels

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Two views of Microkernels (Re: Kernels
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Sat, 26 Apr 2008 09:29:45
Greg Menke <gusenet@comcast.net> writes:
How can a "proof checker" possibly detect code that computes additional code at runtime? By that I mean, the proof checker observes a perfectly reasonable set of instructions legitimately messing around with memory, but the thing it doesn't realize is the data that the program generates is another program- which can then be executed without having been examined by the proof checker. To discover this behavior the proof checker will have to run the program itself (basically it will have to scrub all possible code paths- even those which appear to be impossible to reach). Oh, and by the way, the proof checker has to be able to detect the legitimate use of run-time code generation and differentiate it from the unsafe.

the "proof checker" checks the original code ... and directly executed code isn't allowed (unless it has been run thru some sort of "proof checker"). in the 801/risc case from the 70s with cp.r & pl.8 ... cp.r would only allow load/run/execute of valid (acceptable) code produced by the pl.8 compiler. in this scenario ... the pl.8 compiler effectively has the proof checker integrated with code generation ... and cp.r would only allow pl.8-generated code to load/run/execute.

there are always programs that exhibit different executable behavior because of different inputs. there is the scenario about how the "proof checker" handles the case of all possible inputs. this scenario can then be extended to an interpreter where the possible inputs form some sort of programming language.

re:
https://www.garlic.com/~lynn/2008h.html#14 Two views of Microkernels (Re: Kernels
https://www.garlic.com/~lynn/2008h.html#25 Two views of Microkernels (Re: Kernels
https://www.garlic.com/~lynn/2008h.html#27 Two views of Microkernels (Re: Kernels

a somewhat related side-effect of 801/risc was separate (non-coherent) I&D (store-in) caches. Compiler/loader generated code would show up in the D-cache. To even get it to the I-cache ... it first has to be flushed to main memory (in order for the I-cache to be able to fetch it). Loaders (even in the unix paradigm) on 801/risc had to use a special instruction to flush the D-cache back to real storage ... before there was any chance that the I-cache would see it (and it would therefore be available to the instruction execution unit).
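the same split I/D cache issue shows up on many modern risc machines; a minimal sketch of what a loader (or JIT) must do, using the gcc/clang builtin rather than the 801's own instruction (the function and buffer names are illustrative):

```c
/* Sketch of the split I/D cache problem described above: code written
 * through the data cache must be flushed/synchronized before the
 * instruction-fetch side can reliably see it. On gcc/clang the
 * portable spelling is __builtin___clear_cache; the 801 loaders used
 * an equivalent special instruction for the same purpose. */
#include <string.h>
#include <stddef.h>

/* hypothetical illustration: copy machine code into an executable
 * buffer, then synchronize caches before jumping to it */
static void install_code(void *exec_buf, const void *code, size_t len) {
    memcpy(exec_buf, code, len);   /* stores land in the D-cache */
    __builtin___clear_cache((char *)exec_buf,
                            (char *)exec_buf + len);
    /* only after this point may the caller branch into exec_buf */
}
```

on coherent-I/D machines (e.g. x86) the builtin compiles to nothing, but omitting it makes the loader silently wrong on split-cache hardware ... exactly the trap the 801 loaders had to avoid.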

misc. past 801/risc posts
https://www.garlic.com/~lynn/subtopic.html#801

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Two views of Microkernels (Re: Kernels

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Two views of Microkernels (Re: Kernels
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Sat, 26 Apr 2008 21:30:13
peter@taronga.com (Peter da Silva) writes:
OSI is no more a walled garden than TCP is. No network protocol stack that isn't built around a cryptographic layer can be a walled garden if you have physical access to the network. And you have to have physical access to the network to use the network.

I've used OSI TP0/CLNP and TP4/CONS networks, and they were just as open as TCP/IP.


OSI was a product of the copper telco mentality of traditional homogeneous network service.

the internal network was larger than the arpanet/internet from just about the beginning until approx. mid-85.
https://www.garlic.com/~lynn/subnetwork.html#internalnet

i've claimed a big contribution was that the internal network nodes had a form of gateway implementation from the beginning ... something that arpanet/internet didn't get until the great changeover to internetworking protocol on 1/1/83.
https://www.garlic.com/~lynn/subnetwork.html#internet

OSI never did. ISO further exacerbated the situation by mandating that there could not be any standards that didn't conform to OSI model. we were involved in trying to interest X3S3.3 (us/asc iso chartered standards body responsible for osi level 3&4 standards) in HSP (high-speed protocol). it wasn't possible because:
1) HSP went directly from transport interface to lan/mac interface ... bypassing level3/level4 interface ... violating OSI

2) HSP supported LAN/MAC interface which sits somewhere in the middle of level3 ... something that wasn't defined in OSI model

3) HSP supported internetworking ... something that doesn't exist in OSI model


misc. past posts about HSP and getting rejected by X3S3.3 because of ISO requirements to conform to OSI model
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

For a little other drift ... ISO doesn't require that a standard actually have an implementation and/or even be implementable. IETF requires that there be interoperable implementations before progressing in the standards process.

ISO charges for standards documents

IETF standards are openly available ... for instance my IETF RFC index
https://www.garlic.com/~lynn/rfcietff.htm

RFC summaries show up in the lower frame; clicking on the ".txt=nnnn" field retrieves the actual RFC.

for additional topic drift ... recent posts with references to my IETF RFC index
https://www.garlic.com/~lynn/2008h.html#2 The original telnet specification?
https://www.garlic.com/~lynn/2008h.html#13 How fast is XCF

for other recent post with some additional topic drift
https://www.garlic.com/~lynn/2008h.html#31 VTAM R.I.P. -- SNATAM anyone?

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Two views of Microkernels (Re: Kernels

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Two views of Microkernels (Re: Kernels
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Sat, 26 Apr 2008 21:49:55
Paul Rubin <http://phr.cx@NOSPAM.invalid> writes:
For the 432, an FPGA emulation would probably be overkill, since a software emulation on today's PC's would probably be many times faster than the original.

re:
https://www.garlic.com/~lynn/2008h.html#14 Two views of Microkernels (Re: Kernels
https://www.garlic.com/~lynn/2008h.html#25 Two views of Microkernels (Re: Kernels
https://www.garlic.com/~lynn/2008h.html#27 Two views of Microkernels (Re: Kernels
https://www.garlic.com/~lynn/2008h.html#33 Two views of Microkernels (Re: Kernels
https://www.garlic.com/~lynn/2008h.html#34 Two views of Microkernels (Re: Kernels

intel people gave a presentation on the 432 at sigops ('81?). one of the things they mentioned was that the 432 had several significantly complex functions defined in hardware, implemented directly in silicon. the complex functions were subject to some amount of bugs, and the 432 process was running into significant problems with producing corrected silicon.

misc. past posts mentioning 432
https://www.garlic.com/~lynn/2000d.html#57 iAPX-432 (was: 36 to 32 bit transition
https://www.garlic.com/~lynn/2000d.html#62 iAPX-432 (was: 36 to 32 bit transition
https://www.garlic.com/~lynn/2000e.html#6 Ridiculous
https://www.garlic.com/~lynn/2000f.html#48 Famous Machines and Software that didn't
https://www.garlic.com/~lynn/2001.html#54 FBA History Question (was: RE: What's the meaning of track overfl ow?)
https://www.garlic.com/~lynn/2001g.html#36 What was object oriented in iAPX432?
https://www.garlic.com/~lynn/2001k.html#2 Minimalist design (was Re: Parity - why even or odd)
https://www.garlic.com/~lynn/2002d.html#27 iAPX432 today?
https://www.garlic.com/~lynn/2002d.html#46 IBM Mainframe at home
https://www.garlic.com/~lynn/2002l.html#19 Computer Architectures
https://www.garlic.com/~lynn/2002o.html#5 Anyone here ever use the iAPX432 ?
https://www.garlic.com/~lynn/2002q.html#11 computers and alcohol
https://www.garlic.com/~lynn/2003.html#5 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#6 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003c.html#17 difference between itanium and alpha
https://www.garlic.com/~lynn/2003e.html#54 Reviving Multics
https://www.garlic.com/~lynn/2003e.html#55 Reviving Multics
https://www.garlic.com/~lynn/2003e.html#56 Reviving Multics
https://www.garlic.com/~lynn/2003m.html#23 Intel iAPX 432
https://www.garlic.com/~lynn/2003m.html#24 Intel iAPX 432
https://www.garlic.com/~lynn/2003m.html#47 Intel 860 and 960, was iAPX 432
https://www.garlic.com/~lynn/2003n.html#45 hung/zombie users ... long boring, wandering story
https://www.garlic.com/~lynn/2004d.html#12 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004e.html#52 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004q.html#60 Will multicore CPUs have identical cores?
https://www.garlic.com/~lynn/2004q.html#64 Will multicore CPUs have identical cores?
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/2005d.html#64 Misuse of word "microcode"
https://www.garlic.com/~lynn/2005k.html#46 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005q.html#31 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2006c.html#47 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006n.html#42 Why is zSeries so CPU poor?
https://www.garlic.com/~lynn/2006n.html#44 Any resources on VLIW?
https://www.garlic.com/~lynn/2006p.html#15 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006s.html#57 Turbo C 1.5 (1987)
https://www.garlic.com/~lynn/2006t.html#7 32 or even 64 registers for x86-64?
https://www.garlic.com/~lynn/2007d.html#61 ISA Support for Multithreading
https://www.garlic.com/~lynn/2007s.html#17 Oddly good news week: Google announces a Caps library for Javascript
https://www.garlic.com/~lynn/2007s.html#36 Oracle Introduces Oracle VM As It Leaps Into Virtualization
https://www.garlic.com/~lynn/2008c.html#78 CPU time differences for the same job
https://www.garlic.com/~lynn/2008d.html#54 Throwaway cores

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Two views of Microkernels (Re: Kernels

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Two views of Microkernels (Re: Kernels
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Sun, 27 Apr 2008 08:52:03
Pat Farrell <fishytv@pfarrell.com> writes:
But wasn't DEC (or Digital) a major player in OSI? With DECnet directly mapping into OSI, and DEC trying to sell us all on DECnet rather than TCP/IP?

re:
https://www.garlic.com/~lynn/2008h.html#34 Two views of Microkernels (Re: Kernels

remember that some people believed the fed. gov. (GOSIP) mandates about the internet being replaced with osi, etc.

old post with old INTEROP 88 announcement with OSI comments, also comments from rfc2441 about GOSIP, osi, etc.
https://www.garlic.com/~lynn/2001i.html#5 YKYGOW...

other posts mentioning various things about INTEROP 88 (sort of during heyday of some believing in fed. mandates; they didn't understand how fundamentally important internetworking is):
https://www.garlic.com/~lynn/subnetwork.html#interop88

some past posts mentioning dec & osi (including some old decnet/OSI articles):
https://www.garlic.com/~lynn/2001e.html#17 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001e.html#32 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001e.html#34 Blame it all on Microsoft
https://www.garlic.com/~lynn/2003c.html#30 difference between itanium and alpha
https://www.garlic.com/~lynn/2003e.html#71 GOSIP

and as before, misc. past posts mentioning HSP, OSI:
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

for other topic drift ... huge explosion in mid-range market sales; both vax/vms and 43xx machines
https://www.garlic.com/~lynn/2001m.html#15 departmental servers

post about a decade-plus of vax market numbers, sliced & diced by model, year, us/non-us, etc. ... one can see that by the mid-80s the mid-range market was starting to decline (giving way to workstations and large PCs)
https://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction

a big reason that 43xx outsold vax was large corporate orders of machines in quantities of multiple hundreds (until they started to give way to workstations and large PCs).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Fixed-Point and Scientific Notation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fixed-Point and Scientific Notation
Newsgroups: bit.listserv.ibm-main
Date: Sun, 27 Apr 2008 09:02:19
dsh1@TAMPABAY.RR.COM (Don Higgins) writes:
DFP Decimal IEEE 754r FP (decimal32 / decimal64 / decimal128):

                     decimal32   decimal64   decimal128
Significant digits       7          16           34
Maximum exponent        96         384         6144

All of these formats are supported by z390 on Windows and Linux with CTD and CFD conversion routine macros and supervisor calls for converting between EBCDIC/ASCII character scientific notation and any of the above binary formats. All corrections and comments welcome.

Don Higgins don@higgins.net www.z390.org


Mike gave a talk on 754r decimal FP thursday at the HILLGANG meeting ... it included some interesting background and performance numbers about the decimal FP justification (as well as how long things can get dragged out in the standards process).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Boyd again

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Boyd again
Newsgroups: alt.folklore.computers
Date: Sun, 27 Apr 2008 19:21:09
Michael Black <et472@ncf.ca> writes:
Well he didn't show up there because he was discussed here.

We all face a constant wave of information flowing at us. Most of it flows around us, because it either doesn't interest us or has no relevance.

Two weeks ago, any reference to the guy (I have no idea who you are talking about) would have flowed around you because it didn't mean anything.

Now, it does, so it hits you in the head rather than flowing around.

A different analogy would be that your filters changed, so now he can be let in.


i've mentioned boyd sporadically ... lots of past posts mentioning boyd ... many in this n.g. going back to '94
https://www.garlic.com/~lynn/subboyd.html#boyd
and some number of Boyd URLs from around the web
https://www.garlic.com/~lynn/subboyd.html#boyd2

google search just now claims that there are approx. 27,300 english pages

i was introduced to boyd in the early 80s and was fortunate enuf to sponsor some of his briefings.

at one time Boyd ran possibly the largest datacenter in the world ... or at least in the far east at "spook base" (one of the biographies mentions the datacenter as a $2.5b windfall for ibm).

Boyd has been credited with the battle plan for the earlier gulf conflict ... and the VP has been quoted as saying that a problem going into the current gulf conflict was that Boyd had died in '97.

post last wed in this n.g.
https://www.garlic.com/~lynn/2008h.html#21 To the horror of some in the Air Force

referencing Gates recently paying tribute to Boyd mentioned in this time magazine article (21apr2008):
http://www.time.com/time/nation/article/0,8599,1733747,00.html

there have been articles that even tho Boyd was a retired air force col (and credited with at least the f16 and also instrumental in design of several other planes) ... when he was buried at arlington ... it was the marines that showed up ... not the air force ... and his works went to the marine museum.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

IT vet Gordon Bell talks about the most influential computers

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject:  IT vet Gordon Bell talks about the most influential computers
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Tue, 29 Apr 2008 09:19:17
Anne & Lynn Wheeler <lynn@garlic.com> writes:
for other topic drift ... huge explosion in mid-range market sales; both vax/vms and 43xx machines
https://www.garlic.com/~lynn/2001m.html#15 departmental servers

post about a decade-plus of vax market numbers, sliced & diced by model, year, us/non-us, etc. ... one can see that by the mid-80s the mid-range market was starting to decline (giving way to workstations and large PCs)
https://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction

a big reason that 43xx outsold vax was large corporate orders of machines in quantities of multiple hundreds (until they started to give way to workstations and large PCs).


re:
https://www.garlic.com/~lynn/2008h.html#36 Two views of Microkernels (Re: Kernels

IT vet Gordon Bell talks about the most influential computers
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9080499

from above:
In an interview with Computerworld, Bell talked about his favorite computer of all time, the state of telepresence and what he wishes people knew about his good friend and Microsoft research colleague Jim Gray who was lost at sea last year.

... snip ...

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

3277 terminals and emulators

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 3277 terminals and emulators
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 29 Apr 2008 18:47:45
patrick.okeefe@WAMU.NET (Patrick O'Keefe) writes:
I thought the AS/400 grew out of the 8100, but I suppose it may have had mixed parentage. (Or I may be remembering wrong.)

previous post in thread:
https://www.garlic.com/~lynn/2008h.html#9 3277 terminals and emulators

the folklore is that after future system project was terminated
https://www.garlic.com/~lynn/submain.html#futuresys

also this old post:
https://www.garlic.com/~lynn/2001f.html#33

some number of them retreated to rochester and did the s/38.

i've claimed that somewhat in parallel, the 801/risc project went on ... with an objective of going to the exact opposite extreme of future system hardware complexity.
https://www.garlic.com/~lynn/subtopic.html#801

somewhere along the line, a project was started to replace the large variety of internal microprocessors with 801/risc. there were "fort knox" and iliad chips. One of these "iliad" efforts was to replace all the microprocessors in entry and mid-range 370s with (801/risc) iliad chips; the 4381 (4341 follow-on) microprocessor originally started out to be an iliad chip. an iliad chip was also going to be used for the as/400 microprocessor (follow-on to the s/38). Both efforts were stillborn. Custom cisc chips were eventually done for both the 4381 as well as for the as/400.

8100 used a totally different chip, uc.5 ... significantly underpowered.

there is old email about the MIT Lisp machine project asking IBM for 801/risc chips for their machine ... and being offered 8100 instead; old email reference:
https://www.garlic.com/~lynn/2006c.html#email790711
in this post
https://www.garlic.com/~lynn/2006c.html#3 Architectural support for programming languages

as an aside ... at one point they sent my wife in to audit the 8100 effort and she recommended the whole thing be killed off.

much later there was the power/pc project (i.e. somerset, joint with ibm, motorola, apple, et al) ... and as/400 finally did move off a cisc processor to 801/risc (power/pc).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

IT vet Gordon Bell talks about the most influential computers

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IT vet Gordon Bell talks about the most influential computers
Newsgroups: alt.folklore.computers
Date: Tue, 29 Apr 2008 18:57:15
eugene@cse.ucsc.edu (Eugene Miya) writes:
Yeah, we are working on the Memorial.

re:
https://www.garlic.com/~lynn/2008h.html#39 IT vet Gordon Bell talks about the most influential computers

anne got our plane tickets nearly two months ago ... it is a little more difficult coming from the east coast.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

The Return of Ada

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Return of Ada
Newsgroups: alt.folklore.computers
Date: Tue, 29 Apr 2008 23:06:46
Larry Elmore <ljelmore@verizon.spammenot.net> writes:
And if employees could get away with demanding $1M/yr for 1 hr of work they would. So what? People act in their own self interest. *Why* can businesses offer low wages and still get employees? It's called supply and demand. To the extent that government can legislate wages upward, it's only at the cost of reducing overall employment. If you don't believe that, then why can't the government just set a minimum wage of $20 or even $50/hr and make everyone at least comfortably middle class?

re:
https://www.garlic.com/~lynn/2008h.html#17 The Return of Ada
https://www.garlic.com/~lynn/2008h.html#19 The Return of Ada

i.e. gov/public have to make up the difference with the necessary social services to support workers earning substandard wages (in effect a gov. subsidy to their employers).

and from the people that generated hundreds of billions of dollars in losses in the write-down sweepstakes

The Fed's Too Easy on Wall Street
http://www.businessweek.com/#missing-article

from above:
Here's a staggering figure to contemplate: New York City securities industry firms paid out a total of $137 billion in employee bonuses from 2002 to 2007, according to figures compiled by the New York State Office of the Comptroller. Let's break that down: Wall Street honchos earned a bonus of $9.8 billion in 2002, $15.8 billion in 2003, $18.6 billion in 2004, $25.7 billion in 2005, $33.9 billion in 2006, and $33.2 billion in 2007.

... snip ...

and it is beginning to look like the gov. may be subsidizing them also.

misc. past posts mentioning the write-downs
https://www.garlic.com/~lynn/2008.html#90 Computer Science Education: Where Are the Software Engineers of Tomorrow?
https://www.garlic.com/~lynn/2008c.html#11 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008f.html#14 independent appraisers
https://www.garlic.com/~lynn/2008f.html#15 independent appraisers
https://www.garlic.com/~lynn/2008f.html#32 independent appraisers
https://www.garlic.com/~lynn/2008f.html#75 Bush - place in history
https://www.garlic.com/~lynn/2008g.html#12 independent appraisers
https://www.garlic.com/~lynn/2008g.html#13 independent appraisers
https://www.garlic.com/~lynn/2008g.html#20 independent appraisers
https://www.garlic.com/~lynn/2008g.html#32 independent appraisers
https://www.garlic.com/~lynn/2008g.html#36 Lehman sees banks, others writing down $400 bln
https://www.garlic.com/~lynn/2008g.html#57 Credit crisis could cost nearly $1 trillion, IMF predicts
https://www.garlic.com/~lynn/2008g.html#67 independent appraisers
https://www.garlic.com/~lynn/2008h.html#0 independent appraisers
https://www.garlic.com/~lynn/2008h.html#1 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008h.html#28 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008h.html#32 subprime write-down sweepstakes

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

handling the SPAM on this group

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: handling the SPAM on this group
Newsgroups: alt.folklore.computers
Date: Thu, 01 May 2008 07:55:33
Peter Flass <Peter_Flass@Yahoo.com> writes:
Oh good, religious war! ISPF here, XEdit on VM and THE on all unix platforms. Only the OS/2 Enhanced Editor (EPM) is better.

standard cms 3270 fullscreen support in the 70s involved using the screen for output ... but input was still simulated terminal input (not fullscreen). I had done something similar in the 60s at the university ... modifying the (cp67) cms editor to use the 2250 (graphics display) for fullscreen output.

the first cms 3270 fullscreen editor with both fullscreen input & output was edgar. there were then internal wars between edgar and the internally developed RED. then there were wars over whether xedit or RED should be released as a product (RED having been around longer, being much more mature, with more features and better performance). however, the internal people responsible for RED were a lot further away from the product group. misc. past posts:
https://www.garlic.com/~lynn/2002p.html#39 20th anniversary of the internet (fwd)
https://www.garlic.com/~lynn/2005f.html#34 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2006n.html#55 The very first text editor
https://www.garlic.com/~lynn/2006t.html#15 more than 16mbyte support for 370
https://www.garlic.com/~lynn/2006u.html#26 Assembler question

with old email refs:
https://www.garlic.com/~lynn/2002p.html#email821122
https://www.garlic.com/~lynn/2005f.html#email800121
https://www.garlic.com/~lynn/2006n.html#email810531
https://www.garlic.com/~lynn/2006t.html#email800121
https://www.garlic.com/~lynn/2006u.html#email790606
https://www.garlic.com/~lynn/2006u.html#email800429

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Two views of Microkernels (Re: Kernels

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Two views of Microkernels (Re: Kernels
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Thu, 01 May 2008 09:17:30
Quadibloc <jsavard@ecn.ab.ca> writes:
A knee-jerk response, which doesn't cover what you are really talking about, would be:

Compilers are programs which run entirely in user mode on typical computer systems. They take a text file as input, and produce a binary file as output, and are not distinguishable from other data-processing programs. The only exception to this are the very unusual computers from Burroughs.

In UNIX, though, but not on the old mainframe operating systems that preceded it, files actually have to be marked as "executable", so this distinction isn't entirely exotic and rare.

But your point is, of course, that _since_ a compiler has enormous power to deceive the programmer about what the computer is actually going to do, compilers have to be trusted; they're a point of attack against the system.

On the IBM mainframe running MTS that I used in my student days, this was taken care of in a way - even though the system lacked the concept of marking files as executable.

$RUN *FORTG 0=PROG.OBJ

The little asterisk in front of FORTG meant that IBM's FORTRAN IV level G compiler, adapted to run under MTS instead of OS/360, wasn't a file in my account's disk space; it was a file that belonged to a certain privileged system account. So just anybody couldn't replace the compiler with a phony one.

But, in those days, since compilers were large and complicated programs, the idea that a production compiler for FORTRAN, COBOL, or PL/I could have been *mathematically proven* to produce correct object code for all possible valid program inputs would have gotten you laughed at.

Compilers were part of the security model, but including them there meant that attention was paid to avoid malicious tampering with them, not that they were expected to be perfect - although they did have to be very good in practice.


something similar existed under CMS ... the default was a standard search path ... which transparently ran thru the different types of executables ... which might share the same filename ... i.e. resolution was to start with the first possible kind of executable and run the search path ... if not found, try the next executable type. CMS defaulted the local area to the start of the search path ... with system areas later in the search path.

an EXEC file placed on the user's local disk would be found before the system compiler. This could be used by the user to provide some sort of custom preprocessing before the EXEC got around to invoking the actual/real system executable (like a compiler).

this was also identified as an attack vector ... reading a network file which might be an EXEC file carrying the name of some standard system executable.

BITNET
https://www.garlic.com/~lynn/subnetwork.html#bitnet

also had the xmas worm (a year before the morris worm on the internet) which when loaded from the network and executed ... not only displayed a fullscreen xmas message ... but also sent itself to all your friends
https://www.garlic.com/~lynn/2007u.html#87 CompUSA to Close after Jan. 1st 2008
https://www.garlic.com/~lynn/2008c.html#2 folklore indeed
https://www.garlic.com/~lynn/2008d.html#58 Linux zSeries questions
https://www.garlic.com/~lynn/2008g.html#26 CA ESD files Options

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

How can companies decrease power consumption of their IT infrastructure?

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: How can companies decrease power consumption of their IT infrastructure?
Blog: Information Storage
Date: Thu, 01 May 2008
the 90s saw the killer micro syndrome of throwing hardware at problems ... in lieu of scarce/expensive skills to understand/analyze the problems. Going on two decades of that approach has resulted in large deployments of significantly underutilized hardware.

the current buzzword is the 40+ yr old technology, virtualization ... which offers the opportunity for significant consolidation w/o requiring significant additional skills/resources ... with some scenarios of institutions going from 30,000 servers to 3,000, and a similar reduction in datacenters.

the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

had done the original virtual machine implementation, cp40, on a 360/40 with custom-modified virtual memory hardware. cp40 morphed into cp67 when the standard 360/67 (with virtual memory hardware) became available. last week of jan68, 3 people came out from the science center to install cp67 at the univ.

for the fun of it ... recent news item

Banks turning to virtualisation
http://www.finextra.com/fullstory.asp?id=18404

it is the magic pill that will fix whatever ails you

we had been working on ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

scale-up ... as referenced in these old emails
https://www.garlic.com/~lynn/lhwemail.html#medusa

which also involved compacting more units into a rack ... making them thinner ... and also creating air-flow/heat-removal problems. the next generation was smaller components in only half-wide rack units. The next generation (after that) got even smaller ... but rather than mounting them four-wide horizontally, they were mounted vertically in the rack and "blades" were born. This was still packing more and more computing into a smaller physical area.

As mentioned in this old post
https://www.garlic.com/~lynn/95.html#13

the related ha/cmp scale-up activity was redirected into purely numerical-intensive operation (rather than general computing). lots of other vendors also got into this market ... some of which was somewhat associated with the "GRID" buzzword. It was still packing more & more computing into smaller & smaller space.

Later, as the numerical-intensive market matured ... vendors started looking to leverage all the technology (back) into the wider market. One of the pitches to the wider market was (physical space) server consolidation leveraging the GRID and BLADE technologies.

It was really with the marrying of virtualization and grid/blade server consolidation (of several generations of underutilized machines) that the "green" theme came into its own (significantly fewer servers ... as opposed to just the same number of servers in a much smaller footprint).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Whitehouse Emails Were Lost Due to "Upgrade"

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Whitehouse Emails Were Lost Due to "Upgrade"
Newsgroups: alt.folklore.computers
Date: Fri, 02 May 2008 15:11:26
Whitehouse Emails Were Lost Due to "Upgrade"
http://news.slashdot.org/news/08/04/30/1359209.shtml

and

The case of the missing e-mail
http://arstechnica.com/articles/culture/bush-lost-e-mails.ars

and for some more whitehouse email from 25yrs or so ago (that weren't lost)
http://www.cnn.com/SPECIALS/cold.war/episodes/18/archive/

from nearly the start in the 70s, I had been quite rabid about backups and backups of backups and backups of backups of the backups. There has been speculation that that orientation carried over to PROFS deployments. In any case, that supposedly was a major factor in the above reference.

somebody told me in the early 90s that similar email systems had been deployed at numerous gov. agencies.

during the period I was getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk
and working on system/r (original relational/sql implementation):
https://www.garlic.com/~lynn/submain.html#systemr
and doing an internal "sjr/vm" distribution ... a recent refs:
https://www.garlic.com/~lynn/2006u.html#26 Assembler question
with this old email:
https://www.garlic.com/~lynn/2006u.html#email800501

I had also implemented what I called CMSBACK ... some old email refs
https://www.garlic.com/~lynn/lhwemail.html#cmsback
and
https://www.garlic.com/~lynn/2006t.html#email791025
https://www.garlic.com/~lynn/2006w.html#email801211

which was deployed internally at several internal locations ... including the internal (vm370-based) HONE systems that provided world-wide sales & marketing support
https://www.garlic.com/~lynn/subtopic.html#hone

misc. past posts mentioning backup and/or archive
https://www.garlic.com/~lynn/submain.html#backup

CMSBACK went thru several internal releases and then morphed into a customer release under the product name workstation datasave facility. The product name then morphed into ADSM ... and then morphed again; it is currently sold as TSM (tivoli storage manager).

current Tivoli storage manager reference:
http://www-306.ibm.com/software/tivoli/products/storage-mgr/

reference to virtual machine use in the gov. even much earlier:
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

as an undergraduate in the 60s, i did a lot of system enhancements that were picked up and shipped in the product. i even got requests from ibm for some specific changes.

many years later ... having learned about some of the customers, i interpreted some of the change requests as being of a security nature and as possibly having originated from some such gov. agency.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Sat, 03 May 2008 08:00:06
Walter Bushell <proto@xxx.com> writes:
Now I was thinking of a plan to paint my whiskers green and then always carry so big a fan that they could not be seen.

Wouldn't it be better to use an OS that allowed large amounts of I/O to start with? You cannot do more I/O than the hardware allows and virtualization just puts in another level of overhead.


non-hypervisor systems tend to more easily get bloated with large amounts of overhead ... with processing becoming the bottleneck.

hypervisor systems tend to more frequently be under large amounts of pressure to constantly concentrate on overhead ... possibly because of the constant "with" & "without" thruput comparisons. hypervisors also tend to be a lot less complex and frequently able to better maintain a KISS philosophy (helping/contributing to lower overhead).

some of the virtual appliance type of efforts ... have realized that if they aren't expected to constantly be all things to everybody ... they can be stripped down for much more efficient operation.

the net result is that a combination of KISS above the hypervisor layer and KISS in the hypervisor layer ... results in an extremely lean & mean fighting machine ... much more efficient than single, large bloated monolithic systems ... and in aggregate, still able to provide all the comparable capabilities.

I was able to do this several times in the 70s, demonstrating that a hypervisor oriented solution had significantly higher thruput than comparable non-hypervisor solutions.

relatively recent articles/references about increasingly inefficient bloated monolithic systems and/or hypervisors being touted as solution
https://www.garlic.com/~lynn/2007i.html#26 Latest Principles of Operation
https://www.garlic.com/~lynn/2007o.html#3 Hypervisors May Replace Operating Systems As King Of The Data Center
https://www.garlic.com/~lynn/2007q.html#49 Slimmed Down Windows Offers Glimpse Into Microsoft's Virtualization Ambitions
https://www.garlic.com/~lynn/2008e.html#11 Kernels

past posts mentioning virtual appliance:
https://www.garlic.com/~lynn/2006t.html#46 To RISC or not to RISC
https://www.garlic.com/~lynn/2006w.html#25 To RISC or not to RISC
https://www.garlic.com/~lynn/2006x.html#6 Multics on Vmware ?
https://www.garlic.com/~lynn/2006x.html#8 vmshare
https://www.garlic.com/~lynn/2007i.html#36 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007k.html#26 user level TCP implementation
https://www.garlic.com/~lynn/2007k.html#48 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007m.html#67 Operating systems are old and busted
https://www.garlic.com/~lynn/2007m.html#70 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007q.html#25 VMware: New King Of The Data Center?
https://www.garlic.com/~lynn/2007s.html#4 Why do we think virtualization is new?
https://www.garlic.com/~lynn/2007s.html#26 Oracle Introduces Oracle VM As It Leaps Into Virtualization
https://www.garlic.com/~lynn/2007s.html#35 Oracle Introduces Oracle VM As It Leaps Into Virtualization
https://www.garlic.com/~lynn/2007u.html#39 New, 40+ yr old, direction in operating systems
https://www.garlic.com/~lynn/2007u.html#41 New, 40+ yr old, direction in operating systems
https://www.garlic.com/~lynn/2007u.html#81 IBM mainframe history, was Floating-point myths
https://www.garlic.com/~lynn/2007v.html#75 virtual appliance
https://www.garlic.com/~lynn/2007v.html#80 software preservation volunteers ( was Re: LINC-8 Front Panel Questions)
https://www.garlic.com/~lynn/2008.html#59 old internal network references
https://www.garlic.com/~lynn/2008b.html#39 folklore indeed
https://www.garlic.com/~lynn/2008b.html#52 China's Godson-2 processor takes center stage
https://www.garlic.com/~lynn/2008c.html#2 folklore indeed
https://www.garlic.com/~lynn/2008c.html#55 Kernels
https://www.garlic.com/~lynn/2008e.html#11 Kernels
https://www.garlic.com/~lynn/2008e.html#15 Kernels
https://www.garlic.com/~lynn/2008g.html#6 It's Too Darn Hot

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

subprime write-down sweepstakes

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: subprime write-down sweepstakes
Newsgroups: alt.folklore.computers
Date: Sat, 03 May 2008 08:52:17
one of the references in the wiki entry for subprime

Understanding the Subprime Mortgage Crisis
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1020396

abstract from above:
Using loan-level data, we analyze the quality of subprime mortgage loans by adjusting their performance for differences in borrower characteristics, loan characteristics, and house price appreciation since origination. We find that the quality of loans deteriorated for six consecutive years before the crisis and that securitizers were, to some extent, aware of it. We provide evidence that the rise and fall of the subprime mortgage market follows a classic lending boom-bust scenario, in which unsustainable growth leads to the collapse of the market. Problems could have been detected long before the crisis, but they were masked by high house price appreciation between 2003 and 2005.

... snip ...

the wiki article mentions that traditionally subprime applied to borrowers that had bad credit and didn't meet normal standards and/or property that didn't meet normal standards.

however, a lot of the latest round of toxic/distressed subprime mortgages were adjustable rate mortgages with very low entry teaser rate, interest only payments, possibly lack of documentation, etc. furthermore, wiki cites a reference that in this round, 61% of these subprime mortgages went to people that actually had good (qualifying) credit ratings.

so there seems to be combination of factors feeding off each other.

mortgage originators could use toxic CDOs to immediately unload the mortgages ... as a result they no longer had to be concerned about the quality of the loans they were making (toxic CDOs had been used two decades ago in the S&L crisis, because they could be used to obfuscate the underlying quality/value).

possibly a huge number of speculators (61 percent?) ... heavily leveraged using the low teaser rates and interest only payments ... to acquire as many properties as possible ... figuring that they would unload before the teaser period ended (anticipating huge profit).

a large number of speculators would create the artificial impression of much larger demand than actually existed ... since a large number of "unsold" properties would temporarily disappear (I've mentioned before there is some analogy to hoarding ... somewhat akin to the recent run on "rice"). With artificially low "unsold" properties ... builders would be motivated to increase production.

So one could claim that while the category of subprime loans had traditionally been either borrowers with bad credit rating or bad property ... in this round ... the adjustable rate, interest only, extremely low teaser rate mortgages (packaged for "subprime" borrowers) ... were heavily leveraged by a large number of speculators (with otherwise good credit rating). The artificially inflated demand (by speculators) would result in over production ... which will take some time to settle out after the inevitable "bust".

The toxic CDOs used to (obfuscate the underlying quality/value and) unload the mortgages were acquired by investment bankers ... in an iterative process where they became heavily leveraged (with possibly only 1-2 percent actual capital).

In effect, the heavily leveraged toxic CDOs caused the inevitable bust to propagate out into a much wider community (instead of being limited to the original mortgage originators). The use of toxic CDOs also allowed the mortgage originators to significantly increase the number of such mortgages they could write ... and to continue writing such mortgages over a longer period of time.

misc. recent past posts mentioning write-downs:
https://www.garlic.com/~lynn/2008.html#90 Computer Science Education: Where Are the Software Engineers of Tomorrow?
https://www.garlic.com/~lynn/2008f.html#15 independent appraisers
https://www.garlic.com/~lynn/2008f.html#32 independent appraisers
https://www.garlic.com/~lynn/2008g.html#12 independent appraisers
https://www.garlic.com/~lynn/2008g.html#13 independent appraisers
https://www.garlic.com/~lynn/2008g.html#20 independent appraisers
https://www.garlic.com/~lynn/2008g.html#32 independent appraisers
https://www.garlic.com/~lynn/2008g.html#36 Lehman sees banks, others writing down $400 bln
https://www.garlic.com/~lynn/2008g.html#52 IBM CEO's remuneration last year ?
https://www.garlic.com/~lynn/2008g.html#57 Credit crisis could cost nearly $1 trillion, IMF predicts
https://www.garlic.com/~lynn/2008g.html#67 independent appraisers
https://www.garlic.com/~lynn/2008h.html#0 independent appraisers
https://www.garlic.com/~lynn/2008h.html#1 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008h.html#28 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008h.html#32 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008h.html#42 The Return of Ada

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

subprime write-down sweepstakes

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: subprime write-down sweepstakes
Newsgroups: alt.folklore.computers
Date: Sat, 03 May 2008 10:06:19
jmfbah <jmfbahciv@aol> writes:
I don't think this practice is stopping. Now there are lots of ads about people with low incomes being able to buy the houses other people left. So this crookedness is still going on. Our law system doesn't deal rapidly with this kind of advertising. White collar crime is dealt with sometimes a decade after it happens. When a latest flavor of the year crime occurs, the consequences will affect the economy no matter what the criminal and/or civil legal system does. This is that Boyd thing..I don't know if having a method of rapid response is healthy for an economic or banking system. My gut says no but having these ads go on the air just because someone pays for them doesn't sound right either.

re:
https://www.garlic.com/~lynn/2008h.html#48 subprime write-down sweepstakes

full Boyd OODA-loops are supposed to be very quick and directly related (and understood).

Toxic CDOs allowed actual value to be unhooked from the underlying value. Obfuscation is part of the "fog of war" ... aka it allows you to get away with things that aren't otherwise possible (used two decades ago in the S&L crisis).

The issue of "too quick" ... is typically when it becomes a knee-jerk and the "orientation" part of the OODA-loop is bypassed ... i.e. just observe, decide and act ... w/o orientation and/or understanding.

I've mentioned before one of the post-mortems of the S&L crisis was that in a highly regulated, static environment, it is possible to accumulate a lot of people in authority that don't understand what they are doing ... just going through learned motions by rote.

W/o understanding ... you are left with trial&error in attempting to adapt to changing conditions. When there is a lot at risk ... then it is likely that it will be a very conservative trial&error activity ... again because of the lack of understanding.

for other topic drift ... a large number of the "subprime" loans were actually subprime in another sense. they weren't subprime in the sense that the majority of the borrowers had bad credit ratings. nominally the justification for the extremely low teaser rate on adjustable rate mortgages ... was that the money would eventually be recovered over the life of the (adjustable) loan. In the case of the "non-subprime" speculators, they were getting a short term loan, way below prime rate ... which they anticipated unloading before any rate adjustment.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers
Date: Sat, 03 May 2008 12:30:09
greymaus <greymausg@mail.com> writes:
That would be the old mainframe [forgot the word] idea, that a lot of non-time-critical stuff was run overnight?. Is that possible with modern (?, yes, I know) systems?.

there were a lot of online transactions introduced starting at least in the late 70s ... especially in the financial industry. however, in a large number of cases these were just front-end operations. The actual processing still continued to be run in the overnight batch window.

I've pontificated before about how the overnight batch window started to become a real bottleneck for a lot of operations in the 90s. There were billions spent on (frequently failed) re-engineering projects in that period to leverage killer micros and object-oriented, distributed programming ... attempting to implement straight-through processing, eliminating the overnight batch window.

A large part of the re-engineering project failures came when they found (frequently very late in the project) that the distributed object technology resulted in a two orders of magnitude overhead increase (compared to batch cobol) ... totally obliterating any hopes of throughput and performance improvements.

misc. past posts mentioning overnight batch window
https://www.garlic.com/~lynn/2004.html#51 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2006s.html#40 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2007e.html#31 Quote from comp.object
https://www.garlic.com/~lynn/2007l.html#15 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007m.html#36 Future of System/360 architecture?
https://www.garlic.com/~lynn/2007u.html#19 Distributed Computing
https://www.garlic.com/~lynn/2007u.html#21 Distributed Computing
https://www.garlic.com/~lynn/2007u.html#37 folklore indeed
https://www.garlic.com/~lynn/2007u.html#44 Distributed Computing
https://www.garlic.com/~lynn/2007u.html#61 folklore indeed
https://www.garlic.com/~lynn/2007v.html#19 Education ranking
https://www.garlic.com/~lynn/2007v.html#27 folklore indeed
https://www.garlic.com/~lynn/2007v.html#64 folklore indeed
https://www.garlic.com/~lynn/2007v.html#69 Controlling COBOL DDs named SYSOUT
https://www.garlic.com/~lynn/2007v.html#72 whats the world going to do when all the baby boomers retire
https://www.garlic.com/~lynn/2007v.html#81 Tap and faucet and spellcheckers
https://www.garlic.com/~lynn/2008b.html#74 Too much change opens up financial fault lines
https://www.garlic.com/~lynn/2008c.html#92 CPU time differences for the same job
https://www.garlic.com/~lynn/2008d.html#30 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008d.html#31 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008d.html#73 Price of CPU seconds
https://www.garlic.com/~lynn/2008d.html#87 Berkeley researcher describes parallel path
https://www.garlic.com/~lynn/2008d.html#89 Berkeley researcher describes parallel path
https://www.garlic.com/~lynn/2008g.html#55 performance of hardware dynamic scheduling

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

subprime write-down sweepstakes

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: subprime write-down sweepstakes
Newsgroups: alt.folklore.computers
Date: Sat, 03 May 2008 12:59:19
re:
https://www.garlic.com/~lynn/2008h.html#48 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008h.html#49 subprime write-down sweepstakes

a lot of the publicity regarding the whole package of subprime mortgages was supposedly to allow "first-time" buyers w/o any credit history to become home owners. The loans had a special introductory period, no down payment, very low (initial) interest rate, very low payments during the initial period (combination of interest only payments and very low interest rate), etc. This initial period would allow these first-time buyers with no credit history to establish "credit".

however, a lot of speculators swooped in to take advantage of the program (possibly the majority taking part were purely speculators rather than first-time, no-credit-history, owner-occupied mortgages, aka the 61 percent number).

as mentioned before, the mortgage originators no longer had to care about mortgage quality; being able to immediately unload the loans as CDOs meant the only thing they were going to be measured on was how many mortgages they could write (w/o regard to mortgage quality). One of the things that apparently happened was that the size of the speculator market (and therefore the number of mortgages they could write) was possibly twice as large as the originally intended no-credit-history, owner-occupied, first-time buyer market.

so drifting back over to the Boyd OODA-loop metaphor ... lots of past posts mentioning boyd and/or OODA-loop
https://www.garlic.com/~lynn/subboyd.html#boyd

there is a lot of flavor of many people not actually understanding what it was they were doing (missing the ORIENTATION and/or UNDERSTANDING).

so relating it to some of my past (computer) activity ... that contributed to my interest in Boyd when I first met him ... was the work i did as an undergraduate in the 60s on dynamic, adaptive resource management.

I've mentioned numerous times that it seems the state of the art for much of the 60s, 70s, and even well into the 80s was effectively "witch doctor" performance tuning knobs. There was very low correlation between changes in the performance tuning knobs and resulting performance. This could be attributed to a lack of understanding about performance in general ... and low connectivity between the control knobs and the things they were supposed to control.

I've claimed that in order to do dynamic, adaptive (automated) resource management required 1) understanding performance and 2) implementing control mechanisms directly connected to the things they were intended to control.

So the FED prime rate supposedly has some control over the economy by increasing or decreasing the borrowing rate (i.e. loans are either more attractive or less attractive based on the borrowing rate). I've claimed that the "subprime" mortgages were being offered way below and/or unrelated to the prime rate and speculators took advantage of these very attractive loans (unrelated to the fed prime rate) ... a large number of loans outside the influence of the FED's control.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Sat, 03 May 2008 14:09:25
glen herrmannsfeldt <gah@ugcs.caltech.edu> writes:

http://www.ridetheducksofseattle.com/


for completely other seattle drift ... the car ferry in the early '80s "war games" movie shows up on lake washington, converted for tourist operation out of kirkland ... the tour includes a very slow traverse past a lake washington compound (strongly associated with m'soft).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Why 'pop' and not 'pull' the complementary action to 'push' for a stack

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why 'pop' and not 'pull' the complementary action to 'push' for a  stack
Newsgroups: alt.folklore.computers
Date: Sat, 03 May 2008 19:08:00
Mensanator <mensanator@aol.com> writes:
Perhaps from usage of their mechanical equivalents. Cafeteria plates and rifle cartridges are often loaded in bulk (pushed) but are removed individually (popped). It would sound to me (if pull were used) that one is removing a bunch at once, counter to how they are actually used.

Of course, in real computer stacks, there isn't the same many in/single out situation, but metaphors aren't always exact.


"pop" also is somewhat more of a "released" analogy ... things are pushed into the stack ... somewhat the cartridge magazine metaphor ... and then released/popped one at a time (the compression spring in the magazine needs to be pushed ... but removal doesn't require a pull, cartridges just have to be released to be removed).

for some more computer folklore ... when charlie had invented compare&swap instruction
https://www.garlic.com/~lynn/subtopic.html#smp

as part of his work on cp67 multiprocessing fine-grain locking ... including the idea that atomic updates could be done w/o having to perform separate lock/unlock ... initially there was quite a bit of resistance to adding the instruction to 370.

the claim was that "test&set" was perfectly adequate for 370 multiprocessing operation. in order to get compare&swap justified for 370 ... it would be necessary to come up with uses other than explicit multiprocessor locking operations.

as a result, examples were developed of multi-threaded application code doing atomic updates (especially application code that would otherwise be enabled for general system interrupts ... creating possible asynchronous conflicts between the different threads in the same code). One of the example application uses was atomic update of a push/pop stack. The examples were included in the 370 principles of operation.

From recent principles of operation ... multiprogramming (aka mainframe for multi-thread) and multiprocessing examples
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/A.6?DT=20040504121320

The above includes various examples using the compare&swap instruction (including "free-pool" manipulation for storage allocation).

In the same time-frame that charlie was inventing the compare&swap instruction for cp67 multiprocessor operation, there was work on cp67 kernel storage allocation. The initial cp67 implementation kept a list of all free storage locations ordered by address ... and returning storage to the pool might include combining contiguous storage locations (simplified by keeping the list ordered by storage address).

Larger systems might have several hundred or even thousands of elements on the list ... and as other parts of the kernel had heavy pathlength optimization ... kernel storage management was becoming a significant percentage of total kernel pathlength. There was a special RPQ instruction for the 360/67 (from lincoln labs) called SLT ... or search list. This could reduce the processor cycles per element searched (compared to the traditional 360 multiple-instruction loop) ... but a search could still easily amount to several hundred/thousand processor cycles.

As part of investigating optimizing kernel storage management ... kernel storage "subpools" were invented. There were a dozen or so different, frequently used, "small" storage block sizes. Releasing/returning a storage block ... would first do a check if it was a subpool block size, and then index the header/anchor for that block size ... and "push" the block into that pool (eventually on 370 using compare&swap instruction). For a request for a new storage block, a check would be made if the size request was for a "subpool" size ... and if so, check for "popping" a block off the corresponding subpool header. If that particular subpool was empty ... then it would fall back to the original list searching implementation. A running system would start off with nothing in the subpools ... but as storage was obtained and then released ... the amount of subpool storage would increase.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Sun, 04 May 2008 13:36:38
Mark Crispin <mrc@Washington.EDU> writes:
Screaming outrage (exact quote: "Well, I hope that you are proud of yourself") when, after hitting a deer on the road with their fuel-injected Ultimate Driving Machine and calling the cops, the cop finds that the deer is still alive and dispatches it with their sidearm.

there used to be jokes in the early 60s when boeing (seattle) was hiring a lot of brand new engineers from the east coast ... they would buy a pickup with gun rack and go out deer hunting ... quite proud of themselves when they bagged some number of farmers' cows.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

independent appraisers

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: independent appraisers
Newsgroups: alt.folklore.computers
Date: Mon, 05 May 2008 09:28:39
re:
https://www.garlic.com/~lynn/2008g.html#13 independent appraisers

Dollar Reserve Status Is Tale of Fading Glory
http://www.bloomberg.com/apps/news?pid=20601039&sid=asyzv2fq7NnA&refer=home

from above:
Many countries -- including China, Russia, Kuwait, Singapore and Norway -- are transferring tens of billions of dollars to sovereign wealth funds. Long-term investors with mandates to maximize returns, these entities owe no allegiance to the U.S. currency and over time their investments will probably result in their governments' holding fewer dollars.

... and
It isn't ordained that the dollar surrender its position as the world's go-to currency. Yet if Americans insist on living beyond their means, eschew sound fiscal policies, ignore the greenback's weakness and remain tempted by protectionism, the dollar will in small bites begin to mimic the British pound -- the currency of a once proud but spent imperial power.

... snip ...

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Long running Batch programs keep IMS databases offline

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Long running Batch programs keep IMS databases offline
Newsgroups: bit.listserv.ibm-main
Date: Tue, 06 May 2008 00:29:05
"F" <f@hotmail.com> writes:
We have IMS 9 on z/OS and I am fairly new to the platform and have a vested interest in fixing it. Every night we have batch programs that run which in turn keep our databases offline for a long time and as a result our applications are not available for processing.

I want to know why batch programs and databases cannot both be online at the same time ? If the batch programs read the databases, then why are they offline ?

Anyways, what are some ways of ensuring that batch jobs and databases can both run and be online at the same time ?


some recent discussions about overnight batch window ... which requires exclusive access to all the information ... as opposed to "online".
https://www.garlic.com/~lynn/2008b.html#74 Too much change opens up financial fault lines
https://www.garlic.com/~lynn/2008c.html#92 CPU time differences for the same job
https://www.garlic.com/~lynn/2008d.html#30 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008d.html#31 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008d.html#73 Price of CPU seconds
https://www.garlic.com/~lynn/2008d.html#87 Berkeley researcher describes parallel path
https://www.garlic.com/~lynn/2008d.html#89 Berkeley researcher describes parallel path
https://www.garlic.com/~lynn/2008g.html#55 performance of hardware dynamic scheduling
https://www.garlic.com/~lynn/2008h.html#50 Microsoft versus Digital Equipment Corporation

there were some number of efforts in the 90s (billions of dollars) that looked at business process re-engineering to leverage killer micros and distributed object-oriented technology to implement straight-through processing (eliminating the overnight batch window). It turns out that many of these had grandiose failures when nobody bothered to do any speeds&feeds until very late in the effort ... frequently belatedly discovering that the distributed object-oriented technology had a factor of 100 times increase in overhead (compared to the typical Cobol batch implementation), totally obliterating any hopes of throughput improvements.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

our Barb: WWII

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: our Barb: WWII
Newsgroups: alt.folklore.computers
Date: Tue, 06 May 2008 22:20:44
krw <krw@att.bizzzzzzzzzz> writes:
That was my plan. I was 54 when I left Big Blue, after 32+ years. Not really any point in staying anymore (the fact that I also got a half-year's severance was a nice bonus). My pension was pretty well frozen and accessible once I passed the 30 year mark. It was pretty clear that their plan wasn't to keep people past 30 years.

one of the threads on "the greater ibm connection" social site

https://www.xing.com/net/greaterIBM

is how to leverage former/retired ibm'ers. one of the suggested programs has to do with the fed. gov. need for people/advisors because of the huge number of baby boomers retiring.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Wed, 07 May 2008 14:00:24
CBFalconer <cbfalconer@yahoo.com> writes:
I suspect Microsoft.

there was once an article that claimed microsoft was a real estate project ... that supposedly more money has been made selling homes to microsoft employees than has been paid to them in salaries ... aka the microsoft company was purely a fabrication to get a lot of people to move to the seattle area.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Wed, 07 May 2008 20:37:58
scott@slp53.sl.home (Scott Lurndal) writes:
In any case, his assertion is only true in the sense that the mainframe would use a single CCW(IBM) or I/O Descriptor (Burroughs) to issue a disk read, while the current modern FC and SPI drivers need to twiddle memory mapped I/O registers. However, the SCB that actually is transported to the target is very similar to a CCW/IO descriptor, and the DMA doesn't involve CPU accesses, and modern bandwidth to memory is huge (6.4Gbytes/sec for AMD Opterons, e.g.). The point-to-point interconnection schemes now used by AMD and soon used by Intel also ameliorate the impact on memory bandwidth by adding multiple memory controllers and dedicated links between system elements.

CKD (DASD) disks in the 60s were a trade-off between I/O resources and scarce, expensive electronic storage.

indexes and descriptors were on disk. "multi-track" search was used to scan index on disk looking for specific entry. since there was scarce real-storage ... the search argument was (re)fetched from memory for comparison for each index entry encountered. this created enormous load on controllers, channel, memory bus, etc ... in order to save a few bytes of electronic storage out in the i/o infrastructure.

once the specific index entry was found ... it would be read ... which would have the pointer to the desired information (file, member entry, etc).

then the retrieved information (from the index) would be used to position the arm to the physical disk position ... channel program on the order of:


Seek              arm position
  Search            match specific record
tic *-8           repeat the search if record doesn't match
  read/write        read/write if search matches

for actual read/write of a specific record, a search is also required to scan for the correct record ... but this time, the index has provided the specific surface location ... so at most it is a search/compare of records for a single revolution.

the load on memory bus, channel, and controller were so enormous ... that "set sector" channel command was introduced in 370s. 3330 had 20 disk surfaces ... but only 19 were addressable. the 20th surface contained rotational positioning information. for somewhat regularly organized file data ... it was sometimes possible to calculate the approximate rotational position of a record location. the 370 channel programs for read/write specific record became


seek
  set sector
search
tic   *-8
read/write

the "set sector" transferred a rotational position to the disk and allowed it to asynchronously rotate to that location ... and when it reached that position ... it would resume the channel program. done well, it would mean that the search operation would be successful on the next record the disk encountered.

However, for the indexes ... where it wasn't known the location of the desired index entry ... there would still be an enormous load on the controller, channel, memory bus, etc.


seek
  multi-track search
tic   *-8
read

in the past (late 70s & early 80s) i've been called into customer shops to diagnose enormous performance bottlenecks. turns out they had 3330s with "three cylinder" index directory. loading an application first required a search of the index directory (for every load; no information was cached). an avg. search of the three cylinder directory would take 1.5 cylinders. That is an initial multi-track search operation on the first cylinder of the index directory. At 19 tracks/cylinder ... and 3330s spinning at 3600rpm ... that is a single operation that takes 19/60 of a sec. ... which ties up the disk, controller, channel and places heavy load on the memory bus. On the avg. it reaches the end of the first cylinder and completes unsuccessfully. It is then restarted on the second cylinder of the index/directory. On the avg. it finds a match after reading half the 2nd cylinder ... or an operation that takes on the avg. 19/120 of a second.

For each program/application load ... there were first two disk i/os that, combined, took approx. 1/2 second elapsed time ... just to find the disk location of the program/application.
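a minimal sketch of the arithmetic above (3600rpm, 19 addressable tracks/cylinder; the function names are just for illustration):

```python
# 3330 timing sketch: 3600 rpm = 60 revolutions/sec, 19 addressable
# tracks per cylinder (the 20th surface held rotational position info).
RPM = 3600
REVS_PER_SEC = RPM / 60
TRACKS_PER_CYL = 19

def full_cylinder_search_secs():
    """multi-track search of an entire cylinder: one revolution per
    track, tying up disk, controller, channel and memory bus throughout."""
    return TRACKS_PER_CYL / REVS_PER_SEC          # 19/60 sec

def half_cylinder_search_secs():
    """on average, the match is found halfway through the 2nd cylinder."""
    return TRACKS_PER_CYL / (2 * REVS_PER_SEC)    # 19/120 sec

# avg directory lookup = whole 1st cylinder + half of the 2nd cylinder
avg_lookup = full_cylinder_search_secs() + half_cylinder_search_secs()
print(f"avg directory lookup: {avg_lookup:.3f} sec")  # ~0.475, approx 1/2 sec
```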

lots of past posts discussing ckd dasd ... and the 60s era i/o resource vis-a-vis real storage trade-off
https://www.garlic.com/~lynn/submain.html#dasd

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Two views of Microkernels (Re: Kernels

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Two views of Microkernels (Re: Kernels
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Fri, 09 May 2008 09:09:31
Quadibloc <jsavard@ecn.ab.ca> writes:
It's true there was a lot of rediscovery and reimplementation with the IBM PC.

The reason for this, though, is clear enough.

We didn't have a continuous line of development from systems of comparable size and complexity to microcomputer systems because...


there is a story that boca had agreed to let an (internal) group on the west coast do software for the machine. the group was staffing up and also, every month, double checking with boca that boca still didn't want to do software and that the west coast group would have the responsibility. after nearly a year of this ... boca changed its mind ... it wanted to have "responsibility" for software ... even if that meant subcontracting software to outside companies (where there wouldn't be some other internal group appearing to be competitive).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Up, Up, ... and Gone?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Up, Up, ... and Gone?
Date: May 8th, 2008 at 3:03 am
Blog: Boyd
misc. past posts mentioning boyd
https://www.garlic.com/~lynn/subboyd.html#boyd

about 15 yrs ago i was on a flight first thing in the morning from san jose to chicago getting a connection to boston. the flt was delayed on the ground in san jose for about an hr, because of a storm blowing past ohare. the storm took an hr to transit ohare, during which time the traffic rate into/out of ohare was cut in half (take offs/landings cut in half during the peak morning period; 6am in san jose, 8am in chicago).

should have gotten into chicago about 11am local time with a connecting flt to boston about noon. actually got into chicago about noon, and the connecting flt had been delayed to almost five. the problem was that the infrastructure was so highly tuned and static/rigid ... that there was no adaptability to even small glitches. what had started out as an hr of half peak capacity first thing in the morning should have dissipated later in the morning, when they would have been able to make up the additional/delayed flts (during nominally much lighter traffic periods). at worst, there would have been a time-shift of possibly 30mins as the delays continued to roll forward during the rest of the day (i.e. half of an hr's worth of delayed take offs/landings not being able to be absorbed).

however, because of the highly static, rigid optimization ... instead of the glitch dissipating by noon, it had actually amplified; from about 30 minutes worth of affected traffic to 4hrs ... and getting worse.

my comment at the time was that the only way the system appears to recover from even minor glitches is the overnight quiet period when all activity drops to zero and can reset.

past posts mentioning the flt:
https://www.garlic.com/~lynn/2000b.html#73 Scheduling aircraft landings at London Heathrow
https://www.garlic.com/~lynn/2001n.html#54 The demise of compaq
https://www.garlic.com/~lynn/2003o.html#27 When nerds were nerds
https://www.garlic.com/~lynn/2005o.html#24 is a computer like an airport?

federal air traffic control had actually instituted an improvement that planes couldn't take off w/o a guaranteed landing slot at the destination (previously planes would take-off and then circle at the destination because of the saturation of landing slots). the problem is that there is little provision by the carriers to be able to adapt schedules to the "fog of war" ... aka unplanned for contingencies.

about a decade ago we were called into one of the large airline reservation systems. they listed the ten major things that were impossible for them to do in routes and wanted us to study them (routes is the part where the agent/system looks at all possible ways of getting a passenger from the origin to the destination and represented about 25 percent of total computer system activity). We also talked about *fares* (pricing of different ways of getting from origin to destination) and the actual reservations of a seat. i was given a complete copy of the OAG schedule (take-offs and landings of all scheduled flts in the world).

Two months later I came back and demo'ed an implementation of routes that handled all ten *impossible* things that they wanted to do (but couldn't). Then the hand-wringing started. Eventually after almost a year ... one of the execs said that they hadn't actually planned that we fix all ten *impossible* things ... they just wanted to tell the board that we would be studying them for the next five yrs (somewhere along the way, I started commenting, be careful what you ask for).

A major issue was that in their existing implementation paradigm ... there was almost 1000 people handling manual tasks (and extremely highly paid people). The ten *impossible* things were somewhat a side-effect of having so many things actually being handled manually. I had changed the paradigm (in a completely different way of doing things), completely eliminating all those manual activities ... and then it became straight-forward to implement the ten *impossible* things.

this is me pontificating ...

this is somewhat related to Boyd's comments in briefings about Guderian's verbal orders only ... designed to encourage local, independent action during the Blitzkrieg

i've commented for yrs that at moderate highway traffic loading, certain *impolite* activity by less than 1 percent of the drivers can precipitate a rapid change from free flowing traffic to stop-and-go.

there are hundreds of millions of things involved in the national air traffic system that at very light loading levels (say under 10-15 percent) are only moderately coupled, and slight glitches can be absorbed relatively easily. as the loading on the infrastructure increases (planes, landing/take-off slots, gates, ground crew, gate crew, pilots, cabin crew, fueling, equipment; hundreds of millions of *things*), the coupling starts to stiffen. for purely local operation, having humans in the adaptability loop can help absorb/adapt to changes/glitches.

the problem is that as the loading further increases, the coupling between the different parts further stiffens across the whole infrastructure. humans, in the adaptability loop, can no longer make real-time adjustments of the millions of interconnecting things. at this point, the effect of glitches in one part of the system ... rather than dissipating over a short period ... begins to be amplified, rippling out through the whole infrastructure. there is no longer anything (say, equivalent to shock absorbers) that can isolate what happens in one part of the infrastructure from other parts of the infrastructure (and humans in the loop aren't able to perform real-time adaptation of the hundreds of millions of factors involved). imagine that instead of weak springs coupling the hundreds of millions of pieces, the springs become stiff rods and any glitch anywhere in the system is amplified through the whole system.
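the stiffening effect can be sketched with a toy deterministic backlog model (my own simplification, not from the original post): a one-hour glitch at half capacity drains quickly at light loading, but the recovery time blows up as loading approaches capacity:

```python
def recovery_hours(load, glitch_hours=1.0, glitch_capacity=0.5):
    """hours to drain the backlog left by a glitch that cuts capacity to
    `glitch_capacity` (fraction of normal) for `glitch_hours`.
    `load` is the steady arrival rate as a fraction of normal capacity."""
    # backlog accumulated during the glitch: arrivals minus reduced service
    backlog = max(0.0, (load - glitch_capacity) * glitch_hours)
    # spare capacity available to absorb the backlog after the glitch
    drain_rate = 1.0 - load
    if backlog == 0.0:
        return 0.0
    if drain_rate <= 0.0:
        return float("inf")   # only recovers at the overnight quiet period
    return backlog / drain_rate

for load in (0.55, 0.75, 0.95):
    print(f"load {load:.0%}: recovery {recovery_hours(load):.1f} hrs")
```

at 55 percent loading the glitch is absorbed in minutes; at 95 percent the same one-hour glitch takes roughly nine hours to work off, i.e. it rolls forward for the rest of the day.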

past posts mentioning routes rewrite:
https://www.garlic.com/~lynn/96.html#29 Mainframes & Unix
https://www.garlic.com/~lynn/2000f.html#20 Competitors to SABRE?
https://www.garlic.com/~lynn/2001d.html#69 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001d.html#74 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2002g.html#2 Computers in Science Fiction
https://www.garlic.com/~lynn/2002j.html#83 Summary: Robots of Doom
https://www.garlic.com/~lynn/2003o.html#17 Rationale for Supercomputers
https://www.garlic.com/~lynn/2004q.html#85 The TransRelational Model: Performance Concerns
https://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
https://www.garlic.com/~lynn/2006o.html#18 RAMAC 305(?)
https://www.garlic.com/~lynn/2006q.html#22 3 value logic. Why is SQL so special?
https://www.garlic.com/~lynn/2007g.html#22 Bidirectional Binary Self-Joins
https://www.garlic.com/~lynn/2007j.html#28 Even worse than UNIX

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Sat, 10 May 2008 02:56:54
krw <krw@att.bizzzzzzzzzz> writes:
Actually, serial is faster in most cases. Parallel suffers from signal to signal skew. Even disk drives are going serial.

9333s (done in hursley) ... we used them in ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

configurations almost 20yrs ago; serial copper, 80mbits/sec, encapsulated scsi commands ... it then morphed into SSA ... old post
https://www.garlic.com/~lynn/95.html#13
https://www.garlic.com/~lynn/96.html#15

later (decade old) article somewhat along the lines of above (something we had wanted to work on)
http://www.eetimes.com/news/97/936news/merge.html

along with other cluster scale-up issues (before we left)
https://www.garlic.com/~lynn/lhwemail.html#medusa

quicky search engine turns up this standards reference:
http://www.t10.org/scsi-3.htm

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

how can a hierarchical mindset really facilitate inclusive and empowered organization

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: how can a hierarchical mindset really facilitate inclusive and empowered organization
Date: May 10, 2008
Blog: Organizational Development
As Boyd developed his briefing on Organic Design for Command and Control ... one of the examples he would use was the training army officers received in WW2. The problem was that the country had to mobilize and deploy huge numbers of people with little or no training. A rigid, top-down, command & control system was created to leverage the limited experienced and skilled resources available. The problem came much later, as these young men started to come of age as executives ... they were falling back on their early command&control training that used a rigid, top-down, command and control system to deal with (assumed) large numbers of people with no skill and no training.

Boyd would contrast this with Guderian's verbal orders only before the Blitzkrieg ... with the objective of encouraging local, independent action. This was more aligned with high level broad strategic direction, allowing local, skilled resources as much tactical freedom as possible.

The issue wasn't so much a generational thing ... but the overall skill/experience level of the organization and the amount of direction required.

Lots of past posts mentioning boyd
https://www.garlic.com/~lynn/subboyd.html#boyd
and various URLs mentioning boyd and/or OODA-loops
https://www.garlic.com/~lynn/subboyd.html#boyd2

another kind of analogy is the current tribulations with chips. for a long time processor chips have had a single (rigid) synchronized clock, serializing everything going on in the chip. as chips got larger and more complex, the synchronized, serialized, global clock was starting to become a throughput and efficiency bottleneck. The transition to multiple cores for independent operations represents a significant challenge to many applications ... how to have non-serialized, multiple independent operations working in a coordinated manner on a common task.

and to give away the bottom line in Boyd's briefing Organic Design for Command and Control ... really refers to appreciation and leadership.
http://www.d-n-i.net/boyd/organic_design.ppt

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

New test attempt

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New test attempt
Newsgroups: alt.folklore.computers
Date: Sat, 10 May 2008 10:34:41
Charlton Wilbur <cwilbur@chromatico.net> writes:
Cryptography? DES was invented in 1975. Public-key cryptography was first described in 1976. RSA was invented in 1977.

Relational databases were first written about in 1970, and didn't really become popular for another decade because of politics inside IBM.

Ubiquitous networking (look up "ubiquitous" in a dictionary, please) didn't show up until the mid-1990s.

Graphical user interfaces were first conceived of in the 1960s, but I'm not aware of a mouse, trackball, or light pen in widespread use before 1980.

And in my world, 1970, 1976, 1975, 1977, 1980, and 1990 are all *after* 1968.


some old public key email ref ... near 70s:
https://www.garlic.com/~lynn/lhwemail.html#crypto

relational was defined around 70. lots of past posts about relational and original relational/sql implementation
https://www.garlic.com/~lynn/submain.html#systemr

however, first to ship was multics MRDS ... old discussion with some MRDS (& relational) refs:
https://www.garlic.com/~lynn/2005.html#30 Network databases

multics was done on 5th flr, 545 tech sq.

virtual machines, internal network, markup languages, and bunch of other stuff was done 4th flr, 545 tech sq
https://www.garlic.com/~lynn/subtopic.html#545tech

there was a little bit of ubiquity with the big explosion of 43xx machines all over the world on the internal network ... which was larger than the arpanet/internet from just about the beginning until sometime mid-85
https://www.garlic.com/~lynn/subnetwork.html#internalnet

A big difference was a shift in the kinds of nodes on the arpanet/internet ... including workstations and PCs. Because of various internal political forces, workstations & pcs were forced to be treated as terminal emulation on the internal network.

while i've had online access at home (terminal) since mar, 1970 ... I admit to having difficulty on trips to Paris in the 70s ... reading my email back in the states.

system/r was done at sjr, bldg 28 on (virtual machine) vm370 in the 70s.

i've mentioned before an interaction that went on between the "60s" database technology in stl (bldg. 90) and the system/r group (bldg. 28) in late 70s. The stl people claimed that relational doubled the disk space (for the key indexes built "under the covers" by relational) and significantly increased the disk I/Os (to step thru the different levels of index to get to the pointer to the actual desired record). The "60s" databases avoided this overhead by exposing explicit record pointers as part of the database metaphor.

The relational people countered that exposing record pointers as part of the database metaphor ... significantly increased the human effort/skill to deal with, administer, and manage "60s" databases.

In the 80s, the rapidly declining cost of bytes ($$/byte) mitigated the disk space issue for relational. Also, the increasing amounts of system real storage was allowing much of the relational index to be cached in real memory (mitigating the number of disk reads to traverse the relational index). Overall system costs were coming down ... allowing computing to be applied to a lot of new, lower-cost-justified applications. At the same time people skills were becoming scarce and more expensive. All the stuff associated with 60s databases was becoming more expensive and scarce. All the stuff associated with relational was becoming less expensive and more plentiful.

Early 80s, I also helped with the system/r technology transfer from bldg. 28 to endicott for sql/ds product. One of the people at meeting referenced in this old post
https://www.garlic.com/~lynn/95.html#13

claims to have handled much of the technology transfer from endicott back to stl (bldg. 90) for DB2 product.

Other old email ... when Jim left for Tandem, he foisted off on to me some number of database related activity ... including non-corporate relational interfaces ... like specific one with BofA on system/r
https://www.garlic.com/~lynn/2007.html#email801006
https://www.garlic.com/~lynn/2007.html#email801016

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Is a military model of leadership adequate to any company, as far as it based most on authority and discipline?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Is a military model of leadership adequate to any company, as far as it based most on authority and discipline?
Date: 5/10/08 8:20 AM
Blog: Management
There has been a lot written about the Marine Corps having adopted John Boyd for basic principles for running organizations ... both the Organic Design For Command and Control and his OODA-loop operations

A lot of Boyd's briefings to corporations were about how to effectively operate in any kind of competitive situation.

lots of past posts mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html#boyd
misc. URLs from around the web mentioning Boyd and/or OODA-loops
https://www.garlic.com/~lynn/subboyd.html#boyd2

OODA-loops embodies both operating smarter and faster.

I've used an example of the US and foreign auto industry regarding "tempo". Circa 1990, one of the large US auto makers had a C4 effort to completely redo the auto development process. The example was that after import quotas, the affected foreign auto makers determined that they could sell as many expensive cars as inexpensive cars. To do this they completely remade how they did car development ... including cutting by better than half the traditional industry 7-8yrs elapsed time to develop a new car. This allowed them to quickly deploy a completely different product mix (which had significantly higher profit & profit margin). Going forward, the approach allowed them to operate faster and smarter ... adapting products to changing market conditions and customer preferences.

C4 was an attempt to do something similar in the US auto industry ... including heavily leveraging technology in an attempt to reduce product development elapsed time from traditional 7-8yrs.

Boyd was possibly one of the Air Force's best pilots. at fighter pilot school he had an open invitation to all comers ... that he would give them the advantage position and within 40 seconds (i.e. "40 second Boyd"), he would (always) reverse the situation. Later as head of lightweight fighter design, he significantly revamped the F15 and F18 and was a major force behind the F16. However, it is still considered that the Air Force disowned him. At his Arlington burial ceremony, it was the Marines that showed up in force, not the Air Force.

There was speculation that recent SECDEF honoring of Boyd resulted in lots of heartburn in the Air Force. His works all went to the Marine university (not the Air Force)
https://www.garlic.com/~lynn/2008h.html#21 To the horror of some in the Air Force

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Sat, 10 May 2008 16:24:56
Eric Smith <eric@brouhaha.com> writes:
That's no longer how it's done. Modern CPUs implement cache coherency protocols such as MESI or MOESI, and use write-back caches. When one CPU modifies cached data, it doesn't get copied to system memory immediately. Later if some other CPU needs that word of memory, it either gets it from the cache of the first CPU, or causes the first CPU to write it back to memory, then gets it from memory.

The CPUs all have the same "view" of physical memory, even though some changed data hasn't actually been written back to the memory.

This is a win because most memory locations that get written do not get accessed by another CPU, and it is more efficient to do a probe of another CPU's cache than a main memory access.


this was a problem even with single cpu operation with split & non-coherent I(instruction) & D(data) (store into) caches. System functions that provided the "loader" function periodically would have to operate on/change instructions brought in for execution. These alterations would appear in the data cache ... but would not necessarily be visible (in real storage) to the instruction-cache. The loader application required an operation that would flush data cache lines back to real storage ... in order for them to be available to the instruction-cache ... for execution.

misc. past posts about 801/risc (dating back to mid-70s) where this was part of design
https://www.garlic.com/~lynn/subtopic.html#801

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

New test attempt

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New test attempt
Newsgroups: alt.folklore.computers
Date: Sun, 11 May 2008 10:54:38
Morten Reistad <first@last.name> writes:
Ethernet was a lot more important than history articles admit. Between 1983 and 1996 it spread everywhere, and when the Internet appeared in large, commercial setting in 1993-96 the local vehicle for transport was already installed.

one might claim part of the reason was that the OSI model didn't include any provision for LANs and ISO had an edict out that there wouldn't be standardization work on anything that didn't conform to the OSI model. misc. past posts on subject
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

the new "Almaden" research facility in the mid-80s ... up the hill from the plant site (and the old sjr/bldg. 28 ... now torn down) was heavily wired with CAT4 and other technologies for token-ring, etc. However, they found that ethernet over CAT4 wiring had both lower latency and higher thruput than 16mbit T/R.

about the same time frame, we had come up with 3-tier networking model and were out pitching it to customer execs ... which included ethernet as integral part of the implementation (with comparisons with 16mbit T/R). we were taking lots of hits from both the SAA group and the T/R forces
https://www.garlic.com/~lynn/subnetwork.html#3tier

of course, internetworking protocol had no problems with quickly adopting ethernet.

part of the issue during the period was lots of govs & other institutions were mandating elimination of internet and replacement with OSI ... including fed. gov. GOSIP stuff.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

New test attempt

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New test attempt
Newsgroups: alt.folklore.computers
Date: Sun, 11 May 2008 14:57:48
there were a number of projects going on in the 70s ... not just system/r being implemented on vm370 platform in sjr/bld28 (where codd worked, as an aside ... also backus)
https://www.garlic.com/~lynn/submain.html#systemr

i've commented recently about the "difference" between the 60s database activities going on in stl (just 10miles south of bldg 28) and relational
https://www.garlic.com/~lynn/2008h.html#64 New test attempt

one of the "big" things that system/r & sql did was abstract away the "exposed" record pointer metaphor that was part of the infrastructure that database programmers, users, and administrators had to deal with. however, rdbms made some structural trade-offs with homogeneous rows & columns and primary index ... which matched up with high-value early adopter ... financial bank accounts.

Besides being involved in system/r implementation ... I also got involved in another kind of dbms implementation ... it had similar objective of system/r of abstracting/eliminating the explicit record pointer metaphor ... but w/o making the simplification trade-off of requiring homogeneous infrastructure (any item can potentially have arbitrary relations to any other item).

The comparison at the time was that the "under the covers" infrastructure indexes that eliminated the exposed record pointer metaphor doubled the physical disk space (compared to the 60s genre that relied on explicit, exposed record pointers). This more generalized implementation that allowed for arbitrary relations might increase disk space by factor of 5-10 times (for the under-the-covers implementation).

Going into the 80s, a big part of the relational uptake was the trade-off shift ... hardware cost reductions made a lot of applications economically feasible ... but the people overhead/skill requirements for early generations of databases weren't available and/or weren't cost justified. relational reduced the cost/amount of increasingly scarce/expensive human resources for dbms ... and the (relational-relative) increase in hardware resources was more than offset by the decrease in hardware costs.

The more generalized implementation hadn't reached that trade-off threshold. In more recent years, I've redone the implementation from scratch and have used it for real world information that has much more arbitrary structure. I used it for maintaining the information for the rfc indexes
https://www.garlic.com/~lynn/rfcietff.htm

and merged taxonomy and glossaries
https://www.garlic.com/~lynn/index.html#glosnote

I have various kinds of applications ... including the ones that generate the referenced HTML files. One of the things I've attempted to do is use HREFs to approximate the complexity of the more general, arbitrary relations. As a result, the files have an extremely high ratio of HREFs to file size. Also, I've conjectured that the major search engine webcrawlers may be using it as a regression test ... since I see approx. the same 1000 hits a day, everyday, from the same web crawlers.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

New test attempt

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New test attempt
Newsgroups: alt.folklore.computers
Date: Sun, 11 May 2008 19:32:02
krw <krw@att.bizzzzzzzzzz> writes:
Light pens and joy sticks were "standard equipment" on the 3277GAs and certainly in "widespread use" within IBM. We used for logic entry and simulation in the mid-late '70s through perhaps '90 (often on PCs under emulation).

as undergraduate ... i had lots of free/open access to cp67 and 360/67 at the univ (no vendor help or training, i had to figure everything out on my own). one of the things i had to deal with was handling "real-time" at something like 250 I/O operations per MIP ... while getting productive work done.

roll-forward to modern processors at say 50,000mips ... that would translate to something like 12.5 million "real-time" i/o operations per second. with additional processing power head-room of modern processors ... things that were extremely difficult in terms of instructions per operation have gotten quite a bit easier.
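the scaling arithmetic in the preceding paragraph, as a sketch (the 50,000 MIPS figure is the nominal one from the post):

```python
# cp67-era workload: ~250 "real-time" I/O operations/sec per MIP of
# processing; scale that ratio to a nominal modern 50,000 MIPS processor.
IO_PER_SEC_PER_MIP = 250
MODERN_MIPS = 50_000

ios_per_sec = IO_PER_SEC_PER_MIP * MODERN_MIPS
print(f"{ios_per_sec:,} I/O operations/sec")  # 12,500,000 I/O operations/sec
```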

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

New test attempt

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New test attempt
Newsgroups: alt.folklore.computers
Date: Mon, 12 May 2008 04:23:15
Anne & Lynn Wheeler <lynn@garlic.com> writes:
as undergraduate ... i had lots of free/open access to cp67 and 360/67 at the univ (no vendor help or training, i had to figure everything out on my own). one of the things i had to deal with was handling "real-time" at something like 250 I/O operations per MIP ... while getting productive work done.

roll-forward to modern processors at say 50,000mips ... that would translate to something like 12.5 million "real-time" i/o operations per second. with additional processing power head-room of modern processors ... things that were extremely difficult in terms of instructions per operation have gotten quite a bit easier.


re:
https://www.garlic.com/~lynn/2008h.html#69 New test attempt

this is an old posting with a piece of the presentation that I made at the SHARE fall68 meeting in Atlantic City. It includes some of the pathlength performance optimization that I had done on cp67 as an undergraduate between the time cp67 was installed at the univ. (last week jan68) and that summer.
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

some amount of the presentation focused on improvement in running the OS MFT14 operating system as a virtual guest under cp67. In this particular period, it represents a reduction in cp67 processor time from 534 cpu seconds to 113 cpu seconds (for the particular workload) ... not quite a five times improvement. Some pathlengths were, in fact, improved by nearly a factor of 100.

work continued on various portions of cp67 pathlength for the next 5-6 yrs, including the work mentioned in this recent post on how kernel storage allocation worked
https://www.garlic.com/~lynn/2008h.html#53 Why 'pop' and not 'pull' the complementary action to 'push' for a stack

possibly by the time Grenoble Science Center had done the cp67 work on "working set dispatcher" (for their paper in cacm) and the comparison between cp67 dynamic adaptive page thrashing controls running on the cambridge system ... mentioned in this old communication
https://www.garlic.com/~lynn/2006w.html#email821019
in this post on global LRU replacement
https://www.garlic.com/~lynn/2006w.html#46 The Future of CPUs: What's After Multi-Core?

... the pathlengths had possibly improved to the point of being able to handle nearly 500 I/O operations per MIP ... which might translate to 25 million "real-time" I/O operations per second on a current day processor. This is even before 370, vm370 and the introduction of virtual machine microcode "assists" for improving hypervisor thruput.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Mainframe programming vs the Web

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe programming vs the Web
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 12 May 2008 05:28:04
martin_packer@UK.IBM.COM (Martin Packer) writes:
As a big Firefox fan (writing extensions and living on the BLEEDING edge by running Nightlies) I wonder if IE even HAS a way to be selective about it. Someone who's an IE fan can perhaps enlighten us.

one of the things i've done is use wget to fetch a list of (news, article oriented) websites, figure out the difference (since last seen) ... select the individual article URLs that are new/different ... then check the firefox sqlite history repository ... and get firefox to fetch the new/unseen URLs in different tabs. A day's worth might be 300-600 tabs. the objective is to eliminate the latency associated with the traditional point&click on the web. once the URLs are fetched ... it is purely local response ... and in the past year or so, firefox overhead has gotten significantly better handling hundreds of open tabs.

recent posts mentioning some of the glue for doing this
https://www.garlic.com/~lynn/2008b.html#32 Tap and faucet and spellcheckers
https://www.garlic.com/~lynn/2008b.html#35 Tap and faucet and spellcheckers

there is a little heuristic ... some news/article oriented sites have countermeasures for multiple hits from the same client address in a short period of time ... i.e. the tab fetches have to be spaced out over time.

once the fetches start ... it is possible to be reading the tabs that are already local ... while remaining tabs are being fetched in the background.
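a rough sketch of that glue in python (all the names, the regex, and the firefox invocation here are my illustration, not the actual scripts ... `moz_places` is the history table in firefox's places.sqlite):

```python
import re
import sqlite3
import subprocess
import time

def extract_links(html):
    # crude, regex-based pull of absolute hrefs out of fetched page text
    return set(re.findall(r'href="(https?://[^"]+)"', html))

def new_links(last_seen_html, current_html):
    # article URLs that are new/different since the site was last seen
    return extract_links(current_html) - extract_links(last_seen_html)

def unseen_in_history(urls, places_db):
    # drop URLs already in the firefox sqlite history repository
    conn = sqlite3.connect(places_db)
    seen = {row[0] for row in conn.execute("SELECT url FROM moz_places")}
    conn.close()
    return [u for u in urls if u not in seen]

def open_in_tabs(urls, delay=2.0):
    # space the fetches out over time to avoid per-client-address
    # countermeasures on news/article sites
    for u in urls:
        subprocess.run(["firefox", "--new-tab", u])
        time.sleep(delay)
```

once the unseen URLs are opened in background tabs, reading is purely local response while the remaining fetches proceed.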

for some mainframe related ... old post discussing application programming of emulated 3270 interface
https://www.garlic.com/~lynn/2001k.html#35 Newbie TOPS-10 7.03 question

on the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

circa 1981 (pre-PC, pre pc3270 terminal emulation, pre hllapi, etc).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

SSL certificates - from a customer's point of view (trust)

From: Lynn Wheeler <lynn@xxxxxxxx>
Date: May 12, 2008
Subject: SSL certificates - from a customer's point of view (trust)
Blog: E-Commerce
we had been asked to come in and consult with a small client/server startup that wanted to do payments on their server ... and they had this technology they had invented, called SSL, that they wanted to use. the work is now frequently referred to as electronic commerce. there is also this related thing that was done called the payment gateway ... some number of past posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

that we periodically observe was the original service oriented architecture (SOA) implementation.

As part of the effort of mapping SSL technology to trusted business processes ... we also did some detailed walk-thrus of some number of these new things calling themselves certification authorities. part of the effort was observing that, for the trusted business process, the user has to understand the relation between the website they think they are talking to and that website's URL. SSL then provides the verification between the URL and the website actually being talked to. This trust mechanism was dependent on the user providing/understanding the URL (as part of creating the trust chain between the website that the user thought they were talking to and the website they were actually talking to).

Almost immediately most electronic commerce sites found that use of SSL cut their thruput 5-20 times ... and so backed off SSL usage to just the checkout/pay portion. Now the user clicks on a button which in turn provides the URL. This effectively negates the original, fundamental trust assumption for the majority of SSL use in the world, i.e. the user isn't providing the URL; the unvalidated, potentially fraudulent website is providing the URL.

This is when we started referring to SSL as a comfort mechanism rather than a trust mechanism, some number of past posts
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

ssl digital certificate certification catch-22
https://www.garlic.com/~lynn/subpubkey.html#catch22

the biggest part of ssl certificates is the certification authorities verifying that the entity applying for the certificate is associated with the specific domain.

the major justification for ssl certificates has been various concerns and issues with regard to the domain name system (assurance that the webserver you are talking to is really related to the URL ... this is only half of the end-to-end trust issue of whether the webserver you think you are talking to is really the webserver you are talking to).

in order to do this, the certification authorities require the certificate applicant provide a bunch of information. the certification authority then goes thru the time-consuming, error-prone, and expensive identification process of matching the supplied information with the information on-file with the domain name system (for the specific domain, aka the very same domain name system that has the integrity issues and is the motivation for ssl certificates).

so, somewhat with the backing of the certification authority industry, there is a proposal that a public key be registered at the same time a domain is registered. then the certification authority industry can require that an ssl certificate application be digitally signed. They can then retrieve the onfile public key from the domain name system and verify the digital signature, replacing a time-consuming, error-prone and expensive identification process with a fast, efficient, and reliable authentication process.
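a toy model of that flow (the hash "signature" here is just a stand-in ... the actual proposal assumes real public key signatures, not shown):

```python
import hashlib

# on-file public keys, filed at domain registration time
dns_registry = {}

def register_domain(domain, key_token):
    # key registered at the same time the domain is registered
    dns_registry[domain] = key_token

def sign(application, key_token):
    # stand-in for a real digital signature over the application
    return hashlib.sha256((key_token + application).encode()).hexdigest()

def ca_verify(domain, application, signature):
    # fast authentication: retrieve the on-file key and check the
    # signature, instead of the identification process
    key = dns_registry.get(domain)
    return key is not None and sign(application, key) == signature
```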

an issue is that if the certification authority industry can start doing realtime retrievals of onfile public keys for trusted authentication purposes ... then possibly the rest of the world might also ... eliminating the need for the ssl digital certificates.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Mon, 12 May 2008 08:32:31
peter@taronga.com (Peter da Silva) writes:
The thing that really distinguished Multics from other operating systems wasn't any kind of security model, I don't think, but rather the generalization of memory mapping as the main (if not quite sole) data access method.

so with multics on the 5th flr and the science center on the 4th flr
https://www.garlic.com/~lynn/subtopic.html#545tech

... at some point, I figured that anything that multics could do, i could do. i implemented a memory mapped implementation of the cms filesystem. at api level, it emulated standard cms filesystem semantics ... but underneath it used memory mapping implementation.

besides feature/function, some amount of this was a performance issue. the standard I/O paradigm used real addresses ... in the transition to virtual memory ... there was significant overhead simulating the real-storage I/O paradigm in a virtual memory environment. moving to a memory-mapped implementation eliminated all that simulation overhead. however, except for limited use in xt/370, it never saw customer release ... but I deployed it at internal sites ... misc. past posts
https://www.garlic.com/~lynn/submain.html#mmap
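the general shape of such a compatibility layer can be sketched with unix mmap (an illustration of the idea, not the cms code): keep the familiar sequential read() semantics at the api level while the bytes underneath come from a memory mapping.

```python
import mmap
import os

class MappedFile:
    """read()-compatible interface backed by a memory mapping."""
    def __init__(self, path):
        self.fd = os.open(path, os.O_RDONLY)
        self.map = mmap.mmap(self.fd, 0, access=mmap.ACCESS_READ)
        self.pos = 0

    def read(self, n=-1):
        # emulate standard sequential-read semantics over the mapping
        end = len(self.map) if n < 0 else self.pos + n
        data = self.map[self.pos:end]
        self.pos += len(data)
        return data

    def close(self):
        self.map.close()
        os.close(self.fd)
```

callers written against the traditional read-style API keep working, while the copying/simulation that a real-address I/O paradigm would require simply disappears.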

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Mon, 12 May 2008 10:33:16
jmfbah <jmfbahciv@aol> writes:
I think what distinguishes Multics from other OSes was how those guys got things approved, coded, tested, and shipped. Those rules were based on secure code as the number one priority and all trade offs were to favor that.

What nobody has been able to study and analyze is how these inhouse procedures helped form the OS and how it provided computing services.


re:
https://www.garlic.com/~lynn/2008h.html#73 Microsoft versus Digital Equipment Corporation

the science center was under significant pressure to provide high integrity ... not only in the cp67 implementation but also in the cp67 time-sharing service that was deployed.

this is past reference to apparently significant cp67 deployment across the gov.
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

this is indirect reference to gov. using cp67 successor, vm370 for email at a number of locations ... not just the one mentioned
https://www.garlic.com/~lynn/2008h.html#46 Whitehouse Eamils Were Lost Due to "Upgrade"

also, as i've mentioned numerous times, the science center had converted apl\360 to cms\apl (including a lot of work adapting it to operate in a "large" virtual memory environment and also implementing access to system facilities). in this period, apl was used for a lot of things that are implemented in spreadsheets today ... as well as very sophisticated business/analytical models. one of the "customers" that used this on the cambridge system was the business planners from hdqtrs (armonk) ... who loaded the most sensitive/valuable of all corporate data on the cambridge system to run business models.

part of the issue was that there were a significant number of non-employees from educational institutions in the cambridge area that also had access to the system (certain kinds of issues couldn't be addressed with simple airgapping).

various past posts mentioning apl-use (&/or world-wide cp67/vm370 based HONE sales&marketing support system ... which had nearly all of its applications implemented initially with cms\apl and then later apl\cms):
https://www.garlic.com/~lynn/subtopic.html#hone

various other recent posts mentioning cited gov. reference:
https://www.garlic.com/~lynn/2008b.html#4 folklore indeed
https://www.garlic.com/~lynn/2008c.html#60 Job ad for z/OS systems programmer trainee
https://www.garlic.com/~lynn/2008d.html#32 Interesting Mainframe Article: 5 Myths Exposed
https://www.garlic.com/~lynn/2008f.html#67 Virtualization's security threats
https://www.garlic.com/~lynn/2008f.html#68 Virtualization's security threats
https://www.garlic.com/~lynn/2008g.html#26 CA ESD files Options
https://www.garlic.com/~lynn/2008g.html#58 Virtualization: History repeats itself with a search for security

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Mon, 12 May 2008 10:42:55
jmfbah <jmfbahciv@aol> writes:
I think what distinguishes Multics from other OSes was how those guys got things approved, coded, tested, and shipped. Those rules were based on secure code as the number one priority and all trade offs were to favor that.

What nobody has been able to study and analyze is how these inhouse procedures helped form the OS and how it provided computing services.


re:
https://www.garlic.com/~lynn/2008h.html#73 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008h.html#74 Microsoft versus Digital Equipment Corporation

now this isn't exactly an apples&apples comparison. this was a large gov. multics processor datacenter. they started looking at a newer generation of machines.

43xx machines (most often with vm) were selling into the same mid-range market as vax/vms. 43xx outsold vax in the marketplace ... possibly because some of the large commercial customers were making 43xx orders in multiples of hundreds at a time.

in any case, old email
https://www.garlic.com/~lynn/2001m.html#email790404
https://www.garlic.com/~lynn/2001m.html#email790404b
in these posts
https://www.garlic.com/~lynn/2001m.html#12 Multics Nostalgia
https://www.garlic.com/~lynn/2001m.html#15 departmental servers

references a multics customer initially looking at twenty 4341s but growing to 210

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Mon, 12 May 2008 11:23:39
peter@taronga.com (Peter da Silva) writes:
That's the opposite of what I'm talking about.

What you're describing has been done on UNIX too, I've seen at least two independent presentations at various Usenixes. For some applications you get better performance, for others you have to turn around and implement stream read-ahead and cache behaviour below the memory management level to get the performance back.

With a unified buffer cache the difference between read() and memory mapping gets even smaller.

This is pretty much an implementation detail, without the API that made it really interesting.


re:
https://www.garlic.com/~lynn/2008h.html#73 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/submain.html#mmap

because of the existing deployed infrastructure ... I had to do a compatibility layer ... even tho other parts I rewrote could do a lot more than was strictly available in the compatibility layer. old posts mentioning some of the difficulties with adapting the existing implementation to the memory mapped paradigm
https://www.garlic.com/~lynn/submain.html#adcon

for other folklore ... old email references regarding porting the implementation (and many other features) from cp67 to vm370
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

I then put together a package that I supported for internal distribution. I've mentioned in the past it wasn't fair to directly compare the stuff from the 4th flr with what went on the 5th flr in terms of things like number of customers.

An issue was that there were a very large number of customer installations ... larger than the number of internal installations ... and the total number of internal installations was much larger than the number of internal installations to which I directly shipped/supported a highly modified system. However, at one point the number of internal installations that I directly shipped & supported was approx. equivalent to the total number of MULTICS systems that ever existed.

It wasn't really fair to compare the whole virtual machine product effort with the work going on on the 5th flr ... it was much more fair to compare just what I was doing personally on the 4th flr with the whole MULTICS operation on the 5th flr.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Mon, 12 May 2008 14:22:59
Mark Crispin <MRC@Washington.EDU> writes:
But there was a worse horror lurking in the KL architecture. There was a hardware page table that cached the virtual/physical memory map. That way, most of the time it never had to go to the page tables. Like the cache, this had to swept at every context switch.

360/67 had an eight entry associative array for this function. there was a control register that specified the active virtual address space (or "STO" ... segment table origin address). every time the control register was loaded, the associative array had all its entries cleared (even if loading the same exact value).

370s had a variety of implementations. 165&168 had a TLB (translation lookaside buffer) ... 128 entries, 4-way associative (five bits from the virtual address were used to index one of 32 sets of 4 entries). The TLB had a 7-entry "STO stack" ... i.e. TLB entries could be associated with one of seven address spaces (or STO addresses). When the control register was reloaded with a new STO ... it would check to see if the value was already in the STO-stack. If it was, nothing more was done. If not, one of the STO-stack entries was "scavenged" and all TLB entries with the matching 3-bit value (corresponding to the STO entry) were cleared. As systems got larger and more complex, the STO-stack avoided having to wipe out all cached information on every context switch.
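a toy simulation of that STO-stack behavior (structure and names are mine ... the real TLB is hardware; this just models the bookkeeping described above):

```python
class ToyTLB:
    """Toy model of a 165/168-style TLB: 32 sets x 4 ways, entries
    tagged with one of up to seven cached address spaces (STO stack)."""
    SETS, WAYS, STO_SLOTS = 32, 4, 7

    def __init__(self):
        self.sto_stack = []                         # cached STO values
        self.sets = [[] for _ in range(self.SETS)]  # each: list of (sto, vpage)

    def load_control_reg(self, sto):
        # reloading the control register with an STO already in the
        # stack clears nothing; otherwise a slot is scavenged
        if sto in self.sto_stack:
            return "hit"
        if len(self.sto_stack) == self.STO_SLOTS:
            victim = self.sto_stack.pop(0)          # scavenge one slot
            for s in self.sets:                     # purge its TLB entries
                s[:] = [e for e in s if e[0] != victim]
        self.sto_stack.append(sto)
        return "miss"

    def insert(self, sto, vpage):
        s = self.sets[vpage % self.SETS]            # index bits -> one of 32 sets
        if len(s) == self.WAYS:
            s.pop(0)                                # replace within the set
        s.append((sto, vpage))

    def lookup(self, sto, vpage):
        return (sto, vpage) in self.sets[vpage % self.SETS]
```

the point of the stack shows up when more than seven address spaces cycle through: only the scavenged space's entries get purged, not the whole TLB.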

original 370 architecture included PTLB (purge all table look-aside entries), ISTO (purge all table look-aside entries for specific virtual address space), IPTO (purge all table look-aside entries for a specific segment within an address space), and IPTE (purge table look-aside entries for a specific page table entry).

because of schedule constraints with retrofitting virtual memory hardware to the 370/165, support was dropped for all but the PTLB instruction (other stuff in the original virtual memory architecture was also dropped).

3090 got somewhat more complex ... old long-winded email description
https://www.garlic.com/~lynn/2003j.html#email831118
in this post
https://www.garlic.com/~lynn/2003j.html#42 Flash 10208

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Mon, 12 May 2008 14:47:44
Mark Crispin <MRC@Washington.EDU> writes:
It was a tremendous kludge tower and it's a miracle it ran at all, but run it did. I remember a 360/67 that, in addition to HASP-based batch, ran CALL-OS, APL\360, ATS, and Coursewriter as timesharing systems. Most of the timesharing users were on CALL-OS.

lots of related detail on the history of 360/67, project mac, multics, science center, tss/360, cp67, etc in Melinda's history paper found here
http://www.leeandmelindavarian.com/Melinda#VMHist

lots of universities and other customers ordered the 360/67 ... basically a 360/65 with virtual memory hardware added ... for running tss/360. tss/360 had all sorts of development and product delivery problems. as a result many of the customers dropped back to running the machine in straight 360/65 mode ... with the standard os/360 batch operating system and pure real storage addressing.

some number of "online" "subsystem" facilities were developed for batch os/360 operation ... which took dedicated real storage and did real-storage sub-allocation, dispatching and supported terminal interaction. They tended to have to play some tricks with "swapping" application software in/out of real storage. The other choice was that all available executable software was preloaded in real storage and online operations only consisted of selecting for execution those programs already loaded. Then only data had to be moved in/out associated with specific online operations. In addition to those mentioned, there was also stuff like CICS.

In parallel with this, the science center ... 4th flr, 545 tech sq,
https://www.garlic.com/~lynn/subtopic.html#545tech

developed cp67/cms for the 360/67 which did take advantage of virtual memory hardware. Originally, the project was cp40 which was developed on a 360/40 with their own homebrew hardware modification for supporting virtual memory ... and then it morphed into cp67 when 360/67 became available.

there was some early contention between the science center cp67/cms group (something like 10-12 people) and the tss/360 group (something like 1000-1200 people at its peak) ... with the tss/360 group feeling that the science center was undermining the tss/360 activity.

Another similar activity to cambridge was univ. of michigan which built its own virtual memory operating system for 360/67 called MTS. MTS was pure virtual memory operating system ... while cp67 also included virtual machine support.

Another university implementation providing an online system for the 360/67 was wylbur/orvyl at stanford ... but it was of the online, subsystem variety running under os/360 in real storage mode. current web page:
http://www.stanford.edu/dept/its/support/wylorv/

some number of old posts mentioning MTS (Michigan terminal system)
https://www.garlic.com/~lynn/93.html#23 MTS & LLMPS?
https://www.garlic.com/~lynn/93.html#25 MTS & LLMPS?
https://www.garlic.com/~lynn/93.html#26 MTS & LLMPS?
https://www.garlic.com/~lynn/98.html#15 S/360 operating systems geneaology
https://www.garlic.com/~lynn/2000.html#91 Ux's good points.
https://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000c.html#44 WHAT IS A MAINFRAME???
https://www.garlic.com/~lynn/2000f.html#52 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#0 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001m.html#55 TSS/360
https://www.garlic.com/~lynn/2001n.html#45 Valid reference on lunar mission data being unreadable?
https://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002n.html#54 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2002n.html#64 PLX
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#10 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003f.html#41 SLAC 370 Pascal compiler found
https://www.garlic.com/~lynn/2003j.html#54 June 23, 1969: IBM "unbundles" software
https://www.garlic.com/~lynn/2003k.html#5 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003l.html#30 Secure OS Thoughts
https://www.garlic.com/~lynn/2003l.html#41 Secure OS Thoughts
https://www.garlic.com/~lynn/2004.html#46 DE-skilling was Re: ServerPak Install via QuickLoad Product
https://www.garlic.com/~lynn/2004.html#47 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004c.html#7 IBM operating systems
https://www.garlic.com/~lynn/2004n.html#4 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#25 Shipwrecks
https://www.garlic.com/~lynn/2004n.html#34 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004o.html#20 RISCs too close to hardware?
https://www.garlic.com/~lynn/2005.html#5 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#18 IBM, UNIVAC/SPERRY, BURROUGHS, and friends. Compare?
https://www.garlic.com/~lynn/2005g.html#56 Software for IBM 360/30
https://www.garlic.com/~lynn/2005k.html#20 IBM/Watson autobiography--thoughts on?
https://www.garlic.com/~lynn/2005p.html#44 hasp, jes, rasp, aspen, gold
https://www.garlic.com/~lynn/2005s.html#17 winscape?
https://www.garlic.com/~lynn/2006c.html#18 Change in computers as a hobbiest
https://www.garlic.com/~lynn/2006e.html#31 MCTS
https://www.garlic.com/~lynn/2006f.html#19 Over my head in a JES exit
https://www.garlic.com/~lynn/2006i.html#4 Mainframe vs. xSeries
https://www.garlic.com/~lynn/2006i.html#22 virtual memory
https://www.garlic.com/~lynn/2006k.html#41 PDP-1
https://www.garlic.com/~lynn/2006k.html#42 Arpa address
https://www.garlic.com/~lynn/2006m.html#42 Why Didn't The Cent Sign or the Exclamation Mark Print?
https://www.garlic.com/~lynn/2007f.html#7 IBM S/360 series operating systems history
https://www.garlic.com/~lynn/2007j.html#6 MTS *FS tape format?
https://www.garlic.com/~lynn/2007m.html#60 Scholars needed to build a computer history bibliography
https://www.garlic.com/~lynn/2007q.html#15 The SLT Search LisT instruction - Maybe another one for the Wheelers
https://www.garlic.com/~lynn/2007t.html#54 new 40+ yr old, disruptive technology
https://www.garlic.com/~lynn/2007u.html#18 Folklore references to CP67 at Lincoln Labs
https://www.garlic.com/~lynn/2007u.html#23 T3 Sues IBM To Break its Mainframe Monopoly
https://www.garlic.com/~lynn/2007u.html#84 IBM Floating-point myths
https://www.garlic.com/~lynn/2007u.html#85 IBM Floating-point myths
https://www.garlic.com/~lynn/2007v.html#32 MTS memories
https://www.garlic.com/~lynn/2007v.html#47 MTS memories
https://www.garlic.com/~lynn/2008h.html#44 Two views of Microkernels (Re: Kernels

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Mon, 12 May 2008 15:24:02
peter@taronga.com (Peter da Silva) writes:
Oh, I'm not complaining that you didn't turn it into Multics, and I quite understand the need to remain compatible (that's what motivated the Usenix papers I was referring to as well... and I don't recall them referencing you... shame on them :-> ).

re:
https://www.garlic.com/~lynn/2008h.html#73 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008h.html#76 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/submain.html#mmap
https://www.garlic.com/~lynn/submain.html#adcon

no, it wasn't that i didn't do multics memory-mapped object type stuff ... after all, they were on the 5th flr ... and i was just down the stairs on the 4th flr ... and we all tended to eat at the same lunch places ... it was that I also had to do the compatibility layer for traditional filesystem semantics.

however, since it wasn't a univ. activity ... and the majority of the stuff only shipped internally ... there weren't any external documents that people outside the company could reference.

in fact, getting some of the even trivial stuff published outside the company would meet a lot of resistance.

the following is a tale of it taking almost a year to get out a small amount of information regarding global LRU page replacement algorithms. I had worked with Jim Gray in some of these areas, since the original relational/sql implementation was done on vm370 ... and system/r used some facilities that didn't officially ship in the product
https://www.garlic.com/~lynn/submain.html#systemr

when Jim left research for Tandem ... he foisted off some amount of his internal & external contacts on me. However, later, one of his co-workers at Tandem was in the process of getting a PHD from Stanford and his work was in the area of global LRU page replacement algorithms ... and there was an enormous amount of resistance to awarding the PHD.

I had done a lot of global LRU page replacement work in the 60s as an undergraduate. Later, the grenoble science center did a "working set dispatcher" on the same cp67 base running on a 360/67 ... that included a "local LRU" page replacement strategy. They also published a CACM article on the work ... and I was provided with a lot of their backup performance details. It turns out that

the cambridge science cp67 operation with my global LRU strategy running on 768kbytes 360/67 (about 104 4k pageable pages after fixed memory requirements) with 75-80 users

got about the same response and throughput as

the grenoble science cp67 operation with their local LRU strategy running on 1mbyte 360/67 (about 155 4k pageable pages after fixed memory requirements) with 30-35 users.

aka grenoble, with 50 percent more pageable memory and half the number of users (& local LRU strategy), got about the same throughput as the cambridge system (with global LRU strategy) ... with the users running the same sort of workload.
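a toy illustration of the global vs local distinction (nothing like the cp67 code ... just the replacement policies): global LRU replaces the least recently used page across all users, while local LRU partitions the frames and replaces only within the faulting user's partition.

```python
from collections import OrderedDict

def simulate(refs, frames, local=False, users=2):
    """Count page faults for a reference string of (user, page) pairs:
    one LRU pool of `frames` (global) vs `frames // users` per user (local)."""
    pools = ({u: OrderedDict() for u in range(users)} if local
             else {None: OrderedDict()})
    cap = frames // users if local else frames
    faults = 0
    for user, page in refs:
        pool = pools[user if local else None]
        key = (user, page)
        if key in pool:
            pool.move_to_end(key)         # touched: now most recently used
        else:
            faults += 1
            if len(pool) >= cap:
                pool.popitem(last=False)  # evict least recently used
            pool[key] = True
    return faults
```

with a reference string where one user cycles through more pages than its partition would hold, the global policy covers the whole working set while the local policy thrashes ... echoing the direction of the cambridge/grenoble numbers above.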

anyway ... as noted in the copy of this old communication ... it took almost a year to get approval to send even this little bit of detail (even tho most of my work had been done as an undergraduate and most of the grenoble information had been in a CACM article)
https://www.garlic.com/~lynn/2006w.html#email821019
in this post
https://www.garlic.com/~lynn/2006w.html#46 The Future of CPUs: What's After Multi-Core?

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Tue, 13 May 2008 09:55:06
"Rostyslaw J. Lewyckyj" <urjlew@bellsouth.net> writes:
We know that IBM in its wisdom built a diagnostic support computer into its 3090 models and probably into all the later models. We have discussed that the modern systems don't offer very much in the way of diagnostics or help in debugging. You may remember that a couple, or so, of years ago IBM did put in a support processor into one of its lap tops. This was supposed to save the current configuration to allow a restart for system crashes.

recent post in thread mentioning 3090
https://www.garlic.com/~lynn/2008h.html#77 Microsoft versus Digital Equipment Corporation

i had been dragged into some of the flap over the 3090 service processor.

there was also a service processor for the 3081. the issue was that the field engineering division that serviced customer machines had a process that required being able to do a "bootstrap" diagnostic ... starting with a scope. things like TCM modules could no longer be scoped. The result was that the service processor was a "scopable" machine ... which had probes built into all sorts of places ... and which could then be used by FEs to diagnose the actual machine.

partially because of the growing complexity ... a battle was won to make the 3090 service processor a (scopable) 4331 processor running a highly customized version of vm370 release 6. A lot of the "operating system" code that had to be scaffolded from scratch for the 3081 service processor came built-in with cms running in vm370 virtual machines. Service processor menus were built using IOS3270 ... rather than engineers actually having to do the low-level display device driver as part of doing service processor menus.

This was still in the period where the favorite son operating system believed that there was no requirement for the virtual machine effort and viewed it as competition. The engineer responsible for the vm370 part of the 3090 service processor took an enormous amount of heat for the decision.

Eventually, the decision was made to replace the 4331 service processor with a pair of 4361s (still running all the highly customized vm370/cms code). The redundancy of a pair of 4361s as service processor also mitigated the requirement for the FE bootstrapped "scope'able" diagnostic process.

vm370 disfavor in the corporation began to lessen when it was necessary to start moving more and more of the function into the microcode of the machines. Modern "LPARS" have a large subset of vm370 virtual machine function built into the native "hardware". This started with Amdahl's introduction of native hardware "hypervisor" support in their machines; the 3090 response was the eventual development and deployment of PR/SM support.

other recent posts mentioning 3090
https://www.garlic.com/~lynn/2008.html#49 IBM LCS
https://www.garlic.com/~lynn/2008b.html#15 Flash memory arrays
https://www.garlic.com/~lynn/2008c.html#68 Toyota Beats GM in Global Production
https://www.garlic.com/~lynn/2008d.html#52 Throwaway cores
https://www.garlic.com/~lynn/2008d.html#57 Fwd: Linux zSeries questions
https://www.garlic.com/~lynn/2008d.html#64 Interesting ibm about the myths of the Mainframe
https://www.garlic.com/~lynn/2008d.html#82 Migration from Mainframe to othre platforms - the othe bell?
https://www.garlic.com/~lynn/2008e.html#22 Linux zSeries questions
https://www.garlic.com/~lynn/2008e.html#31 IBM announced z10 ..why so fast...any problem on z 9
https://www.garlic.com/~lynn/2008e.html#33 IBM Preview of z/OS V1.10
https://www.garlic.com/~lynn/2008e.html#40 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2008f.html#6 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2008f.html#8 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2008g.html#10 Hannaford case exposes holes in law, some say
https://www.garlic.com/~lynn/2008g.html#25 CA ESD files Options
https://www.garlic.com/~lynn/2008g.html#41 Was CMS multi-tasking?

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Tue, 13 May 2008 10:03:39
Anne & Lynn Wheeler <lynn@garlic.com> writes:
this is past reference to apparently significant cp67 deployment across the gov.
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml


re:
https://www.garlic.com/~lynn/2008h.html#74 Microsoft versus Digital Equipment Corporation

I actually didn't hear about the above activity until some years after the fact ... although when I was an undergraduate ... doing lots of kernel enhancements that were being picked up and shipped in the product ... i would get requests to do specific kinds of enhancements. From a later perspective, some of the requests likely originated from such gov. institutions since they could be considered security in nature (a decade ago, I would make the comment that some of the current operating system vendors might not even know what they don't know about security issues).

it used to be that large corporations frequently hired their CSOs from employees of fed. gov. institutions ... particularly those focused on personnel and/or physical security kinds of issues. Not long after joining the science center, the corporation hired one such former gov. employee as CSO ... and I got tasked to periodically run around with him as the computer security expert (and some amount of physical security issues rubbed off).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

New test attempt

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New test attempt
Newsgroups: alt.folklore.computers
Date: Tue, 13 May 2008 10:45:20
Peter Flass <Peter_Flass@Yahoo.com> writes:
Maybe not that, but I get awefully tired of young snots raving about how the wonderful, new VM technology is the second coming of the Deity.

re:
https://www.garlic.com/~lynn/2008h.html#64 New test attempt
https://www.garlic.com/~lynn/2008h.html#67 New test attempt
https://www.garlic.com/~lynn/2008h.html#68 New test attempt
https://www.garlic.com/~lynn/2008h.html#69 New test attempt
https://www.garlic.com/~lynn/2008h.html#70 New test attempt

i don't think that it is a question about all the claimed attributes ... other than whether specific individuals could have access that they wouldn't otherwise be able to have.

I had free and plentiful access as an undergraduate ... and totally unrelated to relationship or training with a vendor. The issue wasn't whether or not that was true ... but in that environment there were a large number of individuals for which such access couldn't be justified.

think of it sort of as return-on-investment. the technology went thru numerous generations as system costs dropped from millions, to hundreds of thousands, to tens of thousands, to thousands, to hundreds. At hundreds, the ROI justification for access has a very low threshold ... totally independent of all the other characteristics and attributes of use.

home access as an attribute isn't an issue ... i've had home access since mar70 ... however it was dial-in terminal into the datacenter ... where i would run cp67 systems in a 360/67 virtual machine running under a cp67 system running on a real 360/67 machine.

ignore all the other claimed attributes ... the only specific/significant difference between now and 40 yrs ago ... is that at hundreds of dollars per system ... the ROI threshold for lots of individuals is extremely low. they may also feel irritated that their access may have been restricted when it couldn't be justified at a higher system cost level.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Java; a POX

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Java; a POX
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 13 May 2008 12:48:07
ivan@VMFACILITY.FR (Ivan Warren) writes:
Assembler is for sissies ! Using a keyboard to input hexadecimal numbers if for script kiddies..

Real programmers flip switches to select the address, flip some more switches to input the data, rotate the mode switch to 'store' and press the execute button.


i've been known to (026) multi-punch patches into duplicated 12-2-9 "TXT" cards ... as well as entering storage modifications from the front panel of a 360/67. Knowing the format of 12-2-9 "TXT" cards ... and being able to "read" punch holes ... meant that "fanning" the deck would quickly locate the card with the data for the specific program address with the instruction(s) to be patched.

old posting of "real programmers" (also "real engineers")
https://www.garlic.com/~lynn/2001e.html#31 High Level Language Systems was Re: computer books/authors (Re: FA:

when the future system project was in full swing
https://www.garlic.com/~lynn/submain.html#futuresys

... the (highly classified) future system architecture documents had been "secured" on vm370 systems. I had some dedicated weekend time in a machine room with one such vm370 system. My time was on a 370/145 ... and the "secured" vm370 system with the future system documents was on some other machine in the room.

one of the people made some rash claim that they had so secured the documents on the system that even I wouldn't be able to gain access ... even if i was left alone in the room.

well, how could i resist???

I first asked them to disable all external access to the machine (all terminals connecting to the machine from outside the machine room).

From the 370/145 front console, I patched one bit in real storage and had access to everything on the machine. I mentioned that the countermeasure to this attack was to require authentication before being able to use the front panel functions.

From the front panel, I had flipped a bit in a conditional branch instruction in the password verification routine ... resulting in anything entered always being treated as a valid password.
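
the effect of that kind of one-bit patch can be sketched in C ... purely illustrative: the condition-code values and branch-mask layout below are made up, loosely in the spirit of the 360 BC (branch on condition) instruction, and nothing here is from the actual routine:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* made-up condition codes for this sketch */
enum { CC_EQUAL = 0, CC_NOTEQUAL = 1 };

/* BC-style conditional branch: taken iff the mask bit selected by the
   condition code is set */
static bool branch_taken(unsigned mask, int cc) {
    return (mask >> cc) & 1u;
}

/* password verification: compare, set a condition code, then branch to
   the reject path on not-equal */
static bool verify(const char *entered, const char *stored,
                   unsigned reject_mask) {
    int cc = (strcmp(entered, stored) == 0) ? CC_EQUAL : CC_NOTEQUAL;
    if (branch_taken(reject_mask, cc))
        return false;      /* branched: treated as invalid */
    return true;           /* fell through: treated as valid */
}
```

with the normal mask, a miscompare branches to the reject path; clearing that single branch bit means the routine always falls through to "valid" ... regardless of what was entered.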

for a little drift ... recent, (mainframe) security related thread
https://www.garlic.com/~lynn/2008h.html#74
https://www.garlic.com/~lynn/2008h.html#81

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Tue, 13 May 2008 15:03:24
glen herrmannsfeldt <gah@ugcs.caltech.edu> writes:
I believe it will run without swap, though, if needed.

For many years virtual memory systems allocated in virtual memory, such that there was always virtual memory to back up any page in real memory. (That is, virtual memory was never larger than the swap file size.) Now, most that I know of don't require it such that virtual memory is swap size plus real memory size (minus the unswappable part of the system).

I know have running a LinuxLive CD which runs entirely from CD-ROM with no hard disk needed. (It helps to have a lot of real memory.) It uses a union file system that allows one to write in any directory, though such files won't survive a reboot.


this is possibly the "no-dup" vis-a-vis "dup" strategy. one of the advantages of "dup" ... is that if you fetch a virtual page into real storage ... the allocation on disk is retained (i.e. a "duplicate"). the advantage is that if that virtual page is selected for replacement, it doesn't have to be written back out ... since a duplicate/copy still exists on disk.

The "no-dup" strategy deallocates the disk space when a page is brought into real storage. The advantage is less disk space is required ... especially as real storage got larger relative to disk sizes (i.e. there was a period where it was not unusual to have a 1gbyte real storage ... and disk sizes were in the range of 4-9 gbytes).

over the years i've implemented both strategies ... including a scenario where it might dynamically switch back&forth between the two ... especially when there were multiple kinds of backing store.
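
the dup/no-dup tradeoff can be sketched in a few lines of C ... the struct and function names here are purely illustrative (not from any actual implementation):

```c
#include <assert.h>
#include <stdbool.h>

/* hypothetical per-page state for the sketch */
struct page {
    bool in_memory;      /* currently resident in real storage */
    bool has_disk_copy;  /* a valid copy is still allocated on disk */
    bool dirty;          /* modified since page-in */
};

/* page-in under the "dup" strategy: the disk slot is retained */
static void page_in_dup(struct page *p) {
    p->in_memory = true;
    p->dirty = false;
    /* has_disk_copy stays true: a duplicate remains on disk */
}

/* page-in under the "no-dup" strategy: the disk slot is deallocated */
static void page_in_nodup(struct page *p) {
    p->in_memory = true;
    p->dirty = false;
    p->has_disk_copy = false;  /* slot freed, saving disk space */
}

/* at replacement time, a write-out is needed unless an up-to-date
   copy still exists on disk */
static bool needs_writeout(const struct page *p) {
    return !(p->has_disk_copy && !p->dirty);
}
```

under "dup" a clean page never needs a write-out at replacement; under "no-dup" the slot was freed at page-in, so replacement always pays for a write ... trading write traffic for disk space.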

misc. past posts mentioning duplicate/no-duplicate strategies:
https://www.garlic.com/~lynn/93.html#13 managing large amounts of vm
https://www.garlic.com/~lynn/2000d.html#13 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001l.html#55 mainframe question
https://www.garlic.com/~lynn/2002b.html#10 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002e.html#11 What are some impressive page rates?
https://www.garlic.com/~lynn/2002f.html#20 Blade architectures
https://www.garlic.com/~lynn/2003o.html#62 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2004g.html#17 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#18 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#20 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004h.html#19 fast check for binary zeroes in memory
https://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2005c.html#27 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005m.html#28 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2006c.html#8 IBM 610 workstation computer
https://www.garlic.com/~lynn/2007c.html#0 old discussion of disk controller chache
https://www.garlic.com/~lynn/2007l.html#61 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2008f.html#19 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Mainframe programming vs the Web

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe programming vs the Web
Newsgroups: bit.listserv.ibm-main
Date: Tue, 13 May 2008 15:35:58
m42tom-ibmmain@YAHOO.COM (Tom Marchant) writes:
I beg to differ. Until recently, I was running Firefox without NoScript and frequently found that both and swap space had filled up, requiring that Firefox be recycled (at a minimum) or that Linux be rebooted. I keep my software up to date.

With the installation of Noscript, and enabling Javascript only when I need it, those problems seem to be a thing of the past. At the very least, they are far less frequent.


re:
https://www.garlic.com/~lynn/2008b.html#32 Tap and faucet and spellcheckers
https://www.garlic.com/~lynn/2008b.html#35 Tap and faucet and spellcheckers
https://www.garlic.com/~lynn/2008h.html#71 Mainframe programming vs the Web

firefox earlier than version 3 (possibly sometime in the last six months) would go over a gigabyte with more than 400-600 open tabs ... but they seem to have fixed up quite a bit of stuff ... and the same set of tabs will now stay around 1/2gbyte (say a mbyte per open tab).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

What mode of payment you could think of with the advent of time?

From: Lynn Wheeler <lynn@xxxxxxxx>
Date: May 13, 2008
Subject: What mode of payment you could think of with the advent of time?
Blog: Web Development
I'm biased since I spent quite a bit of time in the mid-90s working on the x9.59 financial industry transaction standard.
https://www.garlic.com/~lynn/x959.html#x959

The x9a10 financial standard working group had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments. As a result we had to look at all kinds of retail payments (credit, debit, stored-value, etc), all kinds of payment modes (point-of-sale, face-to-face, internet, etc), and lots of different kinds of technologies (contact, contactless, cellphone, etc) ... as well as be able to handle a range of timing constraints & power limitations (i.e. transportation transit gate timing constraints along with contactless power limitations).

part of the work in x9a10 was a detailed study of all possible technologies as well as detailed threat and vulnerability studies. One of the identified areas was that much of the existing infrastructure is vulnerable to attackers eavesdropping, skimming, and/or otherwise acquiring information from existing transactions (lots of news about data breaches and security breaches ... related to forms of identity theft) ... where the attackers can leverage the information to perform fraudulent financial transactions.

The x9.59 financial standard didn't try to do anything about better hiding the information (as a countermeasure to such fraudulent transactions) ... instead it slightly tweaked the paradigm so that all that information is no longer useful to the crooks ... eliminating their ability to use the information for performing fraudulent transactions.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

New test attempt

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New test attempt
Newsgroups: alt.folklore.computers
Date: Tue, 13 May 2008 16:36:25
"Dave Wade" <g8mqw@yahoo.com> writes:
Thats true, but just because you didn't have computers in your home didn't have enough access to one to write contensious software. When I worked on Mainframes I had almost a free run with the hardware provided I did my work. I don't think any body would have stopped me producing PGP had I needed it.....

actually we looked at a sort of pgp-like thing in the early 80s for the internal network (larger than arpanet/internet from just about the beginning until about mid-85):
https://www.garlic.com/~lynn/subnetwork.html#internalnet

misc. old public key &/or other crypto related email
https://www.garlic.com/~lynn/lhwemail.html#publickey

...


https://www.garlic.com/~lynn/2007d.html#email810506
in this post
https://www.garlic.com/~lynn/2007d.html#49 certificate distribution

and


https://www.garlic.com/~lynn/2006w.html#email810515
in this post
https://www.garlic.com/~lynn/2006w.html#12 more secure communication over the network

but it wasn't until the late 90s that crypto restrictions started to be relaxed. I've posted before that in the 80s, there was the claim that the internal network had over half of all the link encryptors in the world.

I was somewhat annoyed in the HSDT project (at one point we wanted to bid on the NSFNET backbone but were prevented; the director of NSF tried to intercede by writing a letter to the company, 3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO ... but that just made the situation worse)
https://www.garlic.com/~lynn/subnetwork.html#hsdt

with the cost of the link encryptors for the high-speed links and so got involved in some stuff that would be significantly more function and handle significantly higher data rates (and could be built for <$100). Somewhere along the way we essentially got told we had done too good of a job; there were three kinds of crypto: 1) the kind they don't care about, 2) the kind you can't do, and 3) the kind you can only do for them (in this case, it was allowed we could do it ... it just was that "they" would be the only customer).

misc. posts in this thread:
https://www.garlic.com/~lynn/2008h.html#64 New test attempt
https://www.garlic.com/~lynn/2008h.html#67 New test attempt
https://www.garlic.com/~lynn/2008h.html#68 New test attempt
https://www.garlic.com/~lynn/2008h.html#69 New test attempt
https://www.garlic.com/~lynn/2008h.html#70 New test attempt
https://www.garlic.com/~lynn/2008h.html#82 New test attempt

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Annoying Processor Pricing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Annoying Processor Pricing
Newsgroups: alt.folklore.computers
Date: Wed, 14 May 2008 15:09:04
greymaus <greymausg@mail.com> writes:
I remember that story, can't remember the model. IBM would still own the machine, right?, so the customer would not have the right to throw that switch him/herself.?

one of the issues was getting commercially viable timesharing into the market. various of the cp67-based commercial timesharing service bureaus were running off leased machines.
https://www.garlic.com/~lynn/submain.html#timeshare

getting a 7x24 offering meant getting offshift expenses reduced significantly ... because offshift use tended to be quite spotty.

two big issues were:

1) the cpu meter ... which was what was used for monthly lease charges. it normally ran when the processor was busy and/or channel programs were active ... and would "coast" for 400 milliseconds after everything went idle. the trick was to come up with a channel program that would remain active as far as being able to take incoming data from remote sources (dial-in terminals, etc) ... but avoid having the cpu meter run when there weren't actually bytes being transferred.

2) most of the systems required a fairly high level of care&feeding by onsite operators. the early efforts in this area frequently came under the heading of automated operator ... i.e. automating things that (on most systems of the period) required some manual effort.

for some topic drift ... as i've mentioned before, my wife had been con'ed into going to POK to be in charge of loosely-coupled architecture (mainframe for cluster). while there she created peer-coupled shared data architecture
https://www.garlic.com/~lynn/submain.html#shareddata

which, except for "ims hot-standby" didn't see any uptake until sysplex.

however, about a decade ago, we were visiting one of the major financial transaction networks and the person running the operation commented that they had gone for years with one hundred percent availability ... and that was primarily attributed to
1) ims hot-standby
2) automated operator


i.e. as both hardware and software became more reliable ... the remaining sources of failures/outages were a) various kinds of environmental glitches (their ims hot-standby included remote systems at geographic distances) and b) human mistakes (almost entirely eliminated with automated operator).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Credit Crisis Timeline

From: Lynn Wheeler <lynn@xxxxxxxx>
Date: THU MAY 15, 09:26:00 AM EST
Subject: Credit Crisis Timeline
Blog: Credit Crisis Timeline
re:
http://edsforum.blogspot.com/2008/05/credit-crisis-timeline.html

toxic CDOs were used two decades ago during the S&L crisis to obfuscate underlying value
http://articles.moneycentral.msn.com/Investing/SuperModels/AreWeHeadedForAnEpicBearMarket.aspx

long-winded, decade old post including mention of needing visibility into CDO-like instruments
https://www.garlic.com/~lynn/aepay3.htm#riskm

PBS program looking at the repeal of Glass-Steagall ... which had been passed after the crash of '29 to keep the risky, unregulated investment banking separate from safety & soundness of regulated banking
http://www.pbs.org/wgbh/pages/frontline/shows/wallstreet/

recent post drawing analogy toxic CDOs subverting Boyd's OODA-loop
https://www.garlic.com/~lynn/2008g.html#4

Business school article estimating that possibly 1000 people are responsible for 80% of the current mess and that it would go a long way if the gov. would figure out how they might lose their jobs
http://knowledge.wharton.upenn.edu/article.cfm?articleid=1933 (gone 404 and/or requires registration)

...

here is another

Understanding the Subprime Mortgage Crisis
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1020396

i have some of my comments here:
https://www.garlic.com/~lynn/2008h.html#48

basically "subprime" loans were targeted at first-time buyers with no credit history. they were subprime in another sense ... since the initial teaser rate was well below the prime rate. The stats say that a majority (nearly 2/3rds) of the "subprime" loans actually went to borrowers that had good credit histories (not the original intended audience) ... possibly speculators that planned on flipping the property before the teaser period finished.

CDOs were used to obfuscate the underlying value. When problems started showing up with rating of (subprime) toxic CDOs, there was a rush to dump all toxic CDOs (regardless of kind). Analogy is contamination of consumer products ... they are all recalled or pulled off the shelf and dumped in the trash (since nobody really knows how many are at risk).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

subprime write-down sweepstakes

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: subprime write-down sweepstakes
Newsgroups: alt.folklore.computers
Date: Thu, 15 May 2008 14:04:46
jmfbah <jmfbahciv@aol> writes:
It was worse than that. CDOs eliminated any requirement for any value. That reminds me of a College here whose managers bankrupted it by pocketing loans using expensed items as collateral. The banks gave these loans out merrily; I always suspected kickbacks in that one. This was before Milken came up with his scenarios.

one of the business shows commented on Bernanke's statements today saying that he has gotten quite repetitive about new regulations (like Basel2) fixing the situation.

they then went on to say that american bankers are the most inventive in the world ... that they have managed to totally screwup the system at least once a decade regardless of the measures put in place attempting to prevent it.

recent cross-over comment in credit crisis timeline blog entry
https://www.garlic.com/~lynn/2008h.html#89 Credit Crisis Timeline

for other topic drift ... there was a recent comment in another blog about one of the contributing factors to the questionable ratings for CDOs, referred to as "shopping <something>" ... the mortgage originators were paying for the ratings on the toxic CDOs they were selling ... and would shop their toxic CDOs around to the different rating services until they found one that would give them the rating they wanted.

some of that is related to some of my general comments about institutions that are paying for rating/ranking/audit being able to influence the rating/ranking/audit:
https://www.garlic.com/~lynn/aadsm28.htm#46 The bond that fell to Earth
https://www.garlic.com/~lynn/aadsm28.htm#61 Is Basel 2 out...Basel 3 in?
https://www.garlic.com/~lynn/aadsm28.htm#66 Would the Basel Committee's announced enhancement of Basel II Framework and other steps have prevented the current global financial crisis had they been implemented years ago?
https://www.garlic.com/~lynn/aadsm28.htm#67 Would the Basel Committee's announced enhancement of Basel II Framework and other steps have prevented the current global financial crisis had they been implemented years ago?
https://www.garlic.com/~lynn/2008f.html#1 independent appraisers
https://www.garlic.com/~lynn/2008f.html#57 independent appraisers
https://www.garlic.com/~lynn/2008f.html#71 Bush - place in history
https://www.garlic.com/~lynn/2008g.html#32 independent appraisers
https://www.garlic.com/~lynn/2008g.html#44 Fixing finance
https://www.garlic.com/~lynn/2008g.html#51 IBM CEO's remuneration last year ?

past posts in this thread:
https://www.garlic.com/~lynn/2008h.html#1 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008h.html#28 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008h.html#32 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008h.html#48 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008h.html#49 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008h.html#51 subprime write-down sweepstakes

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Thu, 15 May 2008 16:10:03
Andrew Swallow <am.swallow@btinternet.com> writes:
Also does write-back preserve the update order used by the software? The order matters when performing inter-CPU (core) communication.

general write-backs are part of the typical cache-replacement algorithm ... i.e. a cache line is written back when that cache line is needed for some other piece of data.

multiprocessor (hardware processor) caches have gone thru an evolution somewhat analogous to distributed database caches.

at one point multiprocessor caches would always assume exclusive control ... if a line was in any other cache ... it had to be flushed to memory first ... before being fetched.

then there were optimizations where duplicate r/o copies would be allowed ... and flushing would only happen if some processor needed to modify the cache line.

a small number of processors might arbitrarily broadcast invalidate signals for any piece of data they intended to take exclusive control of. various other similarities.

one of the things about smp test&set locking from the 60s was that write-thru caches were frequently assumed ... i.e. bracket the critical section with a test&set lock ... the test&set instruction would exclusively set the storage location (serializing across all operations in the system) and then the processor was assumed to have exclusive access to the storage "inside" the test&set ... until the test&set lock was cleared/reset.
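
a minimal sketch of that test&set bracketing, using C11 atomics as a stand-in ... the atomic_flag exchange plays the role of the test&set instruction (an illustration, not the 360 code):

```c
#include <assert.h>
#include <stdatomic.h>

/* the lock word: test&set atomically sets it and returns the old value */
static atomic_flag ts_lock = ATOMIC_FLAG_INIT;

/* enter the critical section: spin until the previous value was clear */
static void acquire(void) {
    while (atomic_flag_test_and_set(&ts_lock))
        ;   /* somebody else is inside the bracketed section */
}

/* leave the critical section: clear the lock word */
static void release(void) {
    atomic_flag_clear(&ts_lock);
}
```

the seq_cst semantics of atomic_flag_test_and_set give the serialization that the write-thru assumption provided in the 60s designs.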

when charlie was doing smp work on cp67 at the science center,
https://www.garlic.com/~lynn/subtopic.html#545tech

he invented smp compare&swap instruction
https://www.garlic.com/~lynn/subtopic.html#smp

which found its way into the 370 processor line (after overcoming some amount of resistance). the compare&swap instruction not only could be used for lock serialization (ala the test&set instruction) ... but could also be used for directly updating storage locations (w/o requiring a separate locking operation). In the situation where compare&swap was directly updating storage ... the hardware could special-case the serialization across multiple caches (for both the store-thru as well as store-into/write-back scenarios). we took advantage of this in the design of a 16-way 370 SMP that we worked on in the mid-70s (that never shipped as a product) where the caches were not otherwise coordinated ... except in the case of the compare&swap instruction.

About the same time that we were working on distributed lock manager for ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp
& distributed database scale-up ... referenced in this post:
https://www.garlic.com/~lynn/95.html#13
and these old emails:
https://www.garlic.com/~lynn/lhwemail.html#medusa

we also had gotten involved with SCI, which included a standardized specification for (hardware processor cache) memory consistency using a directory based protocol.

There was some synergy between the SCI hardware processor cache consistency specification and what we were implementing in the ha/cmp distributed lock manager scale-up.

For other drift, early on, we had gotten feedback from one of the RDBMS vendors about the "ten things that were wrong in the vax/vms cluster lock manager". In part, because we were starting from scratch and didn't have to worry about a lot of legacy issues ... we had more latitude in our scale-up distributed lock manager implementation.

for other topic drift ... past posts mentioning original relational/sql implementation
https://www.garlic.com/~lynn/submain.html#systemr

misc. past posts mentioning ha/cmp DLM activity:
https://www.garlic.com/~lynn/2001e.html#4 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001j.html#47 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2002e.html#67 Blade architectures
https://www.garlic.com/~lynn/2002e.html#71 Blade architectures
https://www.garlic.com/~lynn/2002f.html#1 Blade architectures
https://www.garlic.com/~lynn/2002f.html#4 Blade architectures
https://www.garlic.com/~lynn/2002f.html#5 Blade architectures
https://www.garlic.com/~lynn/2002f.html#6 Blade architectures
https://www.garlic.com/~lynn/2002f.html#7 Blade architectures
https://www.garlic.com/~lynn/2004q.html#71 will there every be another commerically signficant new ISA?
https://www.garlic.com/~lynn/2005f.html#32 the relational model of data objects *and* program objects
https://www.garlic.com/~lynn/2005h.html#26 Crash detection by OS
https://www.garlic.com/~lynn/2006c.html#8 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006o.html#33 When Does Folklore Begin???
https://www.garlic.com/~lynn/2007c.html#42 Keep VM 24X7 365 days

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Annoying Processor Pricing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Annoying Processor Pricing
Newsgroups: alt.folklore.computers,comp.arch
Date: Thu, 15 May 2008 17:57:39
Quadibloc <jsavard@ecn.ab.ca> writes:
Think of Seymour Cray using an Apple computer to design a Cray... and Apple using a Cray to design their Macintoshes. When your volumes are big, it makes sense to do what appears to be an insane design effort to slash a few pennies from the manufacturing cost.

one of the people i worked with had left and worked on various projects, including using a cray to simulate macintosh interface human factors. using a cray and an extremely high-speed display, it was possible to do detailed simulation of a wide variety of visual/graphic features involved in human interaction (varying resolution, speed, etc).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Annoying Processor Pricing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Annoying Processor Pricing
Newsgroups: comp.arch
Date: Thu, 15 May 2008 18:09:48
gavin@allegro.com (Gavin Scott) writes:
True, but I think it reflected the fact that most manufacturers were hardware-driven and saw the need for an operating system as more of an annoyance that got in the way of selling systems to customers.

Being able to run the "same" operating system as everyone else let the hardware guys get on with what they thought was important, making ever more whizzo hardware, with out regard to how good[1] the software that came with it was.


earlier computer generations had evolved very complex & expensive (proprietary) software. the early 80s saw a significant reduction in hardware-related system costs ... frequently by using commodity chips (instead of manufacturing built up from a very large number of discrete components).

in effect, unix represented an analogous shift in cost reduction for software.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Thu, 15 May 2008 18:20:06
glen herrmannsfeldt <gah@ugcs.caltech.edu> writes:
Compare and Swap compares one register with memory, and if the value is the same as the register, stores a different register into memory. If the compare fails, the current value is loaded into the register. A test of the condition code indicates that the compare failed and one should loop back to try again.

It seems that with CS the loop should use the value that CS loaded into the register. If one instead refetches the value from memory the result might be wrong due to the cache.


re:
https://www.garlic.com/~lynn/2008h.html#91 Microsoft versus Digital Equipment Corporation

(the original) compare&swap (implementation) was defined to serialize/coordinate all multiprocessor and cache effects (in some implementations this was equivalent to enforcing very strong memory consistency ... even if other operations in the system only followed weak memory consistency definition)
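the retry convention glen describes ... reuse the value that CS loaded into the register on failure, rather than refetching from memory ... can be sketched with the C11 atomics analogue (a rough modern sketch, not the original 370 assembler; atomic_compare_exchange_strong likewise writes the current memory value back into the "expected" operand on failure):

```c
#include <stdatomic.h>

static _Atomic int counter = 0;

/* add 'delta' to counter with a CS-style retry loop */
void atomic_add(int delta)
{
    int old = atomic_load(&counter);   /* one initial fetch */
    while (!atomic_compare_exchange_strong(&counter, &old, old + delta)) {
        /* on failure, 'old' already holds the fresh memory value,
           so the loop retries without refetching */
    }
}
```

as with the 370 instruction, the compare-exchange itself is a full serialization point, so the loop stays correct even when other code in the system runs with weaker ordering.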

lots of past smp &/or compare&swap posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Old hardware

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Old hardware
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 15 May 2008 19:46:07
eamacneil@YAHOO.CA (Ted MacNEIL) writes:
I went to the University of Waterloo (1976-1980), and I was told it was a 360/40, at the time. But, I do remember when it was replaced with a 303x machine.

quicky search engine; random web URL:
http://www.therecord.com/links/generic_020607102451.html

from above:
By 1967, the university had an IBM 360/75 the largest computer in Canada. It filled a room the size of a gymnasium and was designated as a backup for NASA's Apollo space missions.

... snip ...

and more:
http://www.cs.uwaterloo.ca/40th/Chronology/1974.shtml

other drift from above:
Waterloo SCRIPT was developed as one of the early text formatting systems. One version of SCRIPT was created at MIT and the AA/CS at UW took over project development in 1974. The program was first used at UW in 1975. In the 1970s, SCRIPT was the only practical way to word process and format documents using a computer. By the late 1980s, the SCRIPT system had been extended to incorporate various upgrades. (Cowan, Graham, Mackie et al. 29).

... snip ...

i.e. waterloo script was a clone of the CMS (cp67 and/or vm370) script command created at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

various computer related pictures from waterloo, including 360/75
http://www.cs.uwaterloo.ca/40th/Chronology/1967.shtml

following has reference to 360/75 being put up for auction (and "the console was mounted on a wall in the Red Room")
http://www.cs.uwaterloo.ca/40th/Chronology/1979.shtml

picture of decommissioning 360/75
http://www.cs.uwaterloo.ca/40th/Chronology/1980.shtml

in the early 90s, we were doing several ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

marketing tours on the far side of the pacific. on one tour at a bank hdqtrs, a recent waterloo graduate happened to mention that they had studied the wheeler scheduler at waterloo.

misc. scheduler related posts
https://www.garlic.com/~lynn/subtopic.html#fairshare

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Old hardware

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Old hardware
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 15 May 2008 22:32:14
wmhblair@COMCAST.NET (William H. Blair) writes:
In fact, so many 360/50 machines were being returned to IBM (off rental), being replaced by either a 370/155 or 370/165, that IBM was able to plan to build all core memory units for the 370/165 processor from used (although re-built with new electronics) 360/50 core memory. It took 4 of the usual size (512 KB) 360/50 machines to get enough core memory (2 MB) for the smallest 370/165. There was enough to go around. IBM, in fact, sent the vast majority of their off-rental 360/50 core memory to the landfill (for tax/depreciation reasons). They were happy to do so, because 360/40 and 360/50 customers were snapping up 370/155 boxes like hotcakes. A bunch got a hold of 370/165s as well, but unfortunately too many folks leased or bought one outright. (Both of these boxes were later to be deemed boat anchors when the OS/VS1 and OS/VS2 SVS + MVS announcement was made, since the "DAT box" upgrades to them were so horribly expensive.)

re:
https://www.garlic.com/~lynn/2008h.html#95 Old hardware

DAT box retrofit for 370/165 was a bear. vm370 had taken advantage and used a bunch of new features that were part of 370 virtual memory architecture ... especially in support of cms virtual machines ... and it was all running on 370/145s internally.

however, 165 engineers were having a hard time implementing the full 370 virtual memory architecture. finally there was an escalation meeting where the 165 engineers proposed dropping a whole bunch of 370 virtual memory hardware features in order to cut six months off their engineering development cycle. Eventually they got their way ... so that 370 virtual memory announce and ship didn't have to slip an additional six months. However, everybody else (both hardware & software) that had already finished implementing the full 370 virtual memory architecture had to go back and pull everything dropped on behalf of the 165 (this included vm370 having to put together a real Q&D kludge for things like supporting shared segments across cms virtual machines).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Is virtualization diminishing the importance of OS?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@xxxxxxxx>
Date: May 17, 2008
Subject: Is virtualization diminishing the importance of OS?
Blog: Technology
There have been a number of articles over the past two years about how single, large monolithic operating systems have become extremely unwieldy, bloated and difficult to deal with. The virtualization metaphor frequently referenced is virtual appliances ... attempting to inject some order, sanity and KISS back into server and personal computing.

In the late 60s and early 70s, the metaphor was being referred to as service virtual machines. At the time, it still required manual intervention to start the services at boot (even after automated booting was introduced).

I had created an automated startup process, originally as part of an automated benchmarking process ... but it was quickly adopted for general production operation and released in the standard virtual machine product.

old posts referencing automated benchmarking, workload profiling, and early work leading to capacity planning
https://www.garlic.com/~lynn/submain.html#bench

--
40+yrs virtualization experience (since Jan68), online at home since Mar70




previous, next, index - home