List of Archived Posts

2007 Newsgroup Postings (02/23 - 03/08)

ISA Support for Multithreading
Designing database tables for performance?
Securing financial transactions a high priority for 2007
The Genealogy of the IBM PC
The Genealogy of the IBM PC
Is computer history taugh now?
GCN at 25: VAX for the memory
A way to speed up level 1 caches
ISA Support for Multithreading
The Genealogy of the IBM PC
A way to speed up level 1 caches
A way to speed up level 1 caches
Securing financial transactions a high priority for 2007
time spent/day on a computer
Cycles per ASM instruction
The Genealogy of the IBM PC
Attractive Alternatives to Mainframes
A way to speed up level 1 caches
Is computer history taught now?
Cycles per ASM instruction
Securing financial transactions a high priority for 2007
A way to speed up level 1 caches
A way to speed up level 1 caches
Securing financial transactions a high priority for 2007
Securing financial transactions a high priority for 2007
A way to speed up level 1 caches
Securing financial transactions a high priority for 2007
IBM S/360 series operating systems history
Securing financial transactions a high priority for 2007
Securing financial transactions a high priority for 2007
Health Care
Quote from comp.object
I/O in Emulated Mainframes
IBM S/360 series operating systems history
Is computer history taught now?
FBA rant
Quote from comp.object
Quote from comp.object
FBA rant
FBA rant
FBA rant
IBM S/360 series operating systems history
FBA rant
FBA rant
Is computer history taught now?
time spent/day on a computer
FBA rant
time spent/day on a computer
time spent/day on a computer
Is computer history taught now?
Is computer history taught now?
FBA rant
US Air computers delay psgrs
time spent/day on a computer
time spent/day on a computer
time spent/day on a computer
Grilled Turkey
Health Care
Securing financial transactions a high priority for 2007
FBA rant
FBA rant
Securing financial transactions a high priority for 2007
Securing financial transactions a high priority for 2007
FBA rant
FBA rant
Securing financial transactions a high priority for 2007

ISA Support for Multithreading

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ISA Support for Multithreading
Newsgroups: comp.arch,comp.arch.embedded
Date: Fri, 23 Feb 2007 10:00:31 -0700
"Chris Thomasson" <cristom@comcast.net> writes:
DWCAS instruction and a hardware generated "heartbeat-like" interrupt per-synchronization epoch should be all you need to create lock-free algorithms that scale up to many thousands of cores...

prev. post in thread
http://www.garlic.com/~lynn/2007d.html#67 ISA Support for Multithreading

CAS was invented by Charlie when he was doing work on fine-grain locking for cp67 (CAS was chosen because they are Charlie's initials ... afterwards, a mnemonic had to be come up with that matched his initials).

the initial attempts to get CAS into 370 met with some resistance ... the claim being that an SMP-specific instruction couldn't be justified for 370 (test&set was sufficient) ... so to get it included in 370, a non-SMP justification needed to be created. as a result, came up with the use of CAS for atomic updates in a (software) multi-threaded environment (whether running on a single processor or a multiprocessor) ... along with the various programming notes that were (originally) included in the 370 principles of operation. also as part of that exercise, both single-word and double-word versions were done, and the mnemonics changed to CS (compare and swap) and CDS (compare double and swap).
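the atomic-update convention those programming notes describe ... fetch the old value, compute a new value, and let compare&swap store it only if nothing changed in between ... can be sketched with C11 atomics (my own minimal illustration of the retry loop, not a transcription of the principles of operation example):

```c
#include <stdatomic.h>
#include <stdint.h>

/* shared word updated without a lock, in the style of the CS
   (compare and swap) programming notes: fetch the old value,
   compute the new value, and retry if another thread changed
   the word in between */
static _Atomic uint32_t counter;

void atomic_increment(void)
{
    uint32_t old = atomic_load(&counter);
    /* compare_exchange plays the role of CS: it stores old+1 only if
       counter still equals old; on failure it refreshes old with the
       current value and the loop retries */
    while (!atomic_compare_exchange_weak(&counter, &old, old + 1))
        ;
}
```

the same pattern works for any single-word update (flags, counts, list heads) ... the loop only retries when there is actual contention.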

since then instructions have been extended for both 32bit and 64bit mode ... more recent compare and swap
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/7.5.28?DT=20040504121320

and compare double and swap
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/7.5.28?DT=20040504121320

appendix "Multiprogramming and Multiprocessing Examples" programming notes
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/A.6?DT=20040504121320
• A.6.1 Example of a Program Failure Using OR Immediate
• A.6.2 Conditional Swapping Instructions (CS, CDS)
• A.6.3 Bypassing Post and Wait
• A.6.4 Lock/Unlock
• A.6.5 Free-Pool Manipulation
• A.6.6 PERFORM LOCKED OPERATION (PLO)


... snip ...

and as above ... compare&swap has since been augmented with the "PERFORM LOCKED OPERATION (PLO)" instruction

described here:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/7.5.107?DT=20040504121320

collected past posts mentioning multiprocessor and/or CAS instruction
http://www.garlic.com/~lynn/subtopic.html#smp

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Designing database tables for performance?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Designing database tables for performance?
Newsgroups: comp.databases.theory
Date: Fri, 23 Feb 2007 16:48:21 -0700
paul c <toledobythesea@oohay.ac> writes:
I don't remember when I first heard or read the term "logical I/O". It might have been in the early 1970's when IBM's VSAM access method first hit the streets. I'm pretty sure it was current in some circles then. Codd had written his first papers then but practically nobody in industry was even aware of them as IBM was pushing IMS and Vandl hard and people who knew him then told me later that there were big marketing forces at IBM that made working at the same company very difficult for him both personally and professionally. The term is a very unfortunate one since I'm sure it misleads many newcomers to IT. As we can see here, it misleads many others.

(Anybody who was programming then has an historical advantage over younger people because it is so much easier to see what a revolution Codd started.)


original relational/sql implementation was system/r done all on vm370 ... misc. collected posts mentioning system/r activity:
http://www.garlic.com/~lynn/submain.html#systemr

recent thread mentioning various things from system/r days:
http://www.garlic.com/~lynn/2007d.html#4 Jim Gray Is Missing
http://www.garlic.com/~lynn/2007d.html#6 Jim Gray Is Missing
http://www.garlic.com/~lynn/2007d.html#8 Jim Gray Is Missing
http://www.garlic.com/~lynn/2007d.html#17 Jim Gray Is Missing
http://www.garlic.com/~lynn/2007d.html#33 Jim Gray Is Missing

vm370 was a follow-on to cp67 ... which had implemented both virtual memory and virtual machines in the mid-60s ... done at the cambridge science center. some of the people from CTSS had gone to the science center on the 4th flr of tech sq (worked on cp67, cms and other things)
http://www.garlic.com/~lynn/subtopic.html#545tech

GML (precursor to sgml, html, xml, etc) was also invented at the science center in 1969.

others from CTSS went to the 5th flr and worked on multics. the multics group managed to bring out the first commercial relational database product (MRDS):
http://www.multicians.org/mgm.html#MRDS

it was into the 80s before tech. transfer from SJR to Endicott succeeded with SQL/DS product ... and even longer for tech. transfer of SQL/DS from Endicott back to STL for DB2.

when virtual memory for 370 was announced, for whatever reason, they chose the term "virtual storage" (instead of virtual memory) ... from that comes dos/vs, vs1, vs2, svs, mvs, vsam, etc (all the "VSs")

various past posts mentioning quote from the mid-60s about "A system of that period that had implemented virtual memory was the Ferranti Atlas computer, and that was known not to be working well"
http://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)
http://www.garlic.com/~lynn/2001h.html#10 VM: checking some myths.
http://www.garlic.com/~lynn/2001h.html#26 TECO Critique
http://www.garlic.com/~lynn/2002.html#42 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
http://www.garlic.com/~lynn/2003b.html#1 Disk drives as commodities. Was Re: Yamhill
http://www.garlic.com/~lynn/2003m.html#34 SR 15,15 was: IEFBR14 Problems
http://www.garlic.com/~lynn/2005o.html#4 Robert Creasy, RIP
http://www.garlic.com/~lynn/2006i.html#30 virtual memory

lots of collected posts mentioning virtual memory, demand paging, and replacement algorithms (virtual memory and/or various kinds of "caches")
http://www.garlic.com/~lynn/subtopic.html#wsclock

Securing financial transactions a high priority for 2007

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Securing financial transactions a high priority for 2007
Newsgroups: alt.folklore.computers
Date: Sat, 24 Feb 2007 07:33:36 -0700
jmfbahciv writes:
If you haven't heard about it, you might find the news about Stop&Shop grocery stores last week. Not the strike but the ones about some transmitter put into the card swipe machines if they were movable. (I didn't understand this one.)

isn't so much whether they are movable ... but how easy it is to replace the original with one that has been compromised (say a couple minutes of fiddling at an empty/vacant check lane)

i do a small weekday-morning mailing list of news URLs ...

small sample from yesterday morning, stop&shop hasn't been getting nearly the play that TJX has received
Data Thieves Hit Stop & Shop
http://www.consumeraffairs.com/news04/2007/02/stop_n_shop.html
Update: TJX says Data breach worse than previously believed
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9011655
Customer Data Breach Began in 2005, TJX Says
http://www.washingtonpost.com/wp-dyn/content/article/2007/02/21/AR2007022102039.html
TJX: Data breach worse than previously believed
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9011655&intsrc=hm_list
Retailers to Hold Data-Breach Bag?
http://www.internetnews.com/bus-news/article.php/3661516
TJX Reveals Further Data breaches
http://www.epaynews.com/index.cgi?survey=&ref=browse&f=view&id=117223242421320215117&block=
Mass. bill wants stores to pay more in data breaches
http://news.com.com/Mass.+bill+wants+stores+to+pay+more+in+data+breaches/2100-7348_3-6161536.html
Mass. bill wants stores to pay more in data breaches
http://news.zdnet.com/2100-1009_22-6161536.html
TJX security breach fears grow
http://www.theregister.co.uk/2007/02/22/tjx_security_breach/
TJX Data Breach Worse than Previously Believed
http://www.pcworld.com/article/id,129291-c,privacysecurity/article.html
Data Breach Hits Close to Home
http://blog.washingtonpost.com/securityfix/2007/02/johns_hopkins_data_breach_stri_1.html?nav=rss_blog
Mass. Bill Would Make Retailers Pay for Data breaches
http://blog.washingtonpost.com/securityfix/2007/02/bill_would_make_retailers_pay.html?nav=rss_blog
Pharming Attack Slams 65 Financial Targets
http://www.informationweek.com/showArticle.jhtml?articleID=197008230
Bearing the Cost of Stolen Data - The Checkout
http://blog.washingtonpost.com/thecheckout/2007/02/bearing_the_cost_of_stolen_dat.html?nav=rss_blog
TJX: Data Theft Began in 2005; Data Taken from 2003
http://www.baselinemag.com/article2/0,1540,2097672,00.asp
ID Security: Is That Really You? ID Theft and Multifactor Authentication, Part 1
http://www.technewsworld.com/story/OOdTQ59zkfNJj1/Is-That-Really-You-ID-Theft-and-Multifactor-Authentication-Part-1.xhtml


... snip ...

a few stop & shop items from earlier in the week
Stop & Shop PIN Pads Breached As Conn. Removes Worker Data From Site
http://www.informationweek.com/showArticle.jhtml?articleID=197007473
Stop & Shop's PIN Pad Breach Follows Similar Cases in Canada
http://www.digitaltransactions.net/newsstory.cfm?newsid=1254
Stop & Shop(lifters) swipe card data
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9011580&intsrc=hm_list
Stop & Shop acknowledges security breach
http://searchsecurity.techtarget.com/originalContent/0,289142,sid14_gci1244190,00.html
Stop & Shop reports credit data was stolen
http://www.boston.com/business/globe/articles/2007/02/19/stop__shop_reports_credit_data_was_stolen/
Data Thieves Hit Stop & Shop
http://www.consumeraffairs.com/news04/2007/02/stop_n_shop.html


... snip ...

and of course, my oft referenced security proportional to risk posting
http://www.garlic.com/~lynn/2001h.html#61

and lots of past posts in this thread:
http://www.garlic.com/~lynn/2006y.html#7 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2006y.html#8 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007.html#0 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007.html#5 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007.html#6 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007.html#27 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007.html#28 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007b.html#60 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007b.html#61 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007b.html#62 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007b.html#64 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#6 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#8 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#10 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#15 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#17 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#18 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#22 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#26 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#27 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#28 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#30 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#31 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#32 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#33 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#35 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#36 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#37 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#38 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#39 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#40 Point-of-Sale security
http://www.garlic.com/~lynn/2007c.html#43 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#44 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#46 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#51 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#52 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#53 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007d.html#0 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007d.html#5 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007d.html#11 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007d.html#26 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007d.html#68 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007d.html#70 Securing financial transactions a high priority for 2007

The Genealogy of the IBM PC

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Genealogy of the IBM PC
Newsgroups: alt.folklore.computers
Date: Sat, 24 Feb 2007 08:27:28 -0700
Charlton Wilbur <cwilbur@chromatico.net> writes:
More succinctly: the thing that made the IBM PC so wildly successful was (and is) the thing that's making it so unprofitable.

The IBM PC was easily duplicated, and by the mid-1980s was widely cloned. IBM had pretty much lost control by 1987 or so when the PS/2 line of computers came out: they attempted to regain control by introducing a new, incompatible expansion bus, but the market fairly decisively went the other way to PCI.


I've repeatedly claimed that the PC had to break into the general market first ... and out of the hobbyist/techno/nerd market segment. there was a huge chicken & egg problem. why would anybody (in the general market) pay out quite a bit of money for something they had never bought before, never seen before, and had no real idea what it was?

Until there was a substantial market base and market demand ... the clone makers weren't going to apply all the commoditization that would further fuel the market.

my brother used to be a regional marketing rep for apple ... and i attended a couple of dinners with some of the mac developers (before the mac was announced) where we would have this argument/discussion (what was needed for a personal computing device to break into the general market).

as before, my claim has been that 3270 terminal emulation was a relatively no-brainer financial decision. a person (or corporation) could decide to pay out money for something they had already decided to buy anyway ... and get a model with a few extra bells & whistles ... "extras" that (except for a small number of computer nerds) nobody had any idea what they were ... desktop/personal computing ... which appeared to come along as a nearly no-cost, no-risk add-on.

once a large number of these devices were out there ... and some of the general public had experience using them (as corporate terminals) and had also been exposed a little to what desktop computing seemed to be ... it became much easier for somebody in the general public (as opposed to the computer nerd community) to make a decision to pay out a significant amount of money.

recent posts on this subject
http://www.garlic.com/~lynn/2007d.html#43 Is computer history taugh now?
http://www.garlic.com/~lynn/2007d.html#44 Is computer history taugh now?
http://www.garlic.com/~lynn/2007d.html#50 Is computer history taugh now?
http://www.garlic.com/~lynn/2007d.html#56 Is computer history taugh now?
http://www.garlic.com/~lynn/2007d.html#59 Is computer history taugh now?

The Genealogy of the IBM PC

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Genealogy of the IBM PC
Newsgroups: alt.folklore.computers
Date: Sat, 24 Feb 2007 09:05:00 -0700
Anne & Lynn Wheeler <lynn@garlic.com> writes:
my brother used to be a regional marketing rep for apple ... and i attended a couple of dinners with some of the mac developers (before the mac was announced) where we would have this argument/discussion (what was needed for a personal computing device to break into the general market).

re:
http://www.garlic.com/~lynn/2007e.html#3 The Genealogy of the IBM PC

Jim Gray, some others, and I had gone down a similar path a couple years earlier ... how to attract the general corporate community to (CMS online) personal computing ... beyond the relatively small number of programmers (i.e. from tens of thousands of programmers to the rest of the corporation, a couple hundred thousand people).

The obvious approach was to promote internal network and electronic mail communication ... as referenced in this post
http://www.garlic.com/~lynn/2007d.html#17 Jim Gray is Missing

in a late night session we came up with providing online telephone books ... something that could appeal to corporate executives.

There was an interesting "tipping" point that snowballed. There was a semi-annual budget process that included the allocation of 3270 terminals for internal use. Individuals wanting their own desktop 3270 needed vp-level executive approval (and it had to come out of the specific organization's allocation). Then there was one quarter where it became known that a small number of top corporate executives had started using email for communication. This was observed by the mid-level executives ... which created an immediate demand for 3270 terminals for doing CMS email (mostly in the form of PROFS). There was a six month period which saw nearly the complete internal allocation of 3270s for programmers and engineers (to do their jobs) being preempted and redirected to executives ... because it became the "in-thing" for executives to have email. Actually, normally two 3270 terminals were needed, one for the executive (so they could claim they had online email) and one for their secretary (who, for the most part, was the person that actually handled the email).

It was also in that six month period that I did a business case justification showing that the cost of a 3270 terminal, fully amortized over three years, was nearly identical per month to the monthly cost of the business phone that was standard on each employee's desk. Business phones were supplied as a normal matter (w/o requiring vp sign-off) ... so why couldn't 3270 terminals also be supplied for every employee's desk (w/o needing an executive to approve each individual 3270)?

Is computer history taugh now?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is computer history taugh now?
Newsgroups: alt.folklore.computers
Date: Sat, 24 Feb 2007 11:10:10 -0700
krw <krw@att.bizzzz> writes:
No, Fairchild was the manufacturer of the test equipment. The VAX was purchased through Fairchild, along with hardware and software, to control the test floor. IBM owned the hardware. The division was rather clear. For various reasons we didn't want to (but were forced to, in the end) purchase a service contract for the hardware. Fairchild suggested we buy the contract (less the 20% uplift) directly from DEC, but they refused our queries. Blue money was no good.

At least we could get Tektronix to service our PDP-11s. They had to call DEC once and that was a mess too, but it was resolved.


there was also the tektronix "3277-GA" ... i.e. a tektronix display that hung off the side of a 3277 terminal.

the "instrument division" (in conn) sold a 68k lab machine ... old passing reference
http://www.garlic.com/~lynn/2003b.html#5 Card Columns

a reference here
http://www.old-computers.com/museum/computer.asp?st=1&c=623

there are a few web pages that mention boca had considered 68k for acorn ... but among other things, motorola couldn't commit to the volumes.

some old email ... this talks about both the TSS "unix" prpq for at&t ... i.e. a stripped down TSS kernel with UNIX layered on top ... as well as a reference to early work on what became xt/370:

Date: 80/04/04 10:04:35
To: wheeler

what i heard yesterday from a fellow in communications industry marketing who is the connection between ibm and the labs prpq people is that they expected to be done by august, and will probably be done by the end of the year instead. XXXXXX from bell said they hadn't started writing code yet at share. the fellow at amdahl is named YYYYYY. by the way, i think they are seriously considering doing it.. they want the benefits of a relatively large address space and nonflakey hardware. the tss supervisor isn't a terribly large price to pay for this, and they feel comfortable with it, being one of the last strongholds of tss anyway.

what the fellow from comm ind was interested in was whether there was a c compiler anywhere within ibm. answer, no. he said several people have expressed interest to him, including the people in endicott who are using 68000 to emulate 360/370, and want to write microcode in c. some recent developments lead me to believe that we can do a unix port effort somewhere within ibm around now.. the time is ripe, lots of people are interested and we have some unix expertise around.

will send more mail on this later.


... snip ...

a few other old emails on the tss prpq (for unix) subject
http://www.garlic.com/~lynn/2006b.html#email800310
http://www.garlic.com/~lynn/2006t.html#email800327
http://www.garlic.com/~lynn/2006f.html#email800404
http://www.garlic.com/~lynn/2007b.html#email800408

lots of past posts on washington (code name for xt370)
http://www.garlic.com/~lynn/94.html#42 bloat
http://www.garlic.com/~lynn/96.html#23 Old IBM's
http://www.garlic.com/~lynn/2000.html#5 IBM XT/370 and AT/370 (was Re: Computer of the century)
http://www.garlic.com/~lynn/2000.html#29 Operating systems, guest and actual
http://www.garlic.com/~lynn/2000e.html#52 Why not an IBM zSeries workstation?
http://www.garlic.com/~lynn/2000e.html#55 Why not an IBM zSeries workstation?
http://www.garlic.com/~lynn/2001c.html#89 database (or b-tree) page sizes
http://www.garlic.com/~lynn/2001f.html#28 IBM's "VM for the PC" c.1984??
http://www.garlic.com/~lynn/2001i.html#19 Very CISC Instuctions (Was: why the machine word size ...)
http://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
http://www.garlic.com/~lynn/2001k.html#24 HP Compaq merger, here we go again.
http://www.garlic.com/~lynn/2002b.html#43 IBM 5100 [Was: First DESKTOP Unix Box?]
http://www.garlic.com/~lynn/2002b.html#45 IBM 5100 [Was: First DESKTOP Unix Box?]
http://www.garlic.com/~lynn/2002d.html#4 IBM Mainframe at home
http://www.garlic.com/~lynn/2002i.html#76 HONE was .. Hercules and System/390 - do we need it?
http://www.garlic.com/~lynn/2003f.html#8 Alpha performance, why?
http://www.garlic.com/~lynn/2003h.html#40 IBM system 370
http://www.garlic.com/~lynn/2004h.html#29 BLKSIZE question
http://www.garlic.com/~lynn/2004m.html#7 Whatever happened to IBM's VM PC software?
http://www.garlic.com/~lynn/2004m.html#10 Whatever happened to IBM's VM PC software?
http://www.garlic.com/~lynn/2004m.html#11 Whatever happened to IBM's VM PC software?
http://www.garlic.com/~lynn/2004m.html#13 Whatever happened to IBM's VM PC software?
http://www.garlic.com/~lynn/2005f.html#6 Where should the type information be: in tags and descriptors
http://www.garlic.com/~lynn/2005f.html#10 Where should the type information be: in tags and descriptors
http://www.garlic.com/~lynn/2006.html#10 How to restore VMFPLC dumped files on z/VM V5.1
http://www.garlic.com/~lynn/2006f.html#2 using 3390 mod-9s
http://www.garlic.com/~lynn/2006j.html#36 The Pankian Metaphor
http://www.garlic.com/~lynn/2006m.html#56 DCSS
http://www.garlic.com/~lynn/2006n.html#5 Not Your Dad's Mainframe: Little Iron
http://www.garlic.com/~lynn/2006n.html#14 RCA Spectra 70/25: Another Mystery Computer?
http://www.garlic.com/~lynn/2006y.html#29 "The Elements of Programming Style"
http://www.garlic.com/~lynn/2006y.html#30 "The Elements of Programming Style"
http://www.garlic.com/~lynn/2007.html#1 "The Elements of Programming Style"
http://www.garlic.com/~lynn/2007d.html#7 Has anyone ever used self-modifying microcode? Would it even be useful?
http://www.garlic.com/~lynn/2007d.html#25 modern paging

GCN at 25: VAX for the memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: GCN at 25: VAX for the memory
Newsgroups: alt.folklore.computers
Date: Sat, 24 Feb 2007 11:41:59 -0700
GCN at 25: VAX for the memory
http://www.gcn.com/print/26_04/43159-1.html

from above:
With a 32-bit architecture and the use of virtual memory to manage its address space, the VAX (which stood for Virtual Address eXtension) was a pioneer when the first one was released in 1977. By 1988, it had become so common at agencies that GCN devoted five pages to a Buyers Guide (11 vendors, 124 products) on add-in memory for 'your VAX computer.' Yes, you too could add 8 MB for a mere $3,395, give or take a couple hundred depending on the product.

... snip ...

dare i mention the 360/67 shipping ten years earlier? ... it supported 32bit (and 24bit) virtual memory addressing; slight topic drift:
http://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

other slight topic drift
http://www.garlic.com/~lynn/2007e.html#1 Designing database tables for performance

A way to speed up level 1 caches

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A way to speed up level 1 caches
Newsgroups: comp.arch
Date: Sat, 24 Feb 2007 16:20:10 -0700
Maynard Handley <name99@name99.org> writes:
Each L1 cache is now two identical such caches. Which one is used is gated by whether the system is in user or system mode. This is a simple extra bit line, so doesn't hurt your cycle time, unlike doubling associativity to double the cache size. Now we get the system material segregated in its world, the user material segregated in its world, and the two aren't stumbling over each other.

something similar was done in the 168 (and follow-ons). most of the stuff for virtual memory ... at least in the early virtual memory systems ... started at address zero and grew upwards. starting at least with the 168, they chose the "8mbyte" virtual address bit as one of the index bits. the issue was that the mainline batch system of the time was MVS with 24bit (16mbyte) virtual address spaces ... with the kernel occupying the first 8mbytes of every (application) virtual address space ... nominally leaving 8mbytes in each address space for application code.

Purposefully choosing the 8mbyte virtual address bit as one of the index bits resulted in partitioning the entries half for kernel code and half for application code. there were some complaints from other operating systems that didn't organize code that way.
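the partitioning effect of that index-bit choice can be sketched as follows (made-up geometry purely for illustration ... not the 168's actual TLB or cache organization):

```c
#include <stdint.h>

/* illustrative set-index computation (invented geometry): 64-byte
   lines, 128 sets.  the top index bit is taken from the 8-mbyte
   virtual address bit (1 << 23) rather than the next line-address bit,
   so MVS kernel addresses (below 8mbytes) map to sets 0..63 and
   application addresses (8mbytes and up) map to sets 64..127 */
static unsigned set_index(uint32_t vaddr)
{
    unsigned low6 = (vaddr >> 6) & 0x3F;   /* six ordinary index bits */
    unsigned kbit = (vaddr >> 23) & 1;     /* the 8-mbyte bit */
    return (kbit << 6) | low6;
}
```

with this indexing, kernel and application entries can never compete for the same set ... which is exactly the static half-and-half split described above.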

Actually, for the 168 ... where the cache was real-address indexed ... this only applied to TLB entries ... not the processor cache. virtual-address indexing of cache entries didn't appear until the 3090 ... although that was a dual-index scheme, discussed in an old email
http://www.garlic.com/~lynn/2003j.html#email831118

in general, all the studies i've seen or done ... all other things being equal ... show a global cache strategy to be more efficient than a partitioned cache strategy. the exceptions are where the processing streams are ongoing and distinct ... like strategies involving separate instruction and data caches. the issue is that execution patterns can be quite bursty ... and the downside of a partitioned cache strategy is a reduced cache hit rate, because code that is heavily executing at a specific moment could have made use of the extra cache that isn't available under partitioning. lots of past posts discussing replacement algorithms ... frequently contrasting global LRU (unpartitioned) vis-a-vis local LRU (partitioned) strategies.
http://www.garlic.com/~lynn/subtopic.html#wsclock
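the effect can be illustrated with a toy simulation (my own invented sizes and reference pattern, not numbers from any actual study): a bursty, user-only reference stream of eight lines cycled repeatedly, run against either one global 8-entry LRU cache or the user's half of a statically split 4+4 cache.

```c
#include <stdint.h>

/* toy fully-associative LRU cache, tracked with last-use timestamps
   (illustration only) */
#define MAX_WAYS 16

struct lru { int ways, used; uint32_t tag[MAX_WAYS]; long stamp[MAX_WAYS]; };

static int lru_ref(struct lru *c, uint32_t tag, long now)
{
    for (int i = 0; i < c->used; i++)
        if (c->tag[i] == tag) { c->stamp[i] = now; return 1; }  /* hit */
    int victim = 0;
    if (c->used < c->ways)
        victim = c->used++;                 /* fill an empty way first */
    else
        for (int i = 1; i < c->ways; i++)
            if (c->stamp[i] < c->stamp[victim]) victim = i;     /* evict LRU */
    c->tag[victim] = tag;
    c->stamp[victim] = now;
    return 0;                                                   /* miss */
}

/* a user-only burst: ten passes over eight distinct lines, kernel idle */
static long burst_hits(int ways)
{
    struct lru c = { .ways = ways };
    long now = 0, hits = 0;
    for (int pass = 0; pass < 10; pass++)
        for (uint32_t t = 0; t < 8; t++, now++)
            hits += lru_ref(&c, t, now);
    return hits;
}
```

burst_hits(8) scores 72 hits (everything after the first pass), while burst_hits(4) ... the user half of a fixed 4+4 split, the idle kernel half unborrowable ... cycles eight lines through four entries and never hits at all. that is the "bursty" downside of static partitioning in miniature.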

an issue is whether kernel/application execution patterns are intermixed frequently enough that eliminating their interaction provides some benefit. another way of dealing with an aspect of this, which I played with in the mid-70s, was changing the dispatching of applications to run disabled for asynchronous i/o interrupts (for limited periods) ... as a mechanism for dealing with random (kernel) processing of asynchronous interrupts impacting application cache hit ratios. there were some tuning trade-offs between the responsiveness lost by delaying interrupt processing and the improved cache hit ratios. in large systems, there were even situations where delayed interrupt processing tended to batch the handling of several interrupts ... improving the kernel cache hit ratio and the application cache hit ratio at the same time; slight delays even improved the timeliness of overall interrupt processing, since both kernel and application thruput improved. basically it made kernel and application cache use take turns ... but if there was no kernel processing to be done, there was no turn to take, and application code could continue to use the whole cache. in that sense, it was more dynamically adaptive than any fixed physical partitioning implementation.

aka it only showed an improvement if they were actually stumbling over each other. if they weren't actually stumbling over each other ... then there was no improvement ... but there was also no penalty.
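
The "turn taking" point above can be illustrated with a toy LRU simulation (not modeling any particular hardware; the cache sizes, working sets, and burst lengths are made-up numbers). With a bursty workload whose combined working set fits a shared cache, a fixed partition can thrash while the global cache does not:

```python
from collections import OrderedDict

class LRUCache:
    """Tiny LRU cache model: holds at most `size` distinct line addresses."""
    def __init__(self, size):
        self.size = size
        self.lines = OrderedDict()
        self.hits = self.misses = 0

    def touch(self, addr):
        if addr in self.lines:
            self.lines.move_to_end(addr)    # mark most-recently-used
            self.hits += 1
        else:
            self.misses += 1
            if len(self.lines) >= self.size:
                self.lines.popitem(last=False)  # evict least-recently-used
            self.lines[addr] = True

# Bursty trace: a long application burst over 12 distinct lines, then a
# short kernel burst over 4 distinct lines, repeated.
app_burst  = [("app", a) for a in range(12)] * 3
kern_burst = [("krn", a) for a in range(4)] * 3
trace = (app_burst + kern_burst) * 10

# Global strategy: one 16-line cache shared by both.
shared = LRUCache(16)
for ref in trace:
    shared.touch(ref)

# Partitioned strategy: fixed 8/8 split -- the app burst no longer fits,
# and cyclic access over 12 lines with an 8-line LRU misses every time.
app_part, krn_part = LRUCache(8), LRUCache(8)
for kind, a in trace:
    (app_part if kind == "app" else krn_part).touch((kind, a))

print("global hit rate:     ", shared.hits / len(trace))
print("partitioned hit rate:", (app_part.hits + krn_part.hits) / len(trace))
```

The global cache misses only on the initial 16 fills; the partitioned application side degenerates to the classic LRU cyclic worst case and never hits.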

for other drift ... I have written a fairly large application ... and for some amount of processing ... the application runs nearly twice as fast on 1.7ghz pentium-M with 2mbyte processor cache as it does on a 3.4ghz pentium-4 with 512k processor cache (i.e. the reverse of what one might expect purely based on processor cycle time).

ISA Support for Multithreading

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ISA Support for Multithreading
Newsgroups: comp.arch,comp.arch.embedded
Date: Sat, 24 Feb 2007 23:03:10 -0700
"Chris Thomasson" <cristom@comcast.net> writes:
Just to clarify, DWCAS == CDS... BTW... Do you know why the doubleword version of CAS was created? Did its ability to implement many different kinds of lock-free algorithms influence any decisions?

previous post:
http://www.garlic.com/~lynn/2007d.html#61 ISA Support for Multithreading
http://www.garlic.com/~lynn/2007r.html#0 ISA Support for Multithreading

see the referenced programming notes for CDS example

example for "free-pool manipulation" using CDS
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/A.6.5?SHELF=DZ9ZBK03&DT=20040504121320

bitsavers has a copy of the 370 principles of operation, ga22-7000-4, 1sep75
http://www.bitsavers.org/pdf/ibm/370/princOps/GA22-7000-4_370_Principles_Of_Operation_Sep75.pdf

which has a basically identical CDS free-pool manipulation description
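
One answer to the "why doubleword" question above is the ABA problem in exactly this free-pool case: the programming note pairs the list head with a counter in one doubleword, so a pop that lost the processor while a concurrent pop/push put the same head value back will fail and retry. A toy sketch of the technique (a Python lock stands in for the hardware-atomic CDS; all names illustrative):

```python
import threading

class FreePool:
    """Toy model of the 370 CDS free-pool technique: the list header is a
    (head, counter) pair updated atomically; the counter bump defeats the
    ABA case where the head value alone would compare equal again."""
    def __init__(self, nodes):
        self._lock = threading.Lock()   # stand-in for the atomic CDS
        self.head = None
        self.counter = 0
        self.next = {}                  # node -> next node on the free list
        for n in reversed(nodes):
            self.next[n] = self.head
            self.head = n

    def cds(self, old_head, old_counter, new_head, new_counter):
        """Compare-double-and-swap on the (head, counter) 'doubleword'."""
        with self._lock:
            if (self.head, self.counter) == (old_head, old_counter):
                self.head, self.counter = new_head, new_counter
                return True
            return False

    def pop(self):
        while True:
            old_head, old_counter = self.head, self.counter
            if old_head is None:
                return None
            # even if head is the same node again, an intervening pop/push
            # changed the counter, so this CDS fails and we retry
            if self.cds(old_head, old_counter,
                        self.next[old_head], old_counter + 1):
                return old_head

    def push(self, node):
        while True:
            old_head, old_counter = self.head, self.counter
            self.next[node] = old_head
            if self.cds(old_head, old_counter, node, old_counter + 1):
                return

pool = FreePool(["a", "b", "c"])
print(pool.pop())   # a
```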

The Genealogy of the IBM PC

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Genealogy of the IBM PC
Newsgroups: alt.folklore.computers
Date: Sun, 25 Feb 2007 09:38:30 -0700
"Quadibloc" <jsavard@ecn.ab.ca> writes:
Making the IBM an open system didn't *necessarily* invite cloning; IBM did *not* expect the BIOS to be imitated quite as quickly as it was.

re:
http://www.garlic.com/~lynn/2007e.html#3 The Genealogy of the IBM PC
http://www.garlic.com/~lynn/2007e.html#4 The Genealogy of the IBM PC

if it was being thot of as personal computing market ... it would probably be unlikely that it would be thot of as a several billion per annum cloning opportunity ... however if it was thot of as a 3270 dumb terminal with lots of bells and whistles ... then the whole attitude would be changed (cloners would see a significantly different business case).

clones were a well established part of the mainframe market all thru the 70s ... it was tens of billions per annum. i've even mentioned that there was some write-up blaming a project that i was part of as an undergraduate in the 60s (with 3 others) for starting the whole thing. lots of past posts about doing a mainframe controller clone as an undergraduate in the 60s.
http://www.garlic.com/~lynn/subtopic.html#360pcm

there was sort of expectation that any mainframe thing would be cloned within six months of product availability ... even things enormously more complex than BIOS or the whole PC.

there was a theft of trade-secret case that i've mentioned before, circa 1980. damages were claimed for a couple billion ... based on a six month difference in clone revenue ... i.e. being able to market a clone the same day as mainframe product availability as opposed to starting to ship a clone six months later.

for a little drift ... one of the things that the judge brought up in the case ... given that it was worth at least a couple billion ... could it be demonstrated that the security (to prevent such theft) was proportional to the value of what was stolen.

misc. past posts industrial espionage and attractive nuisance (if you left a couple billion laying around ... people couldn't be blamed for trying to walk away with it)
http://www.garlic.com/~lynn/2001d.html#42 IBM was/is: Imitation...
http://www.garlic.com/~lynn/2005f.html#60 Where should the type information be: in tags and descriptors
http://www.garlic.com/~lynn/2005r.html#7 DDJ Article on "Secure" Dongle
http://www.garlic.com/~lynn/2006q.html#36 Was FORTRAN buggy?
http://www.garlic.com/~lynn/2006r.html#29 Intel abandons USEnet news

A way to speed up level 1 caches

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A way to speed up level 1 caches
Newsgroups: comp.arch
Date: Sun, 25 Feb 2007 10:08:16 -0700
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
We had a performance problem, and I analysed IBM's cache measurements on the 370/165. The data were inadequate to draw many conclusions, but that aspect was one where I felt that there was a possibility of improvement. The cache hit rate in system code was low enough that we might have got better performance overall if it had bypassed the cache entirely.

re:
http://www.garlic.com/~lynn/2007e.html#7 A way to speed up level 1 caches

one might conjecture that entry into the kernel was somewhat unpredictable and random ... especially for things like asynchronous i/o interrupts ... and the processing was straight-thru the kernel with little "re-use" (very little locality of use). then the main effect of kernel execution on cache operation would be to flush application stuff out of the cache.

a much more glaring example of something similar was when the 3880-13 track cache was introduced in the early 80s. they claimed a 90% hit rate for typical i/o operations to 3380. my counter claim (which i took some heat for pointing out) was that sequentially reading a 3380 ... which typically had ten records per track ... would result in "miss" on the read for the first record on a track ... and then a "hit" for reading the remaining nine records on a track (90 percent cache hit rate). I claimed that a much simpler full-track buffer would achieve the same benefit ... since no records were actually being "re-used" once they were in the cache.
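
The 90% figure above is pure record-count arithmetic, not evidence of re-use; a sketch, using the ten-records-per-track number from the text:

```python
records_per_track = 10    # typical 3380 number cited above
tracks_read = 100         # any sequential scan

# Per track: the first record "misses" (the track gets staged into the
# cache), the remaining nine "hit" the just-staged track.
misses = tracks_read * 1
hits = tracks_read * (records_per_track - 1)
hit_rate = hits / (hits + misses)
print(hit_rate)           # 0.9 -- with zero actual record re-use

# A simple full-track buffer behaves identically for this workload:
# one disk access per track, the next nine records served from buffer.
disk_accesses_with_cache = misses
disk_accesses_with_track_buffer = tracks_read
assert disk_accesses_with_cache == disk_accesses_with_track_buffer
```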

my observation about running applications disabled for i/o interrupts ... it tended to somewhat minimize the randomness of kernel flushing application stuff out of cache ... and also created a small increase in the probability of kernel cache re-use if multiple pending i/o interrupts were processed in batch/burst.

... aka caches (and lru replacement algorithms) have assumption that something used in recent history is the most likely to be (re-)used in the future. there are significant kinds of patterns ... like various kinds of sequential and/or straight-through processing that may not correspond to that.

A way to speed up level 1 caches

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A way to speed up level 1 caches
Newsgroups: comp.arch
Date: Sun, 25 Feb 2007 13:17:38 -0700
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Precisely. Whether that was so, I can't say. But it matched the data I had, and is definitely a plausible hypothesis. It might well be true today.

re:
http://www.garlic.com/~lynn/2007e.html#7 A way to speed up level 1 caches
http://www.garlic.com/~lynn/2007e.html#10 A way to speed up level 1 caches

one of the places where partitioned caches show benefit ... is limiting the damage that some operation that has sequential/serial, non-reuse pattern activity might have on other activity using the cache (it wasn't that it improved good cache activity ... it just bounded the effects of bad cache activity).

another "kernel" example has been tcp/ip stack kernel implementations. some of the implementations have had large number of buffer copy operations ... and w/o special non-caching support for data copies ... easily wipes out any useful stuff in the cache(s) (especially when dealing with larger buffer sizes). the processor cycles involved in buffer copy cache effects can turn out to be larger than the processor cycles involved in direct instruction execution.

one of the suggested benefits for posix asynchronous i/o ... besides enabling various multi-threaded operations ... was being able to set up direct i/o transfer into/out-of application space buffers ... avoiding all buffer copy operations. this somewhat wanders into the recent thread on multithreading
http://www.garlic.com/~lynn/2007d.html#61 ISA Support for Multithreading
http://www.garlic.com/~lynn/2007e.html#0 ISA Support for Multithreading
http://www.garlic.com/~lynn/2007e.html#8 ISA Support for Multithreading

Securing financial transactions a high priority for 2007

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Securing financial transactions a high priority for 2007
Newsgroups: alt.folklore.computers
Date: Sun, 25 Feb 2007 14:02:03 -0700
jmfbahciv writes:
Oh! Replacement. The reports didn't say that; the reports said that a chip or something had been added to the original device. It just doesn't make any sense given the reported requirement that only movable scanners were affected.

it could have been "an" original device ... as opposed to "the" original device. it is somewhat easier to take a device apart and insert skimmer/recorder and some sort of wireless transmitter ... offsite ... and then swap devices. they could both be original devices ... the issue is whether the actual compromise takes place in real-time or offsite at some other location (and only the swap has to be performed in real-time). Many of the devices have some degree of armoring ... making real-time compromise somewhat more problematical.

However, if the armoring was minimal and there was trivial access permitting insertion of recorder/transmitter in real time ... bolting the machine might make such access more difficult (as opposed to being countermeasure to swapping "original" devices ... where one has been compromised offsite and swapped for one that hasn't "yet" been modified).

re:
http://www.garlic.com/~lynn/2007e.html#2 Securing financial transactions a high priority for 2007

for other drift ...

Fraud, embezzling and financial crime
http://business.scotsman.com/topics.cfm?tid=946
Chip and pin fails to halt card fraud rise
http://edinburghnews.scotsman.com/edinburgh.cfm?id=291732007
Card-skim criminals have police stumped
http://www.portsmouthtoday.co.uk/ViewArticle.aspx?ArticleID=2075455&SectionID=455
Plans to cut card fraud 'too complex'
http://www.itnews.com.au/newsstory.aspx?CIaNID=46197&src=site-marq
Plans to cut card fraud 'too complex'
http://www.itweek.co.uk/vnunet/news/2183738/plans-cut-card-fraud-slammed
Plans to cut card fraud 'too complex'
http://www.whatpc.co.uk/vnunet/news/2183738/plans-cut-card-fraud-slammed
Warnings over 'complicated' anti-fraud card systems
http://www.tuvps.co.uk/news/articles/warnings-over-complicated-anti-fraud-card-systems-18065845.asp

a possible claim from the x9a10 financial standard working group perspective was that there was a failure to perform end-to-end threat analysis. in the mid-90s time-frame that some of these things were being invented ... the x9a10 financial standard working group had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments ... resulting in x9.59 standard
http://www.garlic.com/~lynn/x959.html#x959

rather than coming up with a variety of piece-meal point solutions (none of which adequately addressed all the threats), provide a comprehensive end-to-end solution.

time spent/day on a computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: time spent/day on a computer
Newsgroups: alt.folklore.computers
Date: Mon, 26 Feb 2007 09:04:06 -0700
Roger Blake <rogblake10@iname10.com> writes:
I've been working in data processing for about 30 years now, which makes me a newbie compared to many here. I've already reached the point where I don't code any more, aside from shell scripts and small utilities for my own use. (Have transitioned from software development to network administration and maintenance.)

i got terminal at home in mar70 and have had online connectivity at home ever since ... of course back then it was only 134.5 baud.

recent reference to programming
http://www.garlic.com/~lynn/2007d.html#25 modern paging
http://www.garlic.com/~lynn/2007e.html#7 A way to speed up level 1 caches

Cycles per ASM instruction

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Cycles per ASM instruction
Newsgroups: comp.lang.asm370
Date: Mon, 26 Feb 2007 09:44:55 -0700
Sparky Spartacus <Sparky@universalexports.org> writes:
Thus taking an enormous hit for using ISAM, a true hog.

ISAM sprang out of the same philosophy as the design of VTOC and PDS ... which used multi-track search to find members; basically I/O resources were traded off to compensate for real memory resource constraints ... a serious issue in the 60s.

The trade-off of using I/O resources to compensate for constrained real memory continued for VTOC/PDS long after the trade-off had shifted by at least the mid-70s.

Basically the index for "finding" what you wanted was stored on the disk ... and channel programs were built that could scan the on-disk structures looking for target information. ISAM had structures that were more complex than the simple linear search used for VTOC/PDS ... being able to fetch information from disk that would change the channel (i/o) program "on the fly" (somewhat the equivalent of self-modifying instruction streams ... which were frequently done in the same period to compensate for constrained real storage).
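
The "changing the channel program on the fly" pattern above can be sketched with a toy channel-program interpreter (names and structure are illustrative, not actual 360 CCW opcodes or formats): a READ fetches an index record into memory, and a later SEEK takes its argument from that just-written memory.

```python
# Toy ISAM-style channel program: data read by one CCW becomes the seek
# argument of a later CCW in the same running program.
memory = bytearray(64)
disk = {
    5: {"index": (12).to_bytes(2, "big")},   # index record: data is on cyl 12
    12: {"data": b"payload"},
}

def run_channel_program(ccws):
    """Execute CCWs in order; SEEK reads its argument from memory at the
    time it executes, so an earlier READ can redirect the rest of the
    program."""
    arm = None
    out = []
    for op, addr in ccws:
        if op == "READ_INDEX":
            memory[addr:addr + 2] = disk[5]["index"]
        elif op == "SEEK":
            arm = int.from_bytes(memory[addr:addr + 2], "big")
        elif op == "READ_DATA":
            out.append(disk[arm]["data"])
    return out

# the seek argument at memory[0:2] is zero when the program is built;
# the first CCW fills it in while the program is already running
program = [("READ_INDEX", 0), ("SEEK", 0), ("READ_DATA", 0)]
print(run_channel_program(program))   # [b'payload']
```

A translator that copied the seek argument into a shadow location before execution would still see the original zero, which is the essence of the virtualization problem this kind of channel program creates.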

In the case of the VTOC/PDS implementation ... the profligate "burning" of i/o resources continued long after the primary system bottleneck had shifted from being real storage constrained to being I/O resource constrained ... and frequently could represent an enormous system bottleneck ... collected past posts mentioning CKD and VTOC/PDS implementation representing a serious system thruput bottleneck (as system configurations changed from being primarily real storage constrained to being primarily i/o constrained)
http://www.garlic.com/~lynn/submain.html#dasd

in the early 80s i got into a little trouble with the disk development organization ... in the late 70s ... I started making statements about system configurations shifting from being memory constrained to being disk constrained. Part of this was noting that disk relative system thruput had declined by better than an order of magnitude over a period of years (cpu & memory got 50 times bigger/faster, while disks only got 3-5 times faster) ... as a result there was a significant requirement to change implementation paradigms to use memory to compensate for disk thruput (caching and other paradigm changes). the disk division assigned their performance organization to refute my statements. However after several weeks, the group came back and noted that I had actually somewhat understated the situation. a few past posts mentioning the subject:
http://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
http://www.garlic.com/~lynn/94.html#43 Bloat, elegance, simplicity and other irrelevant concepts
http://www.garlic.com/~lynn/94.html#55 How Do the Old Mainframes Compare to Today's Micros?
http://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
http://www.garlic.com/~lynn/98.html#46 The god old days(???)
http://www.garlic.com/~lynn/99.html#4 IBM S/360
http://www.garlic.com/~lynn/2001d.html#66 Pentium 4 Prefetch engine?
http://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
http://www.garlic.com/~lynn/2001f.html#68 Q: Merced a flop or not?
http://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
http://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
http://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
http://www.garlic.com/~lynn/2002.html#5 index searching
http://www.garlic.com/~lynn/2002b.html#11 Microcode? (& index searching)
http://www.garlic.com/~lynn/2002b.html#20 index searching
http://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
http://www.garlic.com/~lynn/2002e.html#9 What are some impressive page rates?
http://www.garlic.com/~lynn/2002i.html#16 AS/400 and MVS - clarification please
http://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
http://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
http://www.garlic.com/~lynn/2005k.html#53 Performance and Capacity Planning
http://www.garlic.com/~lynn/2006m.html#32 Old Hashing Routine
http://www.garlic.com/~lynn/2006o.html#27 oops
http://www.garlic.com/~lynn/2006x.html#13 The Future of CPUs: What's After Multi-Core?
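
The order-of-magnitude decline claimed above is simple arithmetic on the stated growth factors:

```python
# cpu & memory got ~50 times bigger/faster while disks only got 3-5
# times faster, so disk thruput *relative* to the rest of the system
# declined by better than an order of magnitude.
cpu_memory_growth = 50
disk_growth_low, disk_growth_high = 3, 5

relative_decline_low = cpu_memory_growth / disk_growth_high   # 10x
relative_decline_high = cpu_memory_growth / disk_growth_low   # ~16.7x
print(relative_decline_low, relative_decline_high)
```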

For a slightly different drift ... some number of posts mentioning the original relational/sql implementation, System/R
http://www.garlic.com/~lynn/submain.html#systemr

in the system/r time-frame there were some number of arguments about benefits/trade-offs between the IMS people in STL and the System/R people in SJR. IMS used direct record pointers embedded as part of the information. System/R used an index structure built "under the covers" to abstract away direct record pointers from explicit representation as part of the data. IMS pointed out that the System/R index structure typically doubled the physical disk requirements ... and significantly increased the number of disk I/Os (as part of processing the index). System/R people pointed out the enormous manual costs of maintaining the exposed record pointers. effectively in the 80s, with ever increasing real storage availability, it was possible to cache the majority of the RDBMS index ... eliminating the costly disk I/O overhead processing the index ... and the significant increase in disk capacities and significant decrease in disk price/mbyte somewhat made the issue of the size of the on-disk index structure moot. This changed the trade-off of increased System/R computing resource requirements vis-a-vis the significant manual resource requirements for IMS ... making RDBMS implementations a lot more attractive for many people (although there are claims that still today there may be more mbytes stored in IMS databases than RDBMS databases).
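
The maintenance-cost side of the trade-off above can be sketched with toy data structures (purely illustrative; record numbers and keys are made up). With embedded direct pointers, relocating a record means finding and patching every pointer to it; with an index, only the one index entry changes:

```python
# "IMS-style": records embed direct record pointers to related records.
ims_records = {
    1001: {"name": "order-1", "customer_ptr": 2002},  # direct pointer
    2002: {"name": "cust-A"},
}

# "System/R-style": data carries keys; a separate index maps key ->
# record location.  The index costs extra space and I/O ...
sysr_rows = {1: {"name": "order-1", "customer_key": "A"}}
sysr_index = {"A": 2}
sysr_customers = {2: {"name": "cust-A"}}

# ... but relocating a record touches only the index, not the data:
sysr_customers[9] = sysr_customers.pop(2)
sysr_index["A"] = 9                     # one index entry updated

# with embedded pointers, every record pointing at the moved record
# has to be found and patched by hand:
ims_records[9999] = ims_records.pop(2002)
for rec in ims_records.values():
    if rec.get("customer_ptr") == 2002:
        rec["customer_ptr"] = 9999

# both access paths still resolve after the moves:
key = sysr_rows[1]["customer_key"]
print(sysr_customers[sysr_index[key]]["name"])                      # cust-A
print(ims_records[ims_records[1001]["customer_ptr"]]["name"])       # cust-A
```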

misc. past posts mentioning the IMS/RDBMS discussions
http://www.garlic.com/~lynn/2004e.html#22 Pre-relational, post-relational, 1968 CODASYL "Survey of Data Base Systems"
http://www.garlic.com/~lynn/2004o.html#67 Relational vs network vs hierarchic databases
http://www.garlic.com/~lynn/2004q.html#23 1GB Tables as Classes, or Tables as Types, and all that
http://www.garlic.com/~lynn/2004q.html#79 Athlon cache question
http://www.garlic.com/~lynn/2005b.html#1 Foreign key in Oracle Sql
http://www.garlic.com/~lynn/2006m.html#32 Old Hashing Routine
http://www.garlic.com/~lynn/2006w.html#27 Generalised approach to storing address details

for other drift, a couple other posts about that period
http://www.garlic.com/~lynn/2007.html#1 "The Elements of Programming Style"
http://www.garlic.com/~lynn/2007d.html#17 Jim Gray Is Missing

for other drift ... recent thread on self-modifying instruction streams
http://www.garlic.com/~lynn/2007d.html#1 Has anyone ever used self-modifying microcode? Would it even be useful?
http://www.garlic.com/~lynn/2007d.html#3 Has anyone ever used self-modifying microcode? Would it even be useful?
http://www.garlic.com/~lynn/2007d.html#7 Has anyone ever used self-modifying microcode? Would it even be useful?
http://www.garlic.com/~lynn/2007d.html#9 Has anyone ever used self-modifying microcode? Would it even be useful?
http://www.garlic.com/~lynn/2007d.html#46 Has anyone ever used self-modifying microcode? Would it even be useful?

The Genealogy of the IBM PC

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Genealogy of the IBM PC
Newsgroups: alt.folklore.computers
Date: Mon, 26 Feb 2007 10:57:52 -0700
Walter Bushell <proto@panix.com> writes:
This was before glass TTY's, or at least when they became widely available, early 70's. I didn't become aware of glass TTY's until the mid 80s. Too new fangled for me, you have to relearn all your habits, which I had to do anyway for the leap to micros.

some old email mentioning topaz/3101 ... it replaced a cdi miniterm (thermal paper) for home terminal
http://www.garlic.com/~lynn/2006y.html#email791011b,
http://www.garlic.com/~lynn/2006y.html#email791011,
http://www.garlic.com/~lynn/2006y.html#email800301,
http://www.garlic.com/~lynn/2006y.html#email800311,
http://www.garlic.com/~lynn/2006y.html#email800312,
http://www.garlic.com/~lynn/2006y.html#email800314,
http://www.garlic.com/~lynn/2006y.html#email810820,

and general past posts mentioning 3101
http://www.garlic.com/~lynn/99.html#69 System/1 ?
http://www.garlic.com/~lynn/2000g.html#17 IBM's mess (was: Re: What the hell is an MSX?)
http://www.garlic.com/~lynn/2001b.html#12 Now early Arpanet security
http://www.garlic.com/~lynn/2001b.html#13 Now early Arpanet security
http://www.garlic.com/~lynn/2001h.html#32 Wanted: pictures of green-screen text
http://www.garlic.com/~lynn/2001m.html#1 ASR33/35 Controls
http://www.garlic.com/~lynn/2001m.html#54 Author seeks help - net in 1981
http://www.garlic.com/~lynn/2003c.html#34 diffence between itanium and alpha
http://www.garlic.com/~lynn/2003c.html#35 diffence between itanium and alpha
http://www.garlic.com/~lynn/2003n.html#7 3270 terminal keyboard??
http://www.garlic.com/~lynn/2004e.html#8 were dumb terminals actually so dumb???
http://www.garlic.com/~lynn/2005p.html#28 Canon Cat for Sale
http://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2006n.html#56 AT&T Labs vs. Google Labs - R&D History
http://www.garlic.com/~lynn/2006y.html#0 Why so little parallelism?
http://www.garlic.com/~lynn/2006y.html#4 Why so little parallelism?
http://www.garlic.com/~lynn/2006y.html#24 "The Elements of Programming Style"
http://www.garlic.com/~lynn/2006y.html#31 "The Elements of Programming Style"

Attractive Alternatives to Mainframes

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Attractive Alternatives to Mainframes
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 26 Feb 2007 12:19:40 -0700
Dave Kopischke wrote:
Because I want to know if it worked or not. As you imply, it will be next to impossible to get an objective answer, but SIAC is a high visibility company and process within the industry. They won't fail without someone knowing it.

when we were doing ha/cmp ... we talked quite a bit to SIAC, which was using a number of tandem computers at the time ... misc posts mentioning ha/cmp
http://www.garlic.com/~lynn/subtopic.html#hacmp

in that period, i had been asked to write part of the corporate continuous availability strategy document (based on ha/cmp work we were doing in geographic survivability) ... however both Rochester and POK non-concurred with what I had written ... and it got pulled (at the time, they didn't have any offerings that could meet the criteria); we had coined the terms disaster survivability (as an alternative to disaster/recover) and geographic survivability ... misc. posts on geographic operation and continuous availability
http://www.garlic.com/~lynn/submain.html#available

later we also talked to one of the big financial transaction operations running IMS hot-standby in a geographic separated operation ... they attributed their 100 percent availability over a period of several years to 1) IMS hot-standby and 2) automated operator (i.e. people make mistakes)

for a little drift ... my wife had been con'ed into serving a stint in POK in charge of loosely-coupled architecture where she originated peer-coupled shared data architecture
http://www.garlic.com/~lynn/submain.html#shareddata

except for IMS hot-standby, the architecture saw very little uptake until sysplex (one of the reasons she didn't stay in that position for very long).

for small amount of other IMS topic drift ... in this old email
http://www.garlic.com/~lynn/2007.html#email801016

in this post
http://www.garlic.com/~lynn/2007.html#1 "The Elements of Programming Style"

for other topic drift, collection of email discussing ha/cmp scale-up
http://www.garlic.com/~lynn/lhwemail.html#medusa

A way to speed up level 1 caches

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A way to speed up level 1 caches
Newsgroups: comp.arch
Date: Mon, 26 Feb 2007 12:33:08 -0700
"Stephen Sprunk" <stephen@sprunk.org> writes:
True. Disk data should only arrive in the amount you ask for, when you ask for it. Network data can arrive in any size and at any time. However, on POSIX systems the two look about the same once you've created an fd; that's one of the downsides to UNIX's "everything's a file" philosophy.

cms had similar problem running under cp67 in the mid-60s ... basically cp67 kernel scanned the channel programs making a "shadow" version (with real addresses) and fixing the appropriate pages ... and the "shadow" channel program was what actually got executed. later in the early 70s, I did an enhancement for the cms filesystem that used page mapping capability ... that eliminated most of that overhead (and the related page fixing) ... since the operations mapped directly to page transfers (which were aligned). misc. past posts mentioning implementing paged mapped filesystem in the early 70s
http://www.garlic.com/~lynn/submain.html#mmap

there was some speculation about moving the channel program translation outboard to the channels with the introduction of 370s (in the early 70s). there had been a recently issued patent on "virtual" channels (in the late 60s) ... however, production channel hardware never actually shipped.

with regard to tcp/ip and buffer overrun ... it is actually only a problem for incoming data ... you can still easily use a scatter/gather technique for outgoing data. then for incoming data ... you may choose to trade off between extremely high-performance transfers by forcing data into a relatively structured organization ... vis-a-vis lower performance and less structured transfers.
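
The outgoing/incoming asymmetry above shows up directly in the POSIX vector-I/O calls (sketched here on a POSIX system via Python's os module): a gather write hands the kernel a list of pieces with no copy into one contiguous buffer, while a scatter read needs its buffers sized before any data arrives.

```python
import os

# Gather-write: the kernel walks the buffer list, so header, body and
# trailer never get copied into one contiguous outgoing buffer.
r, w = os.pipe()
header, body, trailer = b"HDR:", b"some payload", b":END"
written = os.writev(w, [header, body, trailer])
print(written)                          # 20

# Incoming is the harder direction: buffers must be allocated and sized
# before the data shows up.  readv scatters into preallocated areas.
buf1, buf2 = bytearray(4), bytearray(16)
os.readv(r, [buf1, buf2])
print(bytes(buf1), bytes(buf2))
os.close(r)
os.close(w)
```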

Is computer history taught now?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is computer history taught now?
Newsgroups: alt.folklore.computers
Date: Tue, 27 Feb 2007 08:08:40 -0700
jmfbahciv writes:
Wouldn't there be a maximum number of levels before the organization stopped working efficiently? I think Lynn has talked about this and said that anything above four levels creates inertia.

i've posted about the traditional 6-7 reports/manager resulting in 14 levels for the size of the organization ... however, a new business where some number of people transferred in 78-79 tried to emulate the 14 level organization ... even tho there were only 2000 people; the result was a top heavy operation with half the organization having titles of "director" or above (that organization lasted until the mid-80s when it was dissolved):
http://www.garlic.com/~lynn/2000b.html#69 oddly portable machines
http://www.garlic.com/~lynn/2003j.html#76 1950s AT&T/IBM lack of collaboration?
http://www.garlic.com/~lynn/2004o.html#63 360 longevity, was RISCs too close to hardware?
http://www.garlic.com/~lynn/2006m.html#17 Why I use a Mac, anno 2006

I also have mentioned that (at least part of) the company in the very early 90s ... significantly flattened the organization to 12-14 reports/manager ... resulting in large number of mid-level managers/executives having to look for other opportunities ... some number possibly finding new positions in Marlborough
http://www.garlic.com/~lynn/2007b.html#29 was: How many 36-bit Unix ports in the old days?

there was somewhat unrelated recent news item:

Meetings make us dumber, study shows
http://www.msnbc.msn.com/id/17279961/

there was a different thread about threat of gov. action can create business decision paralysis ... since everything has to be handled at very high level constantly trying to anticipate gov. response
http://www.garlic.com/~lynn/2001j.html#33 Big black helicopters
http://www.garlic.com/~lynn/2001j.html#51 Big black helicopters

there is a related issue ... also from Boyd ... about pushing decisions as low as possible (i.e. it isn't so directly the number of levels ... it is how far away from any direct involvement are decisions being made):
http://www.garlic.com/~lynn/99.html#120 atomic History
http://www.garlic.com/~lynn/2001.html#29 Review of Steve McConnell's AFTER THE GOLD RUSH
http://www.garlic.com/~lynn/2001.html#30 Review of Steve McConnell's AFTER THE GOLD RUSH
http://www.garlic.com/~lynn/2001m.html#16 mainframe question
http://www.garlic.com/~lynn/2002d.html#36 Mainframers: Take back the light (spotlight, that is)
http://www.garlic.com/~lynn/2002d.html#38 Mainframers: Take back the light (spotlight, that is)
http://www.garlic.com/~lynn/2002q.html#33 Star Trek: TNG reference
http://www.garlic.com/~lynn/2002q.html#43 Star Trek: TNG reference
http://www.garlic.com/~lynn/2003h.html#51 employee motivation & executive compensation
http://www.garlic.com/~lynn/2003p.html#27 The BASIC Variations
http://www.garlic.com/~lynn/2004k.html#24 Timeless Classics of Software Engineering
http://www.garlic.com/~lynn/2004q.html#86 Organizations with two or more Managers
http://www.garlic.com/~lynn/2005e.html#3 Computerworld Article: Dress for Success?
http://www.garlic.com/~lynn/2006f.html#14 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#9 The Pankian Metaphor
http://www.garlic.com/~lynn/2006q.html#41 was change headers: The Fate of VM - was: Re: Baby MVS???
http://www.garlic.com/~lynn/2007b.html#37 Special characters in passwords was Re: RACF - Password rules
http://www.garlic.com/~lynn/2007c.html#25 Special characters in passwords was Re: RACF - Password rules

Cycles per ASM instruction

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Cycles per ASM instruction
Newsgroups: comp.lang.asm370
Date: Tue, 27 Feb 2007 08:34:51 -0700
"robertwessel2@yahoo.com" <robertwessel2@yahoo.com> writes:
When you turned the option on, CP would basically look at channel programs to see if they looked like the ISAM ones that were self modifying, and would then simulate them separately. This was none too secure, at least in early versions of VM, where real self-modification was allowed on a partially virtualized channel program.

ref:
http://www.garlic.com/~lynn/2007e.html#14 Cycles per ASM instruction

one of my first assignments after graduation and going to the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

was a week onsite at a customer account trying to get ISAM support working ... rewriting parts of CCWTRANS ... the module in cp67 that scanned the channel program and created the "shadow" ... which was what actually got executed.

most of ISAM's self-modifying wasn't actually modifying the CCWs but the seek arguments (arm position on the disk). CP67, in addition to translating the CCWs ... in part to handle virtual->real addresses ... recent thread in another n.g.
http://www.garlic.com/~lynn/2007e.html#17 A way to speed up level 1 caches

but also to convert the seek arguments for minidisks (i.e. minidisks where virtual cylinder zero was actually at some other cylinder position). Setting the "ISAM" option for a "full-pack" minidisk (i.e. all the virtual arm/seek positions were identical to the real arm/seek positions) ... would change CCWTRANS so that instead of using a translated/shadow seek argument ... it used the (untranslated) seek argument located in the virtual address space.

in the standard shadow CCW translation, the seek argument was copied to a shadow location for use by the shadow seek CCW. When the ISAM channel program read a new seek argument into its virtual address space, a normal shadow channel program wouldn't see it; however an ISAM option (full pack minidisk) shadow channel program would have all the seek CCWs pointing at the seek arguments in the virtual address space (rather than a shadow version) and so would pick-up the dynamically read seek argument. There wasn't a "real" security issue for full-pack minidisk ... since there was no place the arm could be moved that would result in a position belonging to some other virtual machine.
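
The shadow-copy vs live-reference distinction above can be sketched as follows (a toy model; the function and variable names are illustrative, not CP internals):

```python
# Toy model of minidisk seek-argument translation.
guest_memory = {"seek_arg": 40}        # virtual cylinder the guest asked for

def build_shadow_seek(minidisk_start, isam_option):
    """Normal translation copies the seek argument into a shadow location
    (adding the minidisk's real cylinder offset); the ISAM option on a
    full-pack minidisk instead points the shadow SEEK back at the guest's
    own seek argument, so in-flight updates are honored."""
    if isam_option:
        assert minidisk_start == 0, "ISAM option only safe for full-pack"
        return lambda: guest_memory["seek_arg"]          # live reference
    shadow = guest_memory["seek_arg"] + minidisk_start   # copied once
    return lambda: shadow

# ordinary minidisk whose virtual cylinder 0 is real cylinder 100:
normal = build_shadow_seek(100, isam_option=False)
# full-pack minidisk (offset 0) with the ISAM option:
isam = build_shadow_seek(0, isam_option=True)

guest_memory["seek_arg"] = 75   # channel program rewrites its own seek arg
print(normal())                 # 140 -- stale: shadow copy made earlier
print(isam())                   # 75  -- live: sees the on-the-fly update
```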

There was another feature added for VTAM ... where VTAM was modifying a "live" channel program, on-the-fly ... as part of dynamic buffer allocation related to reading input from terminals. VTAM would execute a diagnose instruction (after modifying the channel program) as an indication for CP to retranslate the channel program, updating the shadow channel program (which is the real live running version). This characteristic was also somewhat referenced in the thread about speeding up level 1 caches ... how to (dynamically) handle an indeterminate amount of incoming data.

other posts in this thread:
http://www.garlic.com/~lynn/2007d.html#62 Cycles per ASM instruction
http://www.garlic.com/~lynn/2007d.html#63 Cycles per ASM instruction
http://www.garlic.com/~lynn/2007d.html#71 Cycles per ASM instruction

and other posts in the thread about speeding up level 1 caches
http://www.garlic.com/~lynn/2007e.html#7 A way to speed up level 1 caches
http://www.garlic.com/~lynn/2007e.html#10 A way to speed up level 1 caches
http://www.garlic.com/~lynn/2007e.html#11 A way to speed up level 1 caches

Securing financial transactions a high priority for 2007

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Securing financial transactions a high priority for 2007
Newsgroups: alt.folklore.computers
Date: Tue, 27 Feb 2007 11:26:29 -0700
Morten Reistad <first@last.name> writes:
This is the problem with my bank's network banking changes. They used to have a 6-digit signing "mouse" for login and transactions, and otherwise stick to "dumb" https. I can then take action to use a secure machine; using e.g. lynx on an openbsd machine as the browser.

Now they 1) insist on windows 2) put trust in the OS by downloading a huge user side Java blob 3) Only have a single logon, no signing of transaction batches before payment 4) Botch certificate issuance, so I have to accept a certificate that appears "out of the blue".

I now have to change my bank. New cards are picked up Friday. Old cards will be instructed to have a maximum limit of $100 for all payments. I will enjoy seeing them implement that on the software side.

All of this is from adoption of "chip cards solves all" thinking, and lack of security and threat analysis.


there is some amount in the press about the public being on the lookout for external overlays that perform skimming/recording operations in support of replay attacks (information used to create counterfeit cards) ... however, there is a newer generation of skimming/recording devices that are being placed inside the terminals (and are therefore a lot harder to detect).

note that the "chip cards solves all" ... is the subject of the series of posts about yes card exploit
http://www.garlic.com/~lynn/subintegrity.html#yescard

where I once heard somebody comment that there had been billions of dollars spent proving that chips were less secure than magstripe. even some of the suggested countermeasures to the chipcard counterfeit yes card vulnerability may still be subject to man-in-the-middle attacks ... misc. past posts mentioning various kinds of MITM-attacks
http://www.garlic.com/~lynn/subintegrity.html#mitm

also as a countermeasure to wide variety of compromises for (mostly personal/home) machines ... the EU had created the "finread" terminal standard
http://www.garlic.com/~lynn/subintegrity.html#finread

... included

1) external pinpad ... as countermeasure to keylogger + trojan horse (in the PC) recording the chipcard's PIN ... and possibly even replaying the PIN to the chipcard for a stealth transaction w/o the owner's knowledge and

2) external display ... as countermeasure to trojan horse representing one transaction on the (PC's) screen and presenting a totally different transaction to the chipcard.

the emerging (40yr old, new) virtualization (virtual machine) technology is being pushed as a cure ... but it can also be used as part of the disease. As part of a countermeasure/cure ... all downloads are kept in an isolated "sandbox" ... separated and partitioned from more secure operations. As part of the disease, infections that manage to get into the virtualization infrastructure can go totally undetected by all scanning/checking software (and from their stealth position perform all sorts of activities that leave no trace)

lots of posts mentioning all kinds of fraud, exploits, vulnerabilities, threats, risks, etc
http://www.garlic.com/~lynn/subintegrity.html#threat

recent, somewhat related thread ... comment here
http://www.garlic.com/~lynn/aadsm26.htm#38 Usable Security 2007
and also in this thread
http://www.garlic.com/~lynn/2007e.html#12 Securing financial transactions a high priority for 2007

that includes news article mentioning that "card and chip fails to halt card fraud rise"

and news item referenced here
http://www.garlic.com/~lynn/aadsm26.htm#39 Usable Security 2007

about changing gov. regulations where card & check fraud is no longer reported to the authorities ... but instead reported to the financial institution ... and then it is the responsibility of the financial institution to make any reports to the authorities.

and recent threads mentioning various kinds of ssl vulnerabilities
http://www.garlic.com/~lynn/aadsm26.htm#26 man in the middle SSL
http://www.garlic.com/~lynn/aadsm26.htm#27 man in the middle SSL
http://www.garlic.com/~lynn/aadsm26.htm#28 man in the middle SSL
http://www.garlic.com/~lynn/aadsm26.htm#30 man in the middle SSL
http://www.garlic.com/~lynn/aadsm26.htm#31 man in the middle SSL
http://www.garlic.com/~lynn/2007d.html#35 MAC and SSL
http://www.garlic.com/~lynn/2007d.html#36 MAC and SSL
http://www.garlic.com/~lynn/2007d.html#37 MAC and SSL
http://www.garlic.com/~lynn/2007d.html#38 Question on Network Security
http://www.garlic.com/~lynn/aadsm26.htm#32 Failure of PKI in messaging
http://www.garlic.com/~lynn/aadsm26.htm#33 Failure of PKI in messaging
http://www.garlic.com/~lynn/aadsm26.htm#34 Failure of PKI in messaging
http://www.garlic.com/~lynn/aadsm26.htm#35 Failure of PKI in messaging

A way to speed up level 1 caches

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A way to speed up level 1 caches
Newsgroups: comp.arch
Date: Tue, 27 Feb 2007 11:37:02 -0700
Jan Vorbrüggen <jvorbrueggen@not-mediasec.de> writes:
Such pages are locked down for good, until the I/O completes - nothing at all is allowed to change in that period of time. VMS did it that way, and got it right (well, mostly 8-)) from day 1. Reference counting, and all that. Oh, and when the reference count finally goes to zero, the delayed action(s) finally take place. (And that's where most of the bugs were/are.)

over the years ... I got to rewrite sections of that code in the cp (virtual machine) kernel 3 or 4 times ... to fix bugs and handle other features that would creep in during the intervening years ... starting first with cp67 in the 60s.

then later when I got to release the resource manager for vm370 ... well over half the code in the resource manager ... wasn't actually involved in managing resources ... but major restructuring of the whole kernel serialization operation. misc. posts mentioning resource manager
http://www.garlic.com/~lynn/subtopic.html#fairshare
which also included a bunch of stuff involving replacement algorithms
http://www.garlic.com/~lynn/subtopic.html#wsclock

i then got another opportunity when I rewrote the i/o subsystem for the disk engineering labs. they were doing dedicated testing of individual development devices in a scheduled "stand alone" environment ... they had found that when attempting to do testing under an operating system, systems like MVS (at the time) had an MTBF of 15 minutes (with a single development device). I undertook to rewrite the whole infrastructure so that multiple concurrent development devices could be tested "on demand" in an operating system environment. misc. posts mentioning work in the disk engineering labs
http://www.garlic.com/~lynn/subtopic.html#disk

A way to speed up level 1 caches

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A way to speed up level 1 caches
Newsgroups: comp.arch
Date: Tue, 27 Feb 2007 13:21:40 -0700
Morten Reistad <first@last.name> writes:
For some reason, multiprocessor machines have always seemed a lot more responsive than what benchmarks show. The segmented cache may be a reason, one cache per 1/2 processor streams, and processes (including kernel) are usually somewhat processor-sticky at least in the short term.

This should give a similar effect. The OS not only has a cache for itself, there is a whole processor attached.

4way and over can also schedule dedicated hardware to interrupts, file system, network and user space. This may explain the responsiveness.


early on as an undergraduate in the 60s ... i found that responsiveness could be improved if you could do preemption at nearly zero cost (and make reasonable judgments about when preemption was warranted) ... all part of dynamic adaptive scheduling (periodically also referred to as fair share scheduling since the default scheduling policy was fair share). this started before cache machines ... which changed the trade-off guidelines on what would represent nearly zero cost for preemption. in any case, things related to preemption decisions (and responsiveness) have since gotten a lot more complex.

for responsiveness in the multiprocessor scenario, it may be that there are a number of processors that are either idle (and no preemption logic gets involved) or doing something that easily meets the preemption criteria.

in an earlier post ... i mentioned changing the kernel dispatcher to run applications disabled for asynchronous i/o interrupts (as a mechanism for improving cache hit ratio) ... delaying operations which might be associated with responsiveness. However, the delay might actually result in processing several interrupts at one time ... the cache miss (and cache turn-over) occurring on the first interrupt, followed by being able to immediately process a number of other interrupts that were pending at the same time.
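the batching effect might be sketched roughly like this (hypothetical names, not actual kernel code):

```python
from collections import deque

pending = deque()  # hypothetical queue of arrived device interrupts

def device_interrupt(irq):
    # While the dispatcher runs applications disabled for asynchronous
    # I/O interrupts, arriving interrupts simply accumulate here.
    pending.append(irq)

def drain_interrupts():
    """At the next enable point, process everything pending in one batch.

    The cache turn-over (kernel code/data displacing application cache
    lines) is paid once, on the first interrupt; the rest are handled
    while the kernel's cache lines are still resident.
    """
    handled = []
    while pending:
        handled.append(pending.popleft())
    return handled
```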

in previous post,
http://www.garlic.com/~lynn/2007e.html#21 A way to speed up level 1 caches

i mentioned that a lot of my resource manager
http://www.garlic.com/~lynn/subtopic.html#fairshare

had included a lot of restructuring code ... not directly associated with resource management ... a large part of the restructuring had to do with fixing a whole class of problems related to correct serialization ... however, there was another class of kernel restructuring that was also related to multiprocessing support.

for total topic drift ... the unbundling announcement of 23jun69 started change-over from free software to charged for software.
http://www.garlic.com/~lynn/submain.html#unbundle

however, at the time, they used the excuse that kernel software should still be free/bundled (because it was required for the operation of the hardware). by the time i was getting ready to release the resource manager ... that attitude was starting to change ... then my resource manager got selected to be the guinea pig for kernel software charging. I got to spend several months on and off with the legal and business people working out kernel software policies. The policy from that pass was: if kernel software was directly related to hardware operation (like device drivers), it would still be free; other kernel code (like resource management algorithms) could be priced. So the resource manager went out as a charged-for kernel addon.

A problem came in the next release when they wanted to ship multiprocessor support. Multiprocessor support was directly hardware related ... so should be free. However, it was dependent on a bunch of restructuring code that was already being shipped in the charged for resource manager. The problem was resolved by moving a bunch of code out of the (charged for) resource manager into the "free" kernel.

In any case, there were some sleight-of-hand coding tricks that resulted in multiprocessor dispatching being "processor-sticky" (attempting some preservation/re-use of cache contents) ... w/o needing explicit specification of processor affinity.
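a toy sketch of that kind of "processor-sticky" dispatching (hypothetical names; the real thing was in-kernel assembler):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    name: str
    last_cpu: Optional[int] = None  # no explicit affinity is ever specified

def pick_cpu(task, idle_cpus):
    """Soft ("sticky") affinity: prefer the processor the task last ran
    on, in the hope that some of its cache contents survive; otherwise
    take any idle processor."""
    if task.last_cpu in idle_cpus:
        return task.last_cpu
    return min(idle_cpus) if idle_cpus else None

def dispatch(task, idle_cpus):
    cpu = pick_cpu(task, idle_cpus)
    if cpu is not None:
        idle_cpus.discard(cpu)
        task.last_cpu = cpu  # remembered for the next dispatch
    return cpu
```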

Securing financial transactions a high priority for 2007

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Securing financial transactions a high priority for 2007
Newsgroups: alt.folklore.computers
Date: Tue, 27 Feb 2007 17:48:19 -0700
Morten Reistad <first@last.name> writes:
Actually, the current "solutions" are part of the problem, not part of the solution.

ref:
http://www.garlic.com/~lynn/2007e.html#20 Securing financial transactions a high priority for 2007

for whatever reason, i've found that creating correct security systems is very much like creating almost any other kind of correct systems ... in fact when we were doing the ha/cmp product ... security faults was viewed as just another kind of bug
http://www.garlic.com/~lynn/subtopic.html#hacmp

various recent posts referring to various kinds of correct operation
http://www.garlic.com/~lynn/2007e.html#1 Designing database tables for performance?
http://www.garlic.com/~lynn/2007e.html#14 Attractive Alternatives to Mainframes
http://www.garlic.com/~lynn/2007e.html#16 Attractive Alternatives to Mainframes
http://www.garlic.com/~lynn/2007e.html#19 Cycles per ASM instruction
http://www.garlic.com/~lynn/2007e.html#21 A way to speed up level 1 caches
http://www.garlic.com/~lynn/2007e.html#22 A way to speed up level 1 caches

Securing financial transactions a high priority for 2007

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Securing financial transactions a high priority for 2007
Newsgroups: alt.folklore.computers
Date: Wed, 28 Feb 2007 08:53:01 -0700
jmfbahciv writes:
Banks are undergoing a similar thing that happened when the S&Ls were turned loose with no minder.

long winded discussion of the S&L situation (as well as several other issues related to financial institutions)
http://www.garlic.com/~lynn/aepay3.htm#riskm The Thread Between Risk Management and Information Security

part of the issue was that many of the institutions didn't have people that were very sophisticated in dealing with various kinds of financial instruments ... i.e. they were possibly people that did the same thing today as they did yesterday ... and could maintain the status quo (as long as things stayed relatively stable and constant). When there were significant shifts (like when the reserve requirements were cut in half from eight percent to four percent, freeing up a lot of assets that required something to be done) ... they found themselves in unfamiliar waters.

in the case of magstripe devices ... they are basically something you have authentication. On the front there were various countermeasures developed for counterfeit cards ... but that required people at point-of-sale to actually pay some attention. This was a relatively difficult burden. The "magstripe" on the back became the "real" representation of the device. However, it was static data and attacks quickly appeared that could create duplicate counterfeit magstripes ... either thru active recording of the magstripe (skimming) and/or harvesting critical information stored in repositories (data breaches and security breaches).

many of the chips out there basically have duplicated the magstripe information ... and claimed resistance to attacks that would attempt to extract the duplicated magstripe information stored in the chip. however, they were still vulnerable to skimming attacks that eavesdropped on and recorded the information during a standard transaction (exploits that started to appear for magstripe by at least the late 80s) ... and/or attackers that obtained a valid point-of-sale terminal and convinced the chip that a valid transaction was being performed.

now, in a standard two-factor magstripe transaction ... a pin is also required ... which is transmitted back to the authorizing institution for validation. normally multi-factor authentication is considered more secure because there is an implicit assumption that the different authentication factors are subject to different vulnerabilities (modulo people writing their PIN on the card).

in the chip yes card scenario
http://www.garlic.com/~lynn/subintegrity.html#yescard

a PIN is also required and it is therefore considered a more secure multi-factor authentication operation. however, the point-of-sale terminal first authenticates a chipcard by validating the information provided by the chip (essentially an enhanced duplicate of the magstripe information). after the terminal has "validated" the chip, it then asks the chip if the entered PIN was correct.

The yes card reference comes from a counterfeit card responding YES to all questions from the point-of-sale terminal. The (ersatz magstripe) information to create a counterfeit yes card is skimmed and/or harvested from some previous transaction (either a normal valid transaction or an interchange that a normal card has with a point-of-sale terminal in the attacker's possession).

A counterfeit yes card has the valid information installed ... which it can then use for a replay attack against a point-of-sale terminal. Multi-factor authentication is subverted since a yes card will always respond YES to the question of whether a valid pin was entered (regardless of what was actually entered). As a result, an attacker isn't required to know the original, valid PIN.

Furthermore, for the standard point-of-sale terminal deployment for chipcards, after the chip authentication has been performed and the chip has responded YES to the question about whether the PIN is correct, then the POS terminal asks the chip if the transaction is to be done offline (to which a yes card answers YES), and if it is to be an offline transaction, the terminal then asks the chip if the transaction is within the account's credit limit (to which a yes card always answers YES).

The counterfeit yes card response to the offline transaction question avoids the countermeasures that flag an account when a card is reported lost/stolen (since it isn't an online transaction, the terminal doesn't find out until much later whether the account number is still valid or not). Then the counterfeit yes card response to whether the transaction is within the account's credit limit can allow the attacker to perform an unlimited amount of fraud.

The counterfeit yes card starts out basically as a replay attack on the point-of-sale terminal (similar to counterfeit magstripe replay attack) ... but then gets much worse since it manages to bypass multi-factor authentication provisions (valid PIN) and countermeasures to lost/stolen card reporting (by bypassing check on whether account has been flagged).
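the terminal/card dialogue described above can be caricatured like this (hypothetical names and flow; heavily simplified from the real EMV exchange):

```python
class YesCard:
    """Counterfeit card loaded with skimmed static (SDA) data; it
    answers YES to every question the terminal asks."""
    def __init__(self, skimmed_static_data):
        self.static_data = skimmed_static_data
    def authenticate(self):
        return self.static_data        # replayed; validates as "genuine"
    def pin_ok(self, entered_pin):
        return True                    # regardless of what was entered
    def wants_offline(self):
        return True                    # so the lost/stolen flag is never checked
    def within_credit_limit(self, amount):
        return True                    # unlimited fraud

def pos_terminal(card, known_good_data, pin, amount):
    """Hypothetical terminal logic following the flow described above."""
    if card.authenticate() not in known_good_data:
        return "declined"
    if not card.pin_ok(pin):
        return "declined"
    if card.wants_offline() and card.within_credit_limit(amount):
        return "approved offline"      # issuer is never consulted
    return "sent online"
```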

In various referenced press announcements about yes card exploits,
http://www.garlic.com/~lynn/subintegrity.html#yescard

there are references to the yes card being an "SDA" vulnerability (i.e. the authentication information is static data and therefore is subject to eavesdropping/harvesting and replay attacks). The suggested countermeasure is "DDA" ... switching to dynamic data authentication (things like digital signatures which can be unique/change for each use).

However, it is possible that even with DDA, there still may be a yes card man-in-the-middle attack.
http://www.garlic.com/~lynn/subintegrity.html#mitm

A lost/stolen valid card (and/or a valid card obtained under false pretenses) is paired with a MITM yes card. The MITM yes card transparently passes the initial chip authentication chatter between the terminal and the real card. After that initial interaction, the MITM yes card handles all further communication (i.e. whether the correct PIN was entered, whether it should be an offline transaction, and whether the transaction is within the account's credit limit). Again multi-factor authentication has been negated and the flagged-account countermeasure has been bypassed (by forcing an offline transaction and answering YES regardless of whether the transaction is within the account's credit limit).

A way to speed up level 1 caches

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A way to speed up level 1 caches
Newsgroups: comp.arch
Date: Wed, 28 Feb 2007 07:43:30 -0700
Jan Vorbrüggen <jvorbrueggen@not-mediasec.de> writes:
Quite. By chance I re-read yesterday a 2001 presentation on the VMS support for multiprocessor (and NUMA) systems. They added the capabilities to direct all interrupts to one processor, have the lock manager run on only one processor (which in addition to the cache effects allows it to run in a mode where much less (spin)locking of data structures is required) and to ensure that the driver code for certain devices always executes on a given processor. You then use performance measurement to determine whether your dedicated CPU(s) still have some time left to do other work.

What they didn't say is what you're supposed to do when your interrupt or lock manager load exceeds the capability of one processor.


post of old announcement of VMS adding support for "symmetric" multiprocessing (as opposed to asymmetric multiprocessing ... where interrupts, i/o and other things are possibly only processed on a single processor)
http://www.garlic.com/~lynn/2007.html#46 How many 36-bit Unix ports in the old days?

Securing financial transactions a high priority for 2007

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Securing financial transactions a high priority for 2007
Newsgroups: alt.folklore.computers
Date: Wed, 28 Feb 2007 09:26:02 -0700
Eric Sosman <esosman@acm-dot-org.invalid> writes:
Sloppiness in handling records arises, I think, because the records are of different levels of importance to the various parties. You feel that no effort should be spared to protect your credit card numbers and so on from misuse, but the merchant who's processing your order just wants to get the transaction over with as quickly and inexpensively as possible. Both views are reasonable; it's all a matter of whose shoes you wear.

oft referenced, old posting about security proportional to risk
http://www.garlic.com/~lynn/2001h.html#61

basically the value of the information to the attacker is significantly larger than the value (and/or resources available) to the defender. the resources available to the defender are basically proportional to the profit from the transaction ... some fraction of the profit from each transaction is available to the defender to protect the information. the value of the information to the attacker is basically proportional to the credit limit for each account. As a result, the attacker can almost always spend far more on the attack than the defender can afford to spend on defense.
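as a purely illustrative back-of-the-envelope (made-up numbers, not from the referenced post):

```python
# Hypothetical illustrative numbers only, in cents.
transaction_profit = 50     # merchant profit on a single transaction
defense_share = 10          # cents of that profit available for security
credit_limit = 500_000      # $5000 account credit limit -- the attacker's prize

# The defender's per-transaction security budget vs. the attacker's payoff:
defender_budget = defense_share
attacker_value = credit_limit

# The attacker can rationally outspend the defender by orders of magnitude:
advantage = attacker_value // defender_budget
```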

the issue is further compounded by the fact that there are possibly dozens of business processes that require a merchant's ready access to past transaction information. the current environment requires previous transaction information to be kept confidential and never divulged, as a countermeasure to attackers using the information for future fraudulent transactions. at the same time, standard business processes require that the information be readily available. These diametrically conflicting requirements have led to my periodic comments that even if the planet was buried miles deep under encryption, it still wouldn't prevent account/transaction information leakage.

in the mid-90s the x9a10 financial standards working group was given the requirement to preserve the integrity of the financial infrastructure for all retail payments. the result was the x9.59 financial standard
http://www.garlic.com/~lynn/x959.html#x959

part of the x9.59 standard was eliminating the usefulness of past transaction information for performing future fraudulent transactions. x9.59 didn't do anything about eliminating skimming and eavesdropping attacks on valid transactions, or about eliminating data breaches and security breaches on repositories of previous transactions. what the x9.59 financial standard did was eliminate attackers being able to use the information from previous transactions for performing future fraudulent transactions (replay attacks)

the x9.59 standard didn't do anything about eliminating such attacks; however, it did drastically reduce their risk by eliminating the attackers' ability to use the harvested information for performing fraudulent transactions (i.e. most of the value to the attacker was eliminated).

x9.59 is resistant to replay attacks ... mentioned in previous post:
http://www.garlic.com/~lynn/2007e.html#24 Securing financial transactions a high priority for 2007

since it uses "dynamic data authentication"

X9.59 is also resistant to man-in-the-middle attacks
http://www.garlic.com/~lynn/subintegrity.html

... mentioned in previous post
http://www.garlic.com/~lynn/2007e.html#24 Securing financial transactions a high priority for 2007

since rather than performing the authentication separate from the transaction ... authentication is part of every transaction. Having authentication as an operation separate from the transaction ... in effect leaves the transaction "naked" and vulnerable.
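a toy sketch of authentication being part of the transaction itself (an HMAC here is a stand-in for the x9.59 digital signature; all names are hypothetical):

```python
import hashlib
import hmac
import json

def sign_transaction(txn, key):
    """Authentication travels inside the transaction: the request
    carries a MAC computed over the exact transaction contents
    (account, amount, a per-transaction sequence number), so captured
    data is useless for constructing a *different* transaction."""
    payload = json.dumps(txn, sort_keys=True).encode()
    mac = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return dict(txn, mac=mac)

def verify_transaction(signed, key):
    """Recompute the MAC over everything except the MAC itself."""
    body = {k: v for k, v in signed.items() if k != "mac"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signed["mac"], expected)
```

altering any field (or replaying the authentication with a different transaction body) makes verification fail, which is the point: there is no separately authenticated session leaving the transaction "naked".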

past posts/comments about being unable to prevent account/transaction information leakage ... even if the planet was buried under miles of encryption
http://www.garlic.com/~lynn/aadsm22.htm#2 GP4.3 - Growth and Fraud - Case #3 - Phishing
http://www.garlic.com/~lynn/aadsm22.htm#36 Unforgeable Blinded Credentials
http://www.garlic.com/~lynn/aadsm24.htm#38 Interesting bit of a quote
http://www.garlic.com/~lynn/aadsm24.htm#48 more on FBI plans new Net-tapping push
http://www.garlic.com/~lynn/aadsm25.htm#13 Sarbanes-Oxley is what you get when you don't do FC
http://www.garlic.com/~lynn/aadsm26.htm#8 What is the point of encrypting information that is publicly visible?
http://www.garlic.com/~lynn/2005u.html#3 PGP Lame question
http://www.garlic.com/~lynn/2005v.html#2 ABN Tape - Found
http://www.garlic.com/~lynn/2006e.html#26 Debit Cards HACKED now
http://www.garlic.com/~lynn/2006h.html#15 Security
http://www.garlic.com/~lynn/2006o.html#37 the personal data theft pandemic continues
http://www.garlic.com/~lynn/2006p.html#8 SSL, Apache 2 and RSA key sizes
http://www.garlic.com/~lynn/2006t.html#40 Encryption and authentication
http://www.garlic.com/~lynn/2006u.html#43 New attacks on the financial PIN processing
http://www.garlic.com/~lynn/2006v.html#2 New attacks on the financial PIN processing
http://www.garlic.com/~lynn/2006v.html#49 Patent buster for a method that increases password security
http://www.garlic.com/~lynn/2006y.html#25 "The Elements of Programming Style"
http://www.garlic.com/~lynn/2007b.html#8 Special characters in passwords was Re: RACF - Password rules
http://www.garlic.com/~lynn/2007b.html#20 How many 36-bit Unix ports in the old days?
http://www.garlic.com/~lynn/2007b.html#60 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#10 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#33 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#53 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007d.html#34 Mixed Case Password on z/OS 1.7 and ACF 2 Version 8

...

past posts/comments raising the issue of the vulnerability of "naked" transactions:
http://www.garlic.com/~lynn/aadsm24.htm#5 New ISO standard aims to ensure the security of financial transactions on the Internet
http://www.garlic.com/~lynn/aadsm24.htm#7 Naked Payments IV - let's all go naked
http://www.garlic.com/~lynn/aadsm24.htm#8 Microsoft - will they bungle the security game?
http://www.garlic.com/~lynn/aadsm24.htm#9 Naked Payments IV - let's all go naked
http://www.garlic.com/~lynn/aadsm24.htm#10 Naked Payments IV - let's all go naked
http://www.garlic.com/~lynn/aadsm24.htm#12 Naked Payments IV - let's all go naked
http://www.garlic.com/~lynn/aadsm24.htm#14 Naked Payments IV - let's all go naked
http://www.garlic.com/~lynn/aadsm24.htm#22 Naked Payments IV - let's all go naked
http://www.garlic.com/~lynn/aadsm24.htm#26 Naked Payments IV - let's all go naked
http://www.garlic.com/~lynn/aadsm24.htm#27 DDA cards may address the UK Chip&Pin woes
http://www.garlic.com/~lynn/aadsm24.htm#30 DDA cards may address the UK Chip&Pin woes
http://www.garlic.com/~lynn/aadsm24.htm#31 DDA cards may address the UK Chip&Pin woes
http://www.garlic.com/~lynn/aadsm24.htm#38 Interesting bit of a quote
http://www.garlic.com/~lynn/aadsm24.htm#41 Naked Payments IV - let's all go naked
http://www.garlic.com/~lynn/aadsm24.htm#42 Naked Payments II - uncovering alternates, merchants v. issuers, Brits bungle the risk, and just what are MBAs good for?
http://www.garlic.com/~lynn/aadsm24.htm#46 More Brittle Security -- Agriculture
http://www.garlic.com/~lynn/aadsm25.htm#20 Identity v. anonymity -- that is not the question
http://www.garlic.com/~lynn/aadsm25.htm#25 RSA SecurID SID800 Token vulnerable by design
http://www.garlic.com/~lynn/aadsm25.htm#28 WESII - Programme - Economics of Securing the Information Infrastructure
http://www.garlic.com/~lynn/aadsm26.htm#6 Citibank e-mail looks phishy
http://www.garlic.com/~lynn/aadsm26.htm#13 Who has a Core Competency in Security?
http://www.garlic.com/~lynn/2006m.html#15 OpenSSL Hacks
http://www.garlic.com/~lynn/2006m.html#24 OT - J B Hunt
http://www.garlic.com/~lynn/2006o.html#35 the personal data theft pandemic continues
http://www.garlic.com/~lynn/2006o.html#37 the personal data theft pandemic continues
http://www.garlic.com/~lynn/2006o.html#40 the personal data theft pandemic continues
http://www.garlic.com/~lynn/2006t.html#40 Encryption and authentication
http://www.garlic.com/~lynn/2006u.html#43 New attacks on the financial PIN processing
http://www.garlic.com/~lynn/2006y.html#8 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2006y.html#25 "The Elements of Programming Style"
http://www.garlic.com/~lynn/2007b.html#60 Securing financial transactions a high priority for 2007

IBM S/360 series operating systems history

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM S/360 series operating systems history
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 01 Mar 2007 09:52:17 -0700
Shmuel Metz , Seymour J. wrote:
My recollection is that the original virtual storage announcement for S/370 already used the term MVS for OS/VS2 R2. However, you will still see remnants in the code of the original names, AOS/1 and AOS/2.

some number of customers had been doing stuff to make MVT run better in a cp67 (on 360/67) virtual machine ... including various things related to virtual memory.

there was a period when we were making regular trips from cambridge to pok to participate in 370 architecture meetings ... especially related to virtual machine and virtual memory operation ... and would periodically knock around the 706 machine room in the evenings where the AOS prototype was being built ... crafting virtual memory onto the side of MVT for what was to become SVS.

Ludlow(?, I'm pretty sure i remember his name) was doing a lot of the work. Part of the effort involved taking the (virtual to real) channel program translator/builder from CP67 (CCWTRANS) and cobbling it into the side of MVT (i.e. a lot of AOS ... instead of running MVT under cp67 virtual machine virtual memory ... was hacking various pieces of cp67 virtual memory support into the side of a MVT kernel ... especially the channel program translator ... which was some amount of fairly complicated code, involving interpreting the virtual channel program, making a shadow, finding all the "data" virtual pages, fixing them in core ... and using the "real" addresses).

recent thread discussing patching CCWTRANS to handle ISAM and other self-modifying channel programs ... i.e. CCWTRANS built a "shadow" of the application's virtual channel program ... with "real" addresses ... and ran the "shadow" channel program ... so any dynamic modifications that an application made to its "virtual" channel program never showed up in the (shadow) channel program actually being executed.
http://www.garlic.com/~lynn/2007e.html#14 Cycles per ASM instruction
http://www.garlic.com/~lynn/2007e.html#17 A way to speed up level 1 caches
http://www.garlic.com/~lynn/2007e.html#19 Cycles per ASM instruction
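the translation idea can be sketched like this ... a toy illustration, not the actual CP67 CCWTRANS code (all the names and structures here are made up):

```python
# Hypothetical sketch of the CCWTRANS idea: the virtual machine's channel
# program uses guest-virtual addresses, so the translator builds a "shadow"
# copy with the guest data pages pinned in real storage and real addresses
# substituted.  The channel executes the shadow, never the guest's copy.

PAGE = 4096

class CCW:
    """One channel command word: opcode, data address, byte count."""
    def __init__(self, op, addr, count):
        self.op, self.addr, self.count = op, addr, count

def translate_channel_program(virt_ccws, page_table, pin_page):
    """Build a shadow channel program with real addresses.

    page_table maps guest virtual page number -> real frame number
    (assume the page is faulted in before we get here); pin_page is
    called to fix each frame in core for the duration of the I/O.
    """
    shadow = []
    for ccw in virt_ccws:
        vpage = ccw.addr // PAGE
        real_frame = page_table[vpage]          # locate the real frame
        pin_page(real_frame)                    # fix it in core for the I/O
        real_addr = real_frame * PAGE + ccw.addr % PAGE
        shadow.append(CCW(ccw.op, real_addr, ccw.count))
    return shadow                               # channel runs this copy
```

this also shows why self-modifying channel programs (like ISAM's) needed special handling: the guest modifies its own copy, which the channel never sees. The real code also had to split a CCW whose data area crossed a page boundary into data-chained CCWs ... omitted here for brevity.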

A slightly different issue I had was with the POK performance modeling group that was trying to come up with virtual page replacement algorithms. They eventually decided on including some optimization feature that I claimed would totally distort the "least recently used" assumptions underlying page replacement. They did it anyway ... and it wasn't until well into the MVS release cycle in the late 70s ... that it dawned on them that they were selecting high-use LINKPACK shared executable pages for replacement before private, lower-used application data pages.
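a toy illustration of how that kind of distortion can play out ... the actual MVS heuristic isn't described here, but suppose (hypothetically) the "optimization" preferred unmodified pages for replacement (no page-out I/O needed before the frame can be reused):

```python
# Illustrative sketch (not the actual MVS code).  Shared executable
# pages are never modified, so a policy that prefers unmodified pages
# steals the hottest pages in the system first -- exactly the inversion
# of "least recently used" described above.

from dataclasses import dataclass

@dataclass
class Page:
    name: str
    last_ref: int       # timestamp of last reference (higher = more recent)
    modified: bool      # would need a page-out before the frame is reusable

def pure_lru(pages):
    # steal the page referenced longest ago
    return min(pages, key=lambda p: p.last_ref)

def biased_lru(pages):
    # "optimized": unmodified pages sort ahead of modified ones;
    # recency only breaks ties within each class
    return min(pages, key=lambda p: (p.modified, p.last_ref))
```

with a hot, unmodified shared page and a cold, modified private page, pure LRU steals the cold private page while the biased policy steals the hot shared one.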

lots of past posts related to page replacement algorithms
http://www.garlic.com/~lynn/subtopic.html#wsclock
as well as old email on the same subject
http://www.garlic.com/~lynn/lhwemail.html#globallru

past posts in this thread:
http://www.garlic.com/~lynn/2007d.html#48 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007d.html#51 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007d.html#65 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007d.html#69 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007d.html#72 IBM S/360 series operating systems history

misc. pasts posts taking ludlow's name in vain:
http://www.garlic.com/~lynn/2000c.html#34 What level of computer is needed for a computer to Love?
http://www.garlic.com/~lynn/2001b.html#18 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
http://www.garlic.com/~lynn/2001i.html#37 IBM OS Timeline?
http://www.garlic.com/~lynn/2001i.html#38 IBM OS Timeline?
http://www.garlic.com/~lynn/2001l.html#36 History
http://www.garlic.com/~lynn/2002l.html#65 The problem with installable operating systems
http://www.garlic.com/~lynn/2002l.html#67 The problem with installable operating systems
http://www.garlic.com/~lynn/2002p.html#49 Linux paging
http://www.garlic.com/~lynn/2002p.html#51 Linux paging
http://www.garlic.com/~lynn/2003k.html#27 Microkernels are not "all or nothing". Re: Multics Concepts For
http://www.garlic.com/~lynn/2004e.html#40 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2005b.html#49 The mid-seventies SHARE survey
http://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line
http://www.garlic.com/~lynn/2005p.html#45 HASP/ASP JES/JES2/JES3
http://www.garlic.com/~lynn/2005s.html#25 MVCIN instruction
http://www.garlic.com/~lynn/2005t.html#7 2nd level install - duplicate volsers
http://www.garlic.com/~lynn/2006b.html#32 Multiple address spaces

Securing financial transactions a high priority for 2007

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Securing financial transactions a high priority for 2007
Newsgroups: alt.folklore.computers
Date: Thu, 01 Mar 2007 12:14:06 -0700
krw <krw@att.bizzzz> writes:
No, they're coded by people.

basically check readers/sorters would read the micr at the bottom of the check ... with the rest of the information on the check (like the amount) encoded by human operators.

big part of check readers/sorters was to pull the routing code off the micr ... so the piece of paper could get sorted into bins that would eventually involve physical transportation to the correct financial institution.

it wasn't unheard of to get a check or two every month where there had been an extra sticky strip pasted to the bottom of the physical check with a (new) micr routing code (keyed by a human operator) .... because the micr on the paper check wasn't readable (remember "do not fold, spindle or mutilate"?) ... people's handwriting was well beyond the capability of such electronic readers (which could have problems even with the well structured, preprinted micr coding).

the lore has been that a big part of federal express business started with overnight transportation of paper checks around the US ... for clearing from the federal reserve. A huge bank of check sorters in Nashville ... next to the airport ... planes from all over the country arriving in the middle of the night ... the paper checks being sorted and then piled back onto the planes for delivery in the morning.

check21 is designed to do away with all the transport of physical paper ... the check being imaged at point-of-sale ... with the paper being returned to the person. it is still necessary to electronically read the micr routing code from the image for correct (electronic) routing.
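the routing field itself is a nine-digit ABA routing transit number with a built-in check digit (mod-10 with the published 3-7-1 weights) ... which is how a reader/sorter (or the check21 image processing) can reject a misread field before mis-sorting the item. a minimal validator:

```python
# Validate an ABA routing transit number using the standard mod-10
# checksum with weights 3, 7, 1 repeated across the nine digits.

def valid_routing_number(rtn: str) -> bool:
    if len(rtn) != 9 or not rtn.isdigit():
        return False
    weights = (3, 7, 1) * 3
    return sum(w * int(d) for w, d in zip(weights, rtn)) % 10 == 0
```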

a few past posts/threads mentioning check sorters
http://www.garlic.com/~lynn/99.html#136a checks (was S/390 on PowerPC?)
http://www.garlic.com/~lynn/99.html#155 checks (was S/390 on PowerPC?)
http://www.garlic.com/~lynn/2002.html#18 Infiniband's impact was Re: Intel's 64-bit strategy
http://www.garlic.com/~lynn/2002g.html#2 Computers in Science Fiction

Securing financial transactions a high priority for 2007

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Securing financial transactions a high priority for 2007
Newsgroups: alt.folklore.computers
Date: Thu, 01 Mar 2007 12:43:08 -0700
Identity Fraud: ID Theft Victims, Losses Take Welcome Nosedive
http://www.banktechnews.com/article.html?id=20070226T5LTLE8K

there has been some effort by FTC and others to differentiate types of ID Theft ... at least into 1) Identity Fraud (where identity information is used to do various things like opening new accounts) and 2) Account Fraud (fraudulent transactions against existing accounts).

The above article refers to Identity Theft losses dropping to $49.3billion in 2006 from $55.7billion in 2005.

However, it goes on to say that most of that improvement comes from better processing of requests to open new accounts.

while:

... "existing accounts have the highest average fraud" ... $7,560 and "the average consumer cost from fraud rose sharply, from $431 to $535".

so, lumping all kinds of fraud together, there was an overall reduction of about 10percent (from $55.7billion to $49.3billion) ... while, at the same time, the avg. consumer fraud costs rose sharply by approx. 1/4th from $431 to $535.
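checking the arithmetic on the quoted numbers:

```python
# verify the percentages implied by the figures quoted above
total_2005, total_2006 = 55.7e9, 49.3e9
drop = (total_2005 - total_2006) / total_2005
print(f"overall reduction: {drop:.1%}")          # 11.5% ... "about 10percent"

cost_2005, cost_2006 = 431, 535
rise = (cost_2006 - cost_2005) / cost_2005
print(f"avg consumer cost rise: {rise:.1%}")     # 24.1% ... approx. 1/4th
```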

Health Care

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Health Care
Newsgroups: alt.folklore.computers
Date: Thu, 01 Mar 2007 16:48:43 -0700
the US comptroller (appointed in the mid-90s for a 15yr term) continues his tirade against the prescription drug bill ... as the most financially irresponsible law in 40yrs ... which may have already bankrupted the US. Most recent comments are to be broadcast Sunday night on CBS 60mins.

previous posts/threads in this n.g. mentioning US Comptroller comments from last year on the same subject.
http://www.garlic.com/~lynn/2006f.html#41 The Pankian Metaphor
http://www.garlic.com/~lynn/2006f.html#44 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#9 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#14 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#27 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#2 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#3 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#4 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#17 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#19 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#33 The Pankian Metaphor
http://www.garlic.com/~lynn/2006o.html#61 Health Care
http://www.garlic.com/~lynn/2006p.html#17 Health Care
http://www.garlic.com/~lynn/2006t.html#26 Universal constants

Quote from comp.object

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Quote from comp.object
Newsgroups: comp.databases.theory
Date: Thu, 01 Mar 2007 17:37:44 -0700
Sampo Syreeni <decoy@iki.fi> writes:
In the end such an organization would probably be faster than an IMS database because every overhead that could be cut would have been, yet higher level operations like multitable joins which allow for cost amortization would have been properly declared in relational syntax, and fully exploited. Such savings are not possible under the interface offered by IMS, evenwhile they're the lifeblood of the RM.

as i've mentioned before ... the exchange between the IMS group and the System/R group in the late 70s ... basically was that System/R had drastically increased system overhead while significantly reducing manual/human maintenance effort.

The trade-off was that IMS carried direct pointers as part of the database "data" ... which were exposed to the database applications and had to be managed and dealt with. The relational System/R implementation had an abstraction that did away with the exposed record pointers and therefore eliminated a significant amount of human/manual/administrative effort dealing with them. The System/R "costs" were an index implemented under the covers (below the abstraction interface) that significantly drove up the number of disk I/O operations needed to reach the desired information and also typically doubled the amount of physical disk space (for typical target applications of the period) ... but eliminated the manual costs of dealing with exposed record pointers.

The change over in the 80s was general increase in people costs (driving up the human/manual/administrative costs) and general decrease in computing hardware and resource costs ... changing human/hardware cost tradeoff decisions.

There was also significant increase in amount of real storage for typical configurations ... allowing much of the relational implementation index structure to be cached, mitigating the number of actual physical disk I/Os involved in dealing with the index structure, and significant reduction in disk space cost/bit muting the issue about doubling physical disk space.

The next human constraint/bottleneck appears to be the intellectual effort related to "normalization". Some past studies have indicated that this is significant enuf that some large organizations were found with six thousand different RDBMS deployments ... where over 90percent of the information was common. The evolution appears to have been that a RDBMS (potentially because of the normalization constraints) tends to be relatively mission-specific (potentially covering a number of different applications, but still focused on a specific business mission). At some point, adding a somewhat different mission, it became simpler to take a subset of the original data and add just the additional items for the different mission. This repeats a number of times over a decade or more ... and the organization finds itself with 6000 very similar but still different deployments.

There are still some number of significantly large business operations which continue to find they aren't able to justify the move from IMS type infrastructures to RDBMS operation. For the most part the value of the operation easily justifies both the hardware and people costs ... and the aggregate data may be so large ... and access patterns sparse enough ... that there is not a high probability that significant amounts of (RDBMS) index would already be cached (to eliminate needing several disk operations to arrive at the desired record). In some cases the issue may be that they have elapsed time constraints (like overnight batch windows) where elapsed processing time and the number of (serially ordered) disk I/Os represent a significant consideration.
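a back-of-envelope model of the disk I/O trade-off described above (illustrative numbers, not measurements):

```python
# Toy model of IMS direct pointers vs a relational index.  An exposed
# direct pointer reaches the record in one disk read; an index of
# depth d costs d index-page reads plus the data read -- unless the
# index pages are already cached in real storage.

def ios_direct_pointer():
    return 1                      # pointer stored in the data itself

def ios_index_lookup(depth, cache_hit_rate):
    # expected physical reads: uncached index levels + the data page
    return depth * (1 - cache_hit_rate) + 1

print(ios_direct_pointer())       # 1 I/O ... but humans manage the pointers
print(ios_index_lookup(3, 0.0))   # 70s: little real storage, index uncached
print(ios_index_lookup(3, 1.0))   # 80s: index fully cached, back to 1 I/O
```

with a three-level index and no caching, the relational lookup costs four physical reads against one ... fully cached, the gap disappears, which is the 80s shift described above.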

Periodically there are statements that there may still be more aggregate data in these types of repositories than aggregate data existing in RDBMS repositories.

misc. past posts mentioning system/r
http://www.garlic.com/~lynn/submain.html#systemr

I/O in Emulated Mainframes

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: I/O in Emulated Mainframes
Newsgroups: bit.listserv.vmesa-l
Date: Fri, 02 Mar 2007 07:53:24 -0700
oft repeated story about mainframe emulation and I/O with regard to 370/158 and integrated channels; most recent telling
http://www.garlic.com/~lynn/2007d.html#62 Cycles per ASM instruction

basically the 158 microcode engine had both microcode for emulating 370 and also for emulating channels. in the transition to 303x machines ... the (158) integrated channel microcode was split-off into a dedicated box called the channel director. A 3031 was basically two 158 microcode engines, one dedicated to running the integrated channel microcode and the other dedicated to running the 370 emulation microcode (so a uniprocessor 3031 could be considered a two-processor configuration ... and a 3031 SMP was actually a four processor configuration ... since each emulated 370 engine had its corresponding channel director). A 3032 was a repackaged 370/168 with one or more (158 microcode engine integrated channel) "channel directors". A 3033 was the 168 wiring diagram mapped to faster chip technology ... along with one or more (158 microcode engine integrated channel) "channel directors" ... i.e. a channel director supported 6 channels ... so to get a 16 channel configuration, you needed three channel directors. A two processor 3033 SMP ... was typically an eight processor configuration ... two 3033 processors, with each processor (typically) having three channel directors.

note that splitting off the integrated channel microcode onto a dedicated processor made the 3031 benchmarks better than the 370/158 (even tho the microprocessor engines were the same) ... with the 3031 benchmarking almost as fast as a 4341 ... the above URL reference also contains results of the RAIN benchmark on 158, 3031 and an early 4341 engineering machine (which ran about 10-15 percent slower than the production machines shipped to customers).

similarly, 370 115/125 had a memory bus that provided 9 "slots" for up to nine processors. A 115 had a microcode engine running dedicated 370 microcode emulation ... and up to eight other (identical) processors running other microcode loads (communication controller microcode load, disk controller microcode load, etc). A 125 was identical to 115 except the processor engine running 370 microcode emulation was 50percent faster than the other processor engines. recent posts with discussion of 370 115/125
http://www.garlic.com/~lynn/2007d.html#71 Cycles per ASM instruction
http://www.garlic.com/~lynn/2007d.html#72 IBM S/360 series operating systems history
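counting the microcode engines per the 303x description above:

```python
import math

# total microcode engines = 370 emulation engines + channel directors,
# where each channel director (a dedicated 158 engine) handles 6 channels
def engines(cpus, channels_per_cpu, channels_per_director=6):
    directors = cpus * math.ceil(channels_per_cpu / channels_per_director)
    return cpus + directors

print(engines(1, 16))   # 3033 uniprocessor, 16 channels: 1 + 3 directors = 4
print(engines(2, 16))   # two-processor 3033 SMP: 2 + 6 = 8 engines
```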

lots of past posts referring to 303x channel director being dedicated 158 microcode engine with "integrated channel microcode" from 370/158
http://www.garlic.com/~lynn/97.html#20 Why Mainframes?
http://www.garlic.com/~lynn/98.html#23 Fear of Multiprocessing?
http://www.garlic.com/~lynn/99.html#7 IBM S/360
http://www.garlic.com/~lynn/99.html#176 S/360 history
http://www.garlic.com/~lynn/99.html#187 Merced Processor Support at it again
http://www.garlic.com/~lynn/2000.html#78 Mainframe operating systems
http://www.garlic.com/~lynn/2000c.html#69 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
http://www.garlic.com/~lynn/2000d.html#11 4341 was "Is a VAX a mainframe?"
http://www.garlic.com/~lynn/2000d.html#12 4341 was "Is a VAX a mainframe?"
http://www.garlic.com/~lynn/2000d.html#21 S/360 development burnout?
http://www.garlic.com/~lynn/2000g.html#11 360/370 instruction cycle time
http://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
http://www.garlic.com/~lynn/2001c.html#3 Z/90, S/390, 370/ESA (slightly off topic)
http://www.garlic.com/~lynn/2001c.html#6 OS/360 (was LINUS for S/390)
http://www.garlic.com/~lynn/2001i.html#34 IBM OS Timeline?
http://www.garlic.com/~lynn/2001j.html#3 YKYGOW...
http://www.garlic.com/~lynn/2001j.html#14 Parity - why even or odd (was Re: Load Locked (was: IA64 running out of steam))
http://www.garlic.com/~lynn/2001l.html#24 mainframe question
http://www.garlic.com/~lynn/2001l.html#32 mainframe question
http://www.garlic.com/~lynn/2002.html#36 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
http://www.garlic.com/~lynn/2002.html#48 Microcode?
http://www.garlic.com/~lynn/2002d.html#7 IBM Mainframe at home
http://www.garlic.com/~lynn/2002f.html#8 Is AMD doing an Intel?
http://www.garlic.com/~lynn/2002i.html#19 CDC6600 - just how powerful a machine was it?
http://www.garlic.com/~lynn/2002i.html#21 CDC6600 - just how powerful a machine was it?
http://www.garlic.com/~lynn/2002i.html#23 CDC6600 - just how powerful a machine was it?
http://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
http://www.garlic.com/~lynn/2002p.html#59 AMP vs SMP
http://www.garlic.com/~lynn/2003.html#39 Flex Question
http://www.garlic.com/~lynn/2003g.html#22 303x, idals, dat, disk head settle, and other rambling folklore
http://www.garlic.com/~lynn/2003g.html#32 One Processor is bad?
http://www.garlic.com/~lynn/2003m.html#31 SR 15,15 was: IEFBR14 Problems
http://www.garlic.com/~lynn/2004.html#8 virtual-machine theory
http://www.garlic.com/~lynn/2004.html#9 Dyadic
http://www.garlic.com/~lynn/2004.html#10 Dyadic
http://www.garlic.com/~lynn/2004.html#25 40th anniversary of IBM System/360 on 7 Apr 2004
http://www.garlic.com/~lynn/2004d.html#12 real multi-tasking, multi-programming
http://www.garlic.com/~lynn/2004e.html#51 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004f.html#21 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004g.html#17 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004g.html#50 Chained I/O's
http://www.garlic.com/~lynn/2004m.html#17 mainframe and microprocessor
http://www.garlic.com/~lynn/2004n.html#14 360 longevity, was RISCs too close to hardware?
http://www.garlic.com/~lynn/2004o.html#7 Integer types for 128-bit addressing
http://www.garlic.com/~lynn/2005b.html#26 CAS and LL/SC
http://www.garlic.com/~lynn/2005d.html#62 Misuse of word "microcode"
http://www.garlic.com/~lynn/2005e.html#59 System/360; Hardwired vs. Microcoded
http://www.garlic.com/~lynn/2005f.html#41 Moving assembler programs above the line
http://www.garlic.com/~lynn/2005h.html#40 Software for IBM 360/30
http://www.garlic.com/~lynn/2005m.html#25 IBM's mini computers--lack thereof
http://www.garlic.com/~lynn/2005p.html#1 Intel engineer discusses their dual-core design
http://www.garlic.com/~lynn/2005q.html#30 HASP/ASP JES/JES2/JES3
http://www.garlic.com/~lynn/2005s.html#22 MVCIN instruction
http://www.garlic.com/~lynn/2006m.html#27 Old Hashing Routine
http://www.garlic.com/~lynn/2006n.html#16 On the 370/165 and the 360/85
http://www.garlic.com/~lynn/2006o.html#27 oops
http://www.garlic.com/~lynn/2006q.html#31 VAXen with switchmode power supplies?
http://www.garlic.com/~lynn/2006r.html#34 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006r.html#40 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006s.html#40 Ranking of non-IBM mainframe builders?
http://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?
http://www.garlic.com/~lynn/2006t.html#19 old vm370 mitre benchmark
http://www.garlic.com/~lynn/2007b.html#18 How many 36-bit Unix ports in the old days?
http://www.garlic.com/~lynn/2007d.html#21 How many 36-bit Unix ports in the old days?

IBM S/360 series operating systems history

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM S/360 series operating systems history
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 02 Mar 2007 08:29:28 -0700
Phil Payne wrote:
They had Memorex Double Density 3350s with IDI - "Intelligent Dual Interface". Was ever anything so inappropriately named? A status bus parity check - a common occurence - caused all IDI-linked controllers to forget all owed interrupts. Total system hang. SVS had a MIH, but its channel redrive was - IMO - incorrect. I can't remember after a quarter of a century, but it did a Clear IO when it should have done a Clear Channel or vice versa. I zapped the opcode

for other topic drift ... san jose had done a 3350 (prior to 3380) with twice the number of cylinders ... but they couldn't get MVS to provide the device support ... so they shipped it as two emulated regular 3350s ... which didn't see a lot of uptake, partially because the two independent optimized seek queues would be contending for a single arm (resulting in relatively random arm motion).

this was an ongoing problem. MVS also wouldn't do FBA (fixed-block-architecture) device support. They were even offered fully tested code ... and the reply was there would still be a twenty-some million dollar change bill (documentation, training, etc) ... and it needed to demonstrate real incremental ROI (i.e. real additional new sales, as opposed to customers buying FBA in lieu of CKD).

lots of past posts about hacking around in the disk engineering and test labs (bldgs 14&15)
http://www.garlic.com/~lynn/subtopic.html#disk

Date: 09/07/82 12:16:54
From: wheeler
...
IBM double density (double the number of tracks) are here also. The engineers have been fighting with the OS people (completely unsuccessfully) to support the box in native mode .. i.e. one device with twice the number of cylinders as a 3350. OS data management people would have nothing of it. Several engineers who had MVT experience said that they could go in and do it easily just by defining a new device type and updating a couple of tables (almost as trivial as what it takes for VM). OS data management replied that things have completely changed since then, implying that they might not even know all the routines that may have tables now. Result is that the engineers have been forced into simulating two 3350 drives on a single double density 3350 because the OS crowd is completely incapable of getting their act together. As a result any performance optimization techniques are going to be blown almost completely out of the window (in some ways worse than effect of multi-track search). Not only is two device simulation going to completely lay to waste any ordered seek queuing algorithms (as bad as what happens in a multiple CPU, shared DASD situation) ... but VM is stuck with the design also.

Based on the current record so far ... any investigation into MVS support of FBA is going to be little more than another throw-away task force report w/o any productive results.


... snip ...

for FBA/CKD topic drift ... recent post about heavy performance penalty multi-track CKD extracted long after the memory/io trade-off had changed
http://www.garlic.com/~lynn/2007e.html#14 Cycles per ASM instruction

and lots of past posts on the CKD performance trade-off subject ... using excess I/O capacity in the 60s to compensate for scarce real-storage ... but by the mid-70s, configurations were changing from real-storage constrained to I/O constrained ... making the CKD trade-off totally the wrong thing.
http://www.garlic.com/~lynn/submain.html#dasd

other old email about early difficulty with MVS RAS for 3380 ... which got me into loads of hot water with the MVS RAS manager ... not because of the tests ... but for having sent an email mentioning the results of the tests.
http://www.garlic.com/~lynn/2007.html#2 "The Elements of Programming Style"

when I had originally started playing around in the disk engineering labs ... the "testcells" (i.e. development devices) were being scheduled for stand-alone testing on dedicated processors. They had attempted to run under MVS in an operating system environment ... but experienced 15 min MTBF for MVS (i.e. MVS was crashing or hanging because of errors and/or incorrect operations of the devices under development). I undertook to completely redo the i/o subsystem to make it bullet-proof ... allowing multiple concurrent testcells to be tested "on-demand" ... rather than having to resort to dedicated, stand-alone scheduled testing time.

Is computer history taught now?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is computer history taught now?
Newsgroups: alt.folklore.computers
Date: Fri, 02 Mar 2007 13:26:29 -0700
krw <krw@att.bizzzz> writes:
The most interesting traffic pattern I've seen is the "snake" on Rt. 101 in Silly Valley. The traffic looks like rats being digested by a snake; accelerate to highway speed then slam on the brakes and stop, repeat.

we've seen that going up the hill heading south on hiway 101 ... just south of the 85/101 intersection ... what we've referred to as the accordion effect ... and there are frequent rear-end collisions when/wherever the accordion effect kicks in. past posts mentioning the accordion (brake-light) effect
http://www.garlic.com/~lynn/2004c.html#17 If there had been no MS-DOS
http://www.garlic.com/~lynn/2005p.html#4 What ever happened to Tandem and NonStop OS ?
http://www.garlic.com/~lynn/2005p.html#7 What ever happened to Tandem and NonStop OS ?
http://www.garlic.com/~lynn/2006p.html#5 sorting
http://www.garlic.com/~lynn/2006p.html#12 sorting

also past post about lobby in coyote valley forcing the "new" 101 construction to go from six lanes to four lanes thru coyote valley (and then resume six lanes when it got to cochran) ... later an expensive construction project had to go thru and retrofit six lanes to the coyote valley section
http://www.garlic.com/~lynn/2003i.html#25 TGV in the USA?

FBA rant

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FBA rant
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 02 Mar 2007 15:00:22 -0700
Clark Morris wrote:
So the jackasses will have cost the company far more than the 20 million dollars by their opposition. Does anyone really think that 54 gigabytes per volume is going to be other than totally inadequate in the next ten years? Laptops now have 100 gigabytes and up on a single drive. FBA will require major changes to spool management but we might be able to get away from the one track IPL text. I can see various FBA types such as: ones with file systems with all directory/file name/member name information in Unicode, ones with just z/FS or successor file systems and looking like true Unix volumes, and ones that are structured to be only VSAM/PDSE related volumes. There might be other variants once the bottleneck of CKD is broken. MVS might even be able to recognize a DVD.

re:
http://www.garlic.com/~lynn/2007e.html#33 IBM S/360 series operating systems history

actually I used a similar argument as part of the original justification ... projecting enormous total life-cycle cost savings by moving to FBA ... in addition to a whole variety of performance improvements that would come as part of moving to FBA.

lots of past posts mentioning the whole FBA, CKD, etc period
http://www.garlic.com/~lynn/submain.html#dasd

Quote from comp.object

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Quote from comp.object
Newsgroups: comp.databases.theory
Date: Fri, 02 Mar 2007 14:44:36 -0700
"JOG" <jog@cs.nott.ac.uk> writes:
What exchange was this? Are you referring to informal discussion or groundswell opinion, or is there some form of documentation that I might cite?

re:
http://www.garlic.com/~lynn/2007e.html#31 Quote from comp.object

nope ... just meetings between people in the two groups ... in fact, some amount of consulting by people in SJR to the IMS organization. old email with some reference (Jim wanting me to take his place since he was leaving for Tandem):
http://www.garlic.com/~lynn/2007.html#email801016
and
http://www.garlic.com/~lynn/2007.html#email801006

in this post
http://www.garlic.com/~lynn/2007.html#1 "The Elements of Programming Style"

other reference to the above
http://www.garlic.com/~lynn/2007d.html#17 Jim Gray Is Missing

misc. past posts mentioning System/R, SQL/DS, etc
http://www.garlic.com/~lynn/submain.html#systemr

Quote from comp.object

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Quote from comp.object
Newsgroups: comp.databases.theory
Date: Sat, 03 Mar 2007 08:02:45 -0700
paul c <toledobythesea@oohay.ac> writes:
Don't know about Pick, but that wasn't true of IMS/DC or CICS/DL1.

re:
http://www.garlic.com/~lynn/2007e.html#31 Quote from comp.object
http://www.garlic.com/~lynn/2007e.html#36 Quote from comp.object

IMS started out as much more of a database manager ... CICS started out as a transaction manager/monitor. CICS applications started out talking BDAM (a direct file access method) through CICS. In the BDAM scenario, there could be CICS applications accessing the BDAM data ... and/or there could be independent batch applications directly accessing (the same) BDAM files. CICS applications all ran in the same address space ... so applications could stomp on each other or on the CICS monitor itself.

However, CICS applications and BDAM could be super efficient. Big overhead in OS/360 operating system environment was scheduling/dispatching and file open/close. CICS started out as a big operating system application that acted as (sub)-monitor to provide lightweight flavors of some operating system services. It did its own lightweight dispatching/scheduling of CICS applications. It tended to open all needed (operating system) files at CICS startup ... and then handled lightweight sub-access file control for CICS applications. minor current reference:
http://publib.boulder.ibm.com/infocenter/cicsts/v3r1/topic/com.ibm.cics.ts31.doc/dfha2/dfha25c.htm
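a toy sketch of the sub-monitor idea (nothing like actual CICS internals ... all names here are made up): pay the expensive operating-system work (task creation, file opens) once at startup, then run many lightweight transactions against the already-open files under the monitor's own dispatcher.

```python
# Toy transaction sub-monitor: files are "opened" once at startup and
# transactions are dispatched as plain in-process calls -- no per-
# transaction OS scheduling or file open/close overhead.

class ToyMonitor:
    def __init__(self, file_names):
        # "open all needed files at startup" -- in-memory stand-ins
        # for pre-opened files
        self.files = {name: {} for name in file_names}
        self.queue = []

    def enqueue(self, txn, *args):
        self.queue.append((txn, args))

    def run(self):
        # lightweight dispatching: the monitor's own scheduler,
        # not the operating system's
        while self.queue:
            txn, args = self.queue.pop(0)
            txn(self.files, *args)

def deposit(files, acct, amount):
    # a "transaction": direct access to an already-open file
    files["accounts"][acct] = files["accounts"].get(acct, 0) + amount
```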

The university where I was undergraduate was selected to be one of the beta-test sites for the original CICS product ... and I remember doing some early CICS BDAM debugging. The original CICS had been developed at a customer account before being picked up for a product. It had been originally used with some specific BDAM features. The Univ. had an ONR grant to do some library automation and were attempting to configure CICS with different BDAM options ... which I got involved in debugging. misc. past posts mentioning BDAM, CICS, etc
http://www.garlic.com/~lynn/submain.html#bdam

Today you might find CICS applications still being used to drive millions of ATM machines ... or talk to tens/hundreds of millions of settop (cable tv) boxes.

IMS was/is much more of a DBMS infrastructure ... with separation of (some aspects of) data management. While in original CICS, you could have CICS applications accessing BDAM files ... and totally different batch applications accessing the same BDAM files (although not concurrently) ... in an IMS infrastructure ... everything goes thru the IMS DBMS (where there may be some distinction between "interactive" IMS applications and "batch" IMS applications). However, in addition to CICS applications talking BDAM & VSAM ... CICS applications can talk to IMS; reference CICS talking DL1 to IMS
http://publib.boulder.ibm.com/infocenter/cicsts/v3r1/topic/com.ibm.cics.ts31.doc/dfhp3/dfhp3k4.htm

wiki IMS entry ... with early history
https://en.wikipedia.org/wiki/Information_Management_System

and wiki CICS entry
https://en.wikipedia.org/wiki/CICS
and CICS history site
http://www-306.ibm.com/software/htp/cics/35/

although it doesn't mention the pre-product history and/or the original product beta-test sites.

for totally different drift ... the original relational/sql was system/r, developed on vm370 at SJR ... misc. past posts
http://www.garlic.com/~lynn/submain.html#systemr

vm370 was virtual memory/machine follow-on to cp67 developed at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

some of the CTSS developers had gone to the science center on the 4th flr of tech sq and some other CTSS developers had gone to the Multics project on the 5th flr of tech sq. Multics beat system/r to product release ... recent reference in the post
http://www.garlic.com/~lynn/2007e.html#1 Designing database tables for performance

Another "relational" DBMS product from that period was Nomad ... wiki reference
https://en.wikipedia.org/wiki/Nomad_software

Nomad was developed by NCSS ... a cp67 commercial service bureau spin-off with several people from the science center.
https://en.wikipedia.org/wiki/National_CSS

I had spent much of the 70s at the science center but transferred to SJR in early '77.

misc. past posts mentioning nomad, ramis and focus
http://www.garlic.com/~lynn/2003d.html#15 CA-RAMIS
http://www.garlic.com/~lynn/2003d.html#17 CA-RAMIS

and history site here
http://www.decosta.com/Nomad/tales/history.html

RAMIS was originally from a group of people at Mathematica Products Group and made available on the NCSS system. NCSS then did its own version/flavor as NOMAD. A different flavor, called Focus, was done by one of the Mathematica folks who had gone to Tymshare on the west coast. Tymshare was another commercial time-sharing service using vm370 (the cp67 follow-on) as its basis (and somewhat in competition with NCSS). Lots of past posts mentioning various cp67 &/or vm370 commercial time-sharing services
http://www.garlic.com/~lynn/submain.html#timeshare

FBA rant

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FBA rant
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 03 Mar 2007 09:34:58 -0700
Efinnell15@ibm-main.lst wrote:
Likewise for VM/ESA 3340's, 3370's.

that should be 3310 & 3370s ... 3330, 3340, and 3350 were all CKD. 3340s were removable packs that were totally enclosed, including the arm access mechanism. There was also the 3375, which was basically (hardware) emulation of CKD on a 3370 device.

part of the issue was that cp & cms ... always treated disks as logical FBA ... even going back to cp40 and cp67 implementations in the mid60s on 2311 and 2314 (CKD) ... and didn't really leverage any CKD features.

the one possible exception was an internal modification originally done for the HONE system. HONE was the world-wide vm370 (originally cp67) based online service for marketing, sales, and field people. In the mid-70s the various US HONE datacenters were consolidated in northern cal (and the US HONE system started being cloned in more and more places around the world). I provided a highly customized kernel for HONE operations for a period of 15yrs or so (and some of my first trips outside the US were personally installing HONE clones in other parts of the world ... the first one was when EMEA hdqtrs moved from the US to Paris). misc. past posts mentioning hone
http://www.garlic.com/~lynn/subtopic.html#hone

the consolidation of some of the HONE vm370 datacenters provided opportunity for development of vm370 "single-system-image" support with front-end load-balancing and availability infrastructure directing branch office logon to specific processor. The mechanism utilized a special CKD sequence that performed a logical compare&swap channel program to correctly synchronize logins across all processors in the complex. The compare&swap channel program had originally been developed by the people at the Uithoorn HONE system in Europe ... for coordinating logins across all processors in loosely-coupled complex. I believe that JES2 multi-access spool then started using a similar logical compare&swap channel program for its loosely-coupled operation. This was to avoid the heavy penalty and overhead of doing full device reserve/release sequence.

Later the US hone complex was replicated first in Dallas and then a 3rd in Boulder (and could provide geographic availability to US branch offices logins across all three datacenters).

for other drift ... recent post
http://www.garlic.com/~lynn/2007e.html#16 Attractive Alternatives to Mainframes
mentioning coining disaster survivability and geographic survivability
http://www.garlic.com/~lynn/submain.html#available
when we were doing ha/cmp product
http://www.garlic.com/~lynn/subtopic.html#hacmp

For completely other topic drift ... the 370 compare&swap instruction was originally invented by charlie at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

as part of his work on fine-grain multiprocessing locking for the cp67 kernel (precursor to vm370, developed at the science center). "CAS" was chosen for the instruction mnemonic ... because those are charlie's initials ... and then an instruction name had to be found to go along with his initials. a couple recent posts on the effort to get the compare&swap instruction into the 370 architecture.
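the compare&swap semantic can be sketched as follows ... a minimal python illustration (names are mine, not from any actual implementation) of the instruction's fetch/compare/conditionally-store behavior and the retry loop typically built on it; the lock merely stands in for the 370 hardware interlock, since CAS itself is a single atomic instruction:

```python
import threading

class Word:
    """A storage word with compare&swap semantics. Illustrative
    sketch only: the real 370 CAS is one atomic instruction; the
    lock here stands in for the hardware storage interlock."""

    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def cas(self, old, new):
        """If the word still equals 'old', store 'new' and report
        success; otherwise report failure plus the current value so
        the caller can retry (mirroring the CAS condition code)."""
        with self._lock:
            if self._value == old:
                self._value = new
                return True, new
            return False, self._value

def add_one(word):
    """Typical lock-free update pattern built on CAS: fetch, compute
    the new value, attempt the swap, retry if another processor got
    there first."""
    while True:
        old = word._value
        ok, _ = word.cas(old, old + 1)
        if ok:
            return
```

the whole point of the retry loop is that concurrent updaters never block each other in the failure path ... a failed compare just means "somebody else updated first, recompute and try again".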

Then I also started doing a highly customized kernel for the disk development and product test labs
http://www.garlic.com/~lynn/subtopic.html#disk

related previous posts
http://www.garlic.com/~lynn/2007d.html#48 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007d.html#51 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007d.html#52 CMS (PC Operating Systems)
http://www.garlic.com/~lynn/2007d.html#65 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007d.html#69 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007d.html#72 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007e.html#27 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007e.html#32 I/O in Emulated Mainframes
http://www.garlic.com/~lynn/2007e.html#33 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007e.html#35 FBA rant

for other topic drift:
http://www.garlic.com/~lynn/2001l.html#53 mainframe question
http://www.garlic.com/~lynn/2001l.html#63 MVS History (all parts)
http://www.garlic.com/~lynn/2002o.html#3 PLX
http://www.garlic.com/~lynn/2003b.html#7 Disk drives as commodities. Was Re: Yamhill
http://www.garlic.com/~lynn/2005t.html#50 non ECC
http://www.garlic.com/~lynn/2006c.html#46 Hercules 3.04 announcement
http://www.garlic.com/~lynn/2006s.html#32 Why magnetic drums was/are worse than disks ?
http://www.garlic.com/~lynn/2006v.html#31 MB to Cyl Conversion

I'm trying to fill in rest of these code-names:


             2301       fixed-head/track (2303 but 4 r/w heads at a time)
             2303       fixed-head/track r/w 1-head (1/4th rate of 2301)
Zeus Corinth 2305-1     fixed-head/track
Zeus Athens  2305-2     fixed-head/track
             2311
             2314
MARS file    2321       data-cell "washing machine"
Piccolo      3310       FBA
Merlin       3330-1
Iceberg      3330-11
Winchester   3340-35
             3340-70
             3344       (3350 physical drive simulating multiple 3340s)
Madrid       3350
NFP          3370       FBA
Florence     3375       3370 supporting CKD
Coronado     3380 A04, AA4, B04
EvergreenD   3380 AD4, BD4
EvergreenE   3380 AE4, BE4
             3830       disk controller, horizontal microcode engine
Cybernet     3850       MSS (also Comanche & Oak)
Cutter       3880       disk controller, jib-prime (vertical) mcode engine
Ironwood     3880-11    (4kbyte/page block 8mbyte cache)
Sheriff      3880-13    (full track 8mbyte cache)
Sahara       3880-21    (larger cache for "11")
??           3880-23    (larger cache for "13")

FBA rant

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FBA rant
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 03 Mar 2007 09:54:08 -0700
Efinnell15@ibm-main.lst wrote:
Not enough caffeine...should be VM/XA

vm370 (and CMS) shipped with 3310 and 3370 support when the devices first became available in the 70s.

3310/piccolo was used by the 3081 service processor (running custom programming on a microprocessor) ... including as the "paging device" for the 3081's "paged" microcode ... old post discussing SIE instruction implementation trade-offs between 3081 and 3090 (including that the 3081 would be "paging" part of the microcode)
http://www.garlic.com/~lynn/2006j.html#27 virtual memory

I don't remember for sure whether FBA (3370) was used by the 3090 service processor or not. The 3090 service processor started out being a highly customized version of vm370 release 6 running on a 4331 processor. Before FCS, the 3090 service processor was upgraded to a highly customized version of vm370 release 6 running on a pair of 4361 processors (with service processor menu screens implemented in CMS's IOS3270).

previous posts
http://www.garlic.com/~lynn/2007e.html#35 FBA rant
http://www.garlic.com/~lynn/2007e.html#38 FBA rant

FBA rant

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FBA rant
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 03 Mar 2007 13:16:44 -0700
Efinnell15@ibm-main.lst wrote:
Yeah, we had both on a 4361. I don't remember all the details since it was under the purview of the VM group. Think they went out of their way to avoid 3330's and 3350's. One of the very sharp MVSer's dad worked at Santa Teresa and we usually got excellent support for all geometries. The one exception was the 3380 w/ speed matching buffer. The JES3 folks just wouldn't admit that it existed or even take it into account. So we'd ZAP in support only to find it SUP'd with next batch of PTFs.

the whole e-CKD stuff somewhat came out of trying to get the 3380 speed matching buffer stuff working. 3380/3880 introduced "data streaming" and 3mbyte channels, raising the max channel distance (daisy-chain) from 200ft to 400ft. Previously, bus&tag had a synchronous end-to-end handshake on every byte transferred. "data streaming" relaxed that requirement ... doubling both the typical max. data transfer rate from 1.5mbyte to 3mbyte and the max channel distance from 200ft to 400ft. "speed matching" attempted to retrofit 3880/3380 to 168 and 303x machines with channels running at 1.5mbyte max (w/o 3mbyte data streaming support)

part of the FBA rant was that the significant pain/cost of trying to get e-CKD & speed matching working could have been totally eliminated by directly supporting FBA (at a drastically lower cost/effort).

part of doing operating systems for the disk development and product test labs (bldgs. 14&15) was debugging 3880 and 3380 problems ... lots of past posts
http://www.garlic.com/~lynn/subtopic.html#disk

and in part because I was around trying to figure out what was going on ... compensating in the software for incorrect/error hardware operations ... I would also periodically get pulled into playing hardware engineer. The excuse given for roping me into playing hardware engineer was that so many of the senior engineers had departed (for one reason or another).

old posts mentioning the period:
http://www.garlic.com/~lynn/2002.html#10 index searching
http://www.garlic.com/~lynn/2003e.html#7 cp/67 35th anniversary
http://www.garlic.com/~lynn/2003e.html#9 cp/67 35th anniversary
http://www.garlic.com/~lynn/2003n.html#8 The IBM 5100 and John Titor
http://www.garlic.com/~lynn/2004l.html#14 Xah Lee's Unixism
http://www.garlic.com/~lynn/2006l.html#4 Google Architecture
http://www.garlic.com/~lynn/2006q.html#50 Was FORTRAN buggy?
http://www.garlic.com/~lynn/2007b.html#28 What is "command reject" trying to tell me?
including this old email reference in the above post
http://www.garlic.com/~lynn/2007b.html#email800402

The previous post that started the "FBA rant" topic drift
http://www.garlic.com/~lynn/2007e.html#33 IBM S/360 series operating systems history

had email that had first paragraph on speed-matching buffer edited out
http://www.garlic.com/~lynn/2007e.html#email820907

Date: 09/07/82 12:16:54
From: wheeler

STL cannot even handle what they currently have. Calypso (3880 speed matching buffer using "ECKD") is currently in the field and not working correctly. Several severity one situations. Engineers are on site at some locations trying to solve some hardware problems ... but they have commented that the software support for ECKD appears to be in even worse shape ... didn't even look like it had been tested.

IBM double density (double the number of tracks) are here also. The engineers have been fighting with the OS people (completely unsuccessfully) to support the box in native mode .. i.e. one device with twice the number of cylinders as a 3350. OS data management people would have nothing of it. Several engineers who had MVT experience said that they could go in and do it easily just by defining a new device type and updating a couple of tables (almost as trivial as what it takes for VM). OS data management replied that things have completely changed since then, implying that they might not even know all the routines that may have tables now. Result is that the engineers have been forced into simulating two 3350 drives on a single double density 3350 because the OS crowd is completely incapable of getting their act together. As a result any performance optimization techniques are going to be blown almost completely out of the window (in some ways worse than effect of multi-track search). Not only is two device simulation going to completely lay to waste any ordered seek queuing algorithms (as bad as what happens in a multiple CPU, shared DASD situation) ... but VM is stuck with the design also.

Based on the current record so far ... any investigation into MVS support of FBA is going to be little more than another throw-away task force report w/o any productive results.


... snip ...

IBM S/360 series operating systems history

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM S/360 series operating systems history
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 03 Mar 2007 13:52:00 -0700
Anne & Lynn Wheeler wrote:
the consolidation of some of the HONE vm370 datacenters provided opportunity for development of vm370 "single-system-image" support with front-end load-balancing and availability infrastructure directing branch office logon to specific processor. The mechanism utilized a special CKD sequence that performed a logical compare&swap channel program to correctly synchronize logins across all processors in the complex. The compare&swap channel program had originally been developed by the people at the Uithoorn HONE system in Europe ... for coordinating logins across all processors in loosely-coupled complex. I believe that JES2 multi-access spool then started using a similar logical compare&swap channel program for its loosely-coupled operation. This was to avoid the heavy penalty and overhead of doing full device reserve/release sequence.

Later the US hone complex was replicated first in Dallas and then a 3rd in Boulder (and could provide geographic availability to US branch offices logins across all three datacenters).


to slightly wander back to the original thread ... the US HONE VM/370 complex in the late 70s was possibly the largest single system image operation in the world.

there were some large ACP/TPF single-system-image clusters deployed ... but since ACP/TPF didn't have symmetrical multiprocessor support until much later ... HONE could actually do a larger number of processors. Because a large portion of HONE applications were APL-based, HONE was an extremely computational intensive operation. Multiple "AP" multiprocessors (with only one of the processors in the configuration having I/O channels) could be configured in loosely-coupled operation. You could get eight "AP" multiprocessors in loosely-coupled configuration (16 processors total) with 3830 four-channel switch ... and two-way 3330 string-switch (i.e. each 3330 string connected to two different 3830 controllers).

For additional topic drift, a large body of internal software enhancements never made it into the product. In the early 80s there was a study done of the WATERLOO/SHARE tape and the collection of internal corporate software enhancements. The amount of source code on the WATERLOO/SHARE tape and the internal corporate software enhancements were comparable in size and both estimated to be larger than the amount of source code in the base vm/cms product.

Part of the problem (as I've previously posted) was that in the mid-70s, POK had convinced the corporation to kill the vm370 product and move all the people to POK to support MVS/XA development ... as part of the only way to get the MVS/XA product out the door. Endicott managed to get something of a reprieve and save a fraction of the people from being re-assigned to supporting MVS/XA development. Somewhat as a result, there was a significant bottleneck created getting enhancements for vm/cms out-the-door ... and an extremely large body of software enhancements accumulated for the product ... both at the large number of internal corporate datacenters and also from customer datacenters.

For completely other drift ... my wife had served in the g'burg JES development group before being con'ed into going to POK to be in charge of loosely-coupled architecture. While in POK, she originated peer-coupled shared data architecture ... lots of past posts with references
http://www.garlic.com/~lynn/submain.html#shareddata

... for various reasons ... except for IMS hot-standby ... it saw very little uptake until sysplex (a large reason she didn't stay in the position for long).

However, for further drift ... a little walk down DBMS memory lane from this thread from comp.databases.theory
http://www.garlic.com/~lynn/2007e.html#31 Quote from comp.object
http://www.garlic.com/~lynn/2007e.html#36 Quote from comp.object
http://www.garlic.com/~lynn/2007e.html#37 Quote from comp.object

which, in turn wandered into referencing this wiki page about IMS
https://en.wikipedia.org/wiki/Information_Management_System

mentioning Vern Watts as the granddad of IMS still being around. Vern was one of the people my wife worked with regarding IMS hot-standby.
The Tale of Vern Watts

however, this other old email reference ...
http://www.garlic.com/~lynn/2007.html#email801016

mentioning Jim passing off some number of things to me as part of his leaving for Tandem ... includes mention of joint lunch with IMS developers and telling them to call me after Jim's departure. I can't remember for sure if Vern was at that lunch meeting or not. I have some recollection of a German (Hans something?) ... who later left to work for Amdahl.

FBA rant

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FBA rant
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 03 Mar 2007 14:28:02 -0700
Efinnell15@ibm-main.lst wrote:
Guess the next iteration was the 3380 D/E's and 3880-11's and 13's. Think the 11's made it to production but the 13's never did. Soon after(mid 80's)I left for greener pastures and walked into Amdahl hell with the 6880 ESP...and STK SSDs. Whew, I lived to tell about!

-11 and -13 were 8mbyte 3880 disk controller caches. -11/ironwood was 4kbyte record/page cache and -13/sheriff was full-track cache ... code name table in previous post
http://www.garlic.com/~lynn/2007e.html#38 FBA rant

the -21/-23 later increased the -11/-13 8mbyte cache size to 32mbytes.

i got involved in some of the issues about how to optimize use of disk caches ... what they could and couldn't do. here is a recent post discussing the -13/-23 track cache and how to interpret some of the original claims.
http://www.garlic.com/~lynn/2007e.html#10 A way to speed up level 1 caches

and a couple recent posts discussing/mentioning ironwood
http://www.garlic.com/~lynn/2007c.html#0 old discussion of disk controller cache
http://www.garlic.com/~lynn/2007c.html#12 Special characters in passwords was Re: RACF - Password rules

there is reference to "duplicate" and "no duplicate" strategies ... given the size of the 8mbyte ironwood "page" cache ... it wasn't impossible that, when using ironwood primarily for paging activity, every page located in an ironwood cache (in typical configurations) was also duplicated in real processor memory (i.e. the aggregate size of processor real memories and typical aggregate ironwood caches were comparable). Given that a page was already in real memory ... there would be no reason that the duplicate page in the ironwood cache would ever be used.
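the "duplicate" vs "no duplicate" distinction can be sketched with a toy LRU model ... a hypothetical python illustration (class names, slot counts, and policy details are mine, not the actual 3880 microcode): under "no duplicate" management, a cache slot is freed as soon as its page has been read up into processor memory, so the controller cache only holds pages not already resident in real memory:

```python
from collections import OrderedDict

class ControllerCache:
    """Toy LRU model of a disk-controller page cache, comparing
    "duplicate" vs "no duplicate" management. Illustrative sketch
    only -- not the actual 3880-11 implementation."""

    def __init__(self, slots, no_duplicate=False):
        self.slots = slots
        self.no_duplicate = no_duplicate
        self.lru = OrderedDict()        # page -> None, most recent last
        self.hits = self.misses = 0

    def page_in(self, page):
        """Host reads a page; it will now also be in processor memory."""
        if page in self.lru:
            self.hits += 1
            if self.no_duplicate:
                # host copy makes the cache copy useless -- free the slot
                del self.lru[page]
            else:
                self.lru.move_to_end(page)
            return
        self.misses += 1
        if not self.no_duplicate:
            self._insert(page)          # duplicate strategy caches reads too

    def page_out(self, page):
        """Host writes a page back out; the host copy is being discarded,
        so caching the page is useful under either strategy."""
        self._insert(page)

    def _insert(self, page):
        if page in self.lru:
            self.lru.move_to_end(page)
            return
        if len(self.lru) >= self.slots:
            self.lru.popitem(last=False)   # evict least-recently-used
        self.lru[page] = None
```

with "no duplicate" management, the cache and processor memory hold disjoint sets of pages, so the effective combined two-level capacity is larger ... which is the whole argument against letting every cached page be shadowed by a real-memory copy.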

FBA rant

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FBA rant
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 03 Mar 2007 19:10:50 -0700
Ray Mullins wrote:
I remember earlier 3090 models did have 3370's for the service processor; this was 1987-1988 or thereabouts.

Our IBM FE said that the designers used it because someone said to use up all the 3370s that weren't selling. Not that I believed that 100%...


by 1988, 3370s were coming up on ten-year-old technology ... having been announced in 1979; they would have been at least 2-3 generations old ... and so would have had relatively out-of-date price/bit (so in that sense, 3370s probably weren't still "selling")

1979 3370 announcement reference for 4331, 4341 and system/38
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3370.html

in the 3090, 3370s would have been used by the pair of 4361s running a highly modified version of vm370 release 6 ... that were being used as service processors.

FBA was a significantly simpler device to deal with than all the vagaries of CKD ... so in that sense it would be an ideal availability/reliability device for use by a service processor ... first the 3310 FBA used by the 3081 service processor ... and then the 3370 FBA used by the 3090 service processor. One of the FBA characteristics is that software could obtain profile characteristics from the device ... and use common support regardless of how device characteristics changed over time ... very similar to "SCSI" or many of the other fixed block architectures.
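the addressing difference can be sketched ... an illustrative python comparison (the geometries and function names are hypothetical, purely for illustration): an FBA request names a geometry-independent linear block number, while the equivalent CKD request has to be turned into a cylinder/head/record triple specific to each device type ... which is why every new CKD geometry meant new tables in the OS:

```python
def fba_locate(block):
    """FBA: the host just names a linear block number. The device
    profile (total blocks, block size) is all the software needs,
    regardless of device generation -- much like later SCSI LBAs."""
    return block

def ckd_locate(block, heads_per_cyl, records_per_track):
    """CKD: the same logical block has to become a cylinder/head/record
    triple matching this particular device's geometry. Hypothetical
    fixed-size-record layout for illustration; real CKD allowed
    variable-length records, making the mapping even messier."""
    per_cyl = heads_per_cyl * records_per_track
    cyl = block // per_cyl
    head = (block % per_cyl) // records_per_track
    record = block % records_per_track + 1   # CKD records number from 1
    return cyl, head, record
```

note that fba_locate never changes across device generations, while ckd_locate needs per-device geometry parameters ... the "common support" point above.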

this chronology "50 years of hard drives"
http://www.pcworld.com/article/id,127105/article.html

lists 3370 as first drive to use thin-film heads.

I've posted before about some of the thin-film head work. Part of it was simulating "air-bearing" floating-head characteristics ... originally by disk division people running the simulation on the SJR 370/195 in bldg. 28. One of the problems with that was a significant workload backlog for the 195 ... with turn-around measured in multiple weeks.

One of the benefits of getting operating systems onto the "stand alone" machines in the disk engineering and product test labs (bldg 14 & 15) ... was that the engineers saw a significant increase in productivity, since multiple concurrent tests could be run "on-demand" anytime an engineer needed to ... instead of having to wait for dedicated, stand-alone, scheduled test time. Lots of past posts mentioning doing operating system work in the disk engineering and product test labs
http://www.garlic.com/~lynn/subtopic.html#disk

The other benefit was that even the heaviest device testing tended to place only a couple percent cpu load on the machines. As a result, we had significant amounts of processor time at our disposal ... somewhat to use as we saw fit. The engineering and product test labs also tended to get the latest processors ... so saw some of the earliest 4341s and 3033s. So while the 3033 was only about half the peak thruput of the 195 ... we could move the air-bearing simulation work over to the 3033 in bldg. 15 and give it an almost unlimited amount of processing.

for a little drift ... using search engine looking for other thin-film head references ... I tripped over this (again, copies appear at a number of places on the web)
http://febcm.club.fr/english/information_technology/information_technology_4.htm

which includes the note about 8Feb83: "IBM announces the Object Code Only policy on all its products, including VM" ... i.e. where it had been standard not only for vm370 to ship all source ... but for maintenance also to be shipped as source updates. there is a recent reference here to the waterloo/share tape having a large body of source-level changes for vm370 ... even at the time of the "OCO" announcement
http://www.garlic.com/~lynn/2007e.html#41 IBM S/360 series operating systems history

other posts in this and related threads:
http://www.garlic.com/~lynn/2007d.html#48 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007d.html#51 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007d.html#52 CMS (PC Operating Systems)
http://www.garlic.com/~lynn/2007d.html#65 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007d.html#69 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007d.html#72 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007e.html#27 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007e.html#32 I/O in Emulated Mainframes
http://www.garlic.com/~lynn/2007e.html#33 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007e.html#35 FBA rant
http://www.garlic.com/~lynn/2007e.html#38 FBA rant
http://www.garlic.com/~lynn/2007e.html#39 FBA rant
http://www.garlic.com/~lynn/2007e.html#40 FBA rant
http://www.garlic.com/~lynn/2007e.html#41 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007e.html#42 FBA rant

misc. past posts about the air-bearing simulation for thin-film (floating) heads
http://www.garlic.com/~lynn/2001n.html#39 195 was: Computer Typesetting Was: Movies with source code
http://www.garlic.com/~lynn/2002j.html#30 Weird
http://www.garlic.com/~lynn/2002n.html#63 Help me find pics of a UNIVAC please
http://www.garlic.com/~lynn/2002o.html#74 They Got Mail: Not-So-Fond Farewells
http://www.garlic.com/~lynn/2003b.html#51 Disk drives as commodities. Was Re: Yamhill
http://www.garlic.com/~lynn/2003b.html#52 Disk drives as commodities. Was Re: Yamhill
http://www.garlic.com/~lynn/2003j.html#69 Multics Concepts For the Contemporary Computing World
http://www.garlic.com/~lynn/2003m.html#20 360 Microde Floating Point Fix
http://www.garlic.com/~lynn/2003n.html#45 hung/zombie users ... long boring, wandering story
http://www.garlic.com/~lynn/2004.html#21 40th anniversary of IBM System/360 on 7 Apr 2004
http://www.garlic.com/~lynn/2004b.html#15 harddisk in space
http://www.garlic.com/~lynn/2004o.html#15 360 longevity, was RISCs too close to hardware?
http://www.garlic.com/~lynn/2004o.html#25 CKD Disks?
http://www.garlic.com/~lynn/2005.html#8 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2005f.html#4 System/360; Hardwired vs. Microcoded
http://www.garlic.com/~lynn/2005f.html#5 System/360; Hardwired vs. Microcoded
http://www.garlic.com/~lynn/2005o.html#44 Intel engineer discusses their dual-core design
http://www.garlic.com/~lynn/2006.html#29 IBM microwave application--early data communications
http://www.garlic.com/~lynn/2006c.html#6 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006d.html#0 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006d.html#13 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006d.html#14 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006l.html#6 Google Architecture
http://www.garlic.com/~lynn/2006l.html#18 virtual memory
http://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?
http://www.garlic.com/~lynn/2006t.html#41 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006u.html#18 Why so little parallelism?
http://www.garlic.com/~lynn/2006x.html#27 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006x.html#31 The Future of CPUs: What's After Multi-Core?

Is computer history taught now?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is computer history taught now?
Newsgroups: alt.folklore.computers
Date: Sun, 04 Mar 2007 02:38:24 -0700
Bernd Felsche <bernie@innovative.iinet.net.au> writes:
People who use CFD (or other numerical methods) for real, tangible products understand its limitations.

Others seem to ernestly believe that they understand how the universe works and that they can model the real world with a few simple equations and a very, very large computer.


recent post referring to air bearing simulation for floating/thinfilm heads
http://www.garlic.com/~lynn/2007e.html#43 FBA rant

time spent/day on a computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: time spent/day on a computer
Newsgroups: alt.folklore.computers
Date: Sun, 04 Mar 2007 03:22:02 -0700
Louis Krupp <lkrupp@pssw.nospam.com.invalid> writes:
I've worked in places where management would rather see me sitting around doing nothing than catch me doing something I hadn't been specifically told to do. Sometimes I wonder if it isn't a deliberate tactic: demoralize people so they're not working at their full potential and their continued employment depends more on the whim of the institution than on their own accomplishments. It's about control.

Past posts mentioning Boyd stories about issues with corporate america stemming from world war 2 ... where emerging corporate executives had gotten their training running organizations as young officers.
http://www.garlic.com/~lynn/2001d.html#45 A beautiful morning in AFM.
http://www.garlic.com/~lynn/2001m.html#16 mainframe question
http://www.garlic.com/~lynn/2002q.html#33 Star Trek: TNG reference
http://www.garlic.com/~lynn/2004l.html#11 I am an ageing techy, expert on everything. Let me explain the
http://www.garlic.com/~lynn/2004l.html#34 I am an ageing techy, expert on everything. Let me explain the
http://www.garlic.com/~lynn/2004q.html#86 Organizations with two or more Managers
http://www.garlic.com/~lynn/2005e.html#3 Computerworld Article: Dress for Success?
http://www.garlic.com/~lynn/2006f.html#14 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#17 The Pankian Metaphor

the issue was that the US had to rapidly deploy a large number of people with very little training and experience. the solution was to create a very rigid, top-down command and control system to leverage the very few people with experience. Boyd contrasted this with the German army, which had a large percentage of experienced professional soldiers. One number that Boyd used was that the German army was composed of three percent officers ... compared to something like twelve percent for the US army (a large officer corps needed to implement the rigid, top-down command and control system)

the strategy was then to prevail with large overwhelming forces and resources ... a strategy involving a rigid, top-down command and control system ... very much focused on logistics, managing and deploying (tightly controlled) massively superior (inexperienced) forces and resources.

other past posts mentioning Boyd
http://www.garlic.com/~lynn/subboyd.html#boyd
other URLs from around the web mentioning Boyd
http://www.garlic.com/~lynn/subboyd.html#boyd2

FBA rant

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FBA rant
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 04 Mar 2007 10:25:29 -0700
Steve Myers wrote:
The problem is EXCP access to the VTOC. There are a lot of programs that do that.

so there is a conversion period with some trailing support for old disks for an extended period of time ... which would have been pretty much over by now if started back in the early 80s.

also, software simulation could be provided for CKD channel programs that refuse to die ... basically as a side-effect of virtual-to-real channel program translation ... by whatever descendant of CP67's CCWTRANS is currently running in MVS. recent posts mention that the initial MVT->VS2 conversion was done by hacking CP67's CCWTRANS into the side of the MVT kernel
http://www.garlic.com/~lynn/2007e.html#27 IBM S/360 series operating systems history

For virtual machine support, CP67's CCWTRANS had to copy the virtual machine's channel program to a "shadow" channel program that references real addresses. The same thing applies to channel programs from an application's virtual address space (which are loaded with virtual addresses): a real version of the original (virtual) channel program needs to be created for "real" execution.

While the original CP67 CCWTRANS did (nearly) a one-for-one conversion of each virtual CCW to a shadow/real CCW ... there were various enhancements over the years (by both internal and customer accounts) that implemented full software simulation for special/custom devices. Such implementations could be moved into the MVS kernel in a manner similar to the way that CP67's CCWTRANS was initially moved into VS2.
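a minimal sketch of the one-for-one translation idea (all names and structures here are invented for illustration, not actual CP67 or MVS code): walk the virtual channel program, map each CCW's data address through a page table, and emit a shadow CCW carrying the real address:

```python
# Illustrative-only sketch in the spirit of CP67's CCWTRANS: each
# virtual CCW's data address is mapped through a page table to a real
# address before the channel ever sees the program. Names and tuple
# layout are hypothetical.

PAGE = 4096

def translate_ccws(virtual_ccws, page_table):
    """virtual_ccws: list of (opcode, virtual_addr, byte_count) tuples.
    page_table: dict mapping virtual page number -> real page number.
    Returns the "shadow" channel program with real data addresses."""
    shadow = []
    for opcode, vaddr, count in virtual_ccws:
        vpage, offset = divmod(vaddr, PAGE)
        # the page must be fixed in real storage for the duration of the I/O
        rpage = page_table[vpage]
        shadow.append((opcode, rpage * PAGE + offset, count))
    return shadow
```

the real translator additionally had to split any CCW whose data area crossed a page boundary (contiguous in virtual storage, but possibly discontiguous in real) into a data-chained pair; that complication is omitted here.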

The strict exception to the above is any system application that runs channel programs in "virtual equal real" mode, where the requirement for translation & shadow channel programs is eliminated ... however, these should be a much smaller, enumerated set of applications in need of conversion.

Another example of such a body of work is the various emulators that allow current generation 360-genre operating systems to run on various other processor architectures ... like Intel and other machines. These include capability similar to the original CP67 channel program translation ... and I believe most of them also implement CKD emulation on fixed-block real devices. In a manner similar to the migration of CP67's CCWTRANS channel program translator into the VS2 kernel ... a somewhat analogous migration of the various software CKD channel program emulations into the MVS kernel could be performed.

The other body of code providing similar function is all the hardware controller functions emulating CKD operations on devices that are really fixed-block architecture of one form or another. Some flavor of this could be provided as part of the conversion transition.
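to illustrate the kind of mapping such a controller (or an in-kernel software emulation) has to perform, here is a toy sketch reducing a CKD (cylinder, head, record) address to a linear block number on an underlying fixed-block device; the geometry constants are made up for the example, and it assumes fixed-size records (real CKD's variable-length records per track are what make actual emulation much messier):

```python
# Hypothetical geometry for the example -- not any real device's numbers.
HEADS_PER_CYL = 15
RECORDS_PER_TRACK = 12   # assumes fixed-size records, the simple case
BLOCKS_PER_RECORD = 1

def ckd_to_fba(cyl, head, record):
    """Map a CKD cylinder/head/record address to a linear block number.
    Data records on a CKD track are numbered from 1 (R0 is the track
    descriptor record), hence the (record - 1)."""
    track = cyl * HEADS_PER_CYL + head
    return (track * RECORDS_PER_TRACK + (record - 1)) * BLOCKS_PER_RECORD
```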

one of my early contentions was that this would have been much simpler and easier, and have had much longer-term benefits, than the iteration involving adding support for ECKD. The need for any of the ECKD-type stuff would have mostly evaporated in a transition from CKD to a fixed-block architecture environment.

past posts in this and related threads:
http://www.garlic.com/~lynn/2007d.html#48 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007d.html#51 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007d.html#65 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007d.html#69 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007d.html#72 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007e.html#27 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007e.html#33 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007e.html#35 FBA rant
http://www.garlic.com/~lynn/2007e.html#38 FBA rant
http://www.garlic.com/~lynn/2007e.html#39 FBA rant
http://www.garlic.com/~lynn/2007e.html#40 FBA rant
http://www.garlic.com/~lynn/2007e.html#41 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007e.html#42 FBA rant
http://www.garlic.com/~lynn/2007e.html#43 FBA rant

time spent/day on a computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: time spent/day on a computer
Newsgroups: alt.folklore.computers
Date: Sun, 04 Mar 2007 15:10:43 -0700
jmfbahciv writes:
Why is this a criterion? Forcing your most productive people (these are not prima donnas) to "get along" with the stupid is the stupidest thing to do. It is a waste of these productive people's time to spend hours wiping the butts of the unproductive.

at some point in the past there was an article about managers typically spending the majority of their time attempting to make their least productive employees more productive (helping the ones that need it the most).

the observation was that employee productivity is frequently quite skewed, with a few people responsible for most of the productivity and a few people responsible for very little.

then there is a choice for a manager whose time spent aiding employees makes them 50 percent more productive ... should that time be spent on the employees responsible for 10 percent of the productivity (increasing overall productivity by 5 percent) ... or on the employees responsible for 90 percent of the productivity (increasing overall productivity by 45 percent)?

the conclusion is that the natural inclination has been to spend most of the time on the least productive people ... which results in the least overall organizational benefit.
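the arithmetic is trivial to check; a one-line sketch:

```python
# If a manager's attention makes a group of employees 50 percent more
# productive, the overall organizational gain is the group's share of
# total productivity times that improvement.

def overall_gain(group_share, improvement=0.50):
    """group_share: fraction of total productivity the group produces."""
    return group_share * improvement

# spending the time on the 10-percent group yields a 5 percent overall
# gain; spending it on the 90-percent group yields 45 percent.
```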

time spent/day on a computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: time spent/day on a computer
Newsgroups: alt.folklore.computers
Date: Sun, 04 Mar 2007 20:02:51 -0700
Andrew Swallow <am.swallow@btopenworld.com> writes:
Yes-men survive by boot licking the BOSS. Innovators are dangerous they may come up with something the BOSS does not like. Boot licker does not want the blame.

or how an Amdahl install and bucking the system cost me a career (or having been told there was no possibility of a career, then there was little reason not to buck the system, all far away and decades ago)

In the mid-'81 time-frame, the san jose merc news ran a series on pay scales in silicon valley, with some reference that if a person had been with the same employer for more than ten years, they were (likely) being paid less than somebody who had changed employers 2-3 times.

I wrote a letter to management claiming that the situation especially applied to me, including some number of references (along with the SJMN articles).

About a month later, I got a written response back from the local HR head, saying that they had done a complete review of my whole employment history and the only conclusion was that I was making exactly what I should be making.

I then took my original letter, the HR response and wrote a new cover letter.

This time I explained that two months earlier I had been told that management had decided to form a new group that would work under my direction, and HR had me interviewing new hires ... and I found out that HR was making offers to those new hires (whom I had been interviewing to work under my direction) that were 30 percent higher than what I was currently being paid (so, not only less than somebody that had changed employers 2-3 times, but also less than new hires).

I never got a response to that followup letter, but a month or so later I started getting a series of raises that over a three month period brought me up to the starting salary level being offered new hires that i had been interviewing.

This was somewhat behind my earlier reference to being told that the best we could possibly hope for was to not be fired and allowed to do it again ... earlier reference here
http://www.garlic.com/~lynn/2007.html#22

In that respect, I frequently identified with Boyd's comment about the Air Force ...
http://www.garlic.com/~lynn/2007.html#20

and more recent reference to Boyd
http://www.garlic.com/~lynn/2007e.html#45

or the reference to having been told that they could have forgiven me for being wrong, but they were never going to forgive me for being right.
http://www.garlic.com/~lynn/2007.html#26

and the comments here
http://www.garlic.com/~lynn/2007b.html#29

in later years, my wife would really annoy top executives by reminding them that I had never been wrong ... there may have been things that I had been wrong about ... but it wasn't anything that had ever been brought to their attention; unless you consider embarrassing/offending executives as "being wrong" ... or getting blamed for "Tandem Memos"
http://www.garlic.com/~lynn/2007d.html#17

for more Boyd drift ... past posts mentioning John Boyd
http://www.garlic.com/~lynn/subboyd.html#boyd
and various URL pages from around the web mentioning John Boyd
http://www.garlic.com/~lynn/subboyd.html#boyd2

recent mention of offending the MVS RAS manager in a number of ways
http://www.garlic.com/~lynn/2007.html#2 "The Elements of Programming Style"
http://www.garlic.com/~lynn/2007e.html#33 IBM S/360 series operating systems history

previous mention of not taking the fall for somebody in a branch office who was good buddies with the CEO (or at least I was told that if I didn't do this for the CEO's good buddy, I could forget any thots of a career, promotions, raises, etc).
http://www.garlic.com/~lynn/2000c.html#44 WHAT IS A MAINFRAME???
http://www.garlic.com/~lynn/2006b.html#31 Seeking Info on XDS Sigma 7 APL

the branch guy had apparently gravely offended the customer and (as a result) the customer was going to install the first red box in a commercial account (it wasn't the first red box install, but it would be the first at a large, premier, true blue commercial customer, putting a red box into a machine room that had a vast sea of blue boxes). I was asked to go on-site at the customer for six months, essentially to obfuscate the issues, making it appear like there might actually be technical issues involved with the installation (as opposed to the customer trying to teach the company a lesson because of something the branch guy had said or done). I was fairly well acquainted with the people at the customer as well as the other people in that branch ... and got the same background story from both (nothing was going to stop the customer installing the red box, and it definitely had nothing at all to do with technical issues)

my spending six months at the account would have obfuscated the installation issues (in the eyes of the rest of the corporation), suggesting there might have been technical considerations involved ... making the eventual (red box) installation appear as if it was a failure on my part and mitigating any corporate black marks given the branch guy (who was actually the cause, having so gravely offended the customer).

for a little other drift ... misc. past posts mentioning a talk at MIT ... where the speaker was asked what business arguments he used to justify funding for a company making clone processors ... and he had a reply that could be construed as referring to the future system project ... i.e. something about even if IBM were to totally walk away from 360/370, there were enough customer applications to keep clone processors in business until at least the end of the century.
http://www.garlic.com/~lynn/2004l.html#51 Specifying all biz rules in relational data
http://www.garlic.com/~lynn/2005r.html#49 MVCIN instruction
http://www.garlic.com/~lynn/2006.html#7 EREP , sense ... manual
http://www.garlic.com/~lynn/2006w.html#34 Top versus bottom posting was Re: IBM sues maker of Intel-based Mainframe clones

the justification for future system was supposedly because of the clone controller/device business ... something i got (at least partially) blamed for with a project building a clone controller in the 60s as an undergraduate ... misc. past posts
http://www.garlic.com/~lynn/subtopic.html#360pcm

misc. past posts mentioning future system project
http://www.garlic.com/~lynn/submain.html#futuresys

some posts with specific quotes/comments about the days of future system
http://www.garlic.com/~lynn/2000f.html#16 [OT] FS - IBM Future System
http://www.garlic.com/~lynn/2003l.html#30 Secure OS Thoughts
http://www.garlic.com/~lynn/2003p.html#25 Mainframe Training
http://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
http://www.garlic.com/~lynn/2005p.html#15 DUMP Datasets and SMS
http://www.garlic.com/~lynn/2005s.html#16 Is a Hurricane about to hit IBM ?
http://www.garlic.com/~lynn/2006.html#7 EREP , sense ... manual
http://www.garlic.com/~lynn/2006d.html#15 Hercules 3.04 announcement
http://www.garlic.com/~lynn/2006p.html#50 what's the difference between LF(Line Fee) and NL (New line) ?
http://www.garlic.com/~lynn/2006r.html#36 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006w.html#2 IBM sues maker of Intel-based Mainframe clones

one of my (other) early career enhancing moves (!, bucking the system) was to pan the future system project as having something in common with a cult film that had a long run down in central sq ... and mentioning something about "inmates in charge of the institution" ... and continuing to work on 360/370 legacy stuff.

then when FS was finally canceled, there was a mad frenzy trying to revive various kinds of relatively dormant 360/370 efforts (resources and strategy having been diverted to FS).

past posts mentioning various career enhancing (not) activities:
http://www.garlic.com/~lynn/2005j.html#32 IBM Plugs Big Iron to the College Crowd
http://www.garlic.com/~lynn/2006l.html#17 virtual memory
http://www.garlic.com/~lynn/2006o.html#51 The Fate of VM - was: Re: Baby MVS???
http://www.garlic.com/~lynn/2006r.html#20 50th Anniversary of invention of disk drives

Is computer history taught now?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is computer history taught now?
Newsgroups: alt.folklore.computers
Date: Mon, 05 Mar 2007 09:08:06 -0700
jmfbahciv writes:
Oh, but economic effects are delayed so that the finger pointing game is successful.

or at least the effects are obfuscated for some period ... one of the comptroller's proposals has been that when legislation is claimed to have some specific benefit ... measurement should be required to establish whether the objectives are actually met.

recent post
http://www.garlic.com/~lynn/2007e.html#30 Health Care

comptroller from last night
http://www.cbsnews.com/stories/2007/03/01/60minutes/main2528226.shtml
http://www.cbsnews.com/stories/2007/03/01/60minutes/main2528226_page2.shtml
http://www.cbsnews.com/stories/2007/03/01/60minutes/main2528226_page3.shtml

Is computer history taught now?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is computer history taught now?
Newsgroups: alt.folklore.computers
Date: Mon, 05 Mar 2007 09:40:30 -0700
krw <krw@att.bizzzz> writes:
Be careful what you measure. You'll likely get it.

re:
http://www.garlic.com/~lynn/2007e.html#49 Is computer history taught now?

records and measurement are usually about the only way that you know whether something is actually doing something ... or at least doing what it is supposed to ... as opposed to possibly the exact opposite.

possibly my viewpoint comes from having done dynamic adaptive resource management as an undergraduate in the 60s. a lot of the stuff in that era was all sorts of algorithms and academic literature ... w/o corresponding instrumentation that validated whether theory matched practice. information from instrumentation could actually be used as part of the dynamic adaptive resource management to improve operation. A simple analogy is the insurance industry, which collects statistics on the severity of automobile accidents and uses them to identify safer cars ... but the statistics can also be used to investigate why some cars are safer ... and that information used to improve the safety of all cars.
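a toy illustration of feeding measurement back into the resource manager (the policy here is invented for the example, loosely in the spirit of consumption-based scheduling): dispatch priority is derived from measured consumption relative to fair share, rather than trusting the algorithm on faith:

```python
# Hypothetical sketch: instrumentation records each user's actual
# resource consumption over a recent interval; the scheduler turns
# that measurement into a dispatch "deadline" -- users measured below
# their fair share get earlier deadlines (better priority), users
# above it get later ones.

def deadline_priority(consumed, fair_share, interval=1.0):
    """consumed: measured resource use over the interval.
    fair_share: the user's target share over the same interval.
    Smaller return value means sooner dispatch."""
    return interval * (consumed / fair_share)
```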

the transition can sometimes encounter a 3-monkey response (see no evil, hear no evil, speak no evil ... or should it be an ostrich analogy?). In Jim's MIP Envy
http://www.garlic.com/~lynn/2007d.html#17 Jim Gray Is Missing

he mentioned STL attempting to improve productivity on grossly overloaded systems by limiting/controlling access to the system.
http://www.garlic.com/~lynn/2007d.html#email800920

I mentioned here that the restrictions had largely been removed when STL had installed my "group fairshare" extensions to my dynamic adaptive resource management
http://www.garlic.com/~lynn/2007c.html#email830709
in this post
http://www.garlic.com/~lynn/2007c.html#12

Initially, when I offered the "group fairshare" enhancements to the STL datacenter, they didn't want to be involved. The STL datacenter people were constantly being battered by lots of different competing organizations. With the standard fairshare policy ... resources were doled out w/o regard to organization affiliation, and the datacenter organization could disclaim all responsibility for how aggregate resource allocation was done across the competing organizations. Introduction of "group fairshare" would just give the datacenter organization one more conflict-resolution chore in dealing with the competing organizations using its facilities. It took quite a bit of lobbying to convince them that the benefits of "group fairshare" would far outweigh the difficulty of resolving whatever organizational political squabbling might crop up (in establishing specific group resource utilization goals for the different organizations).
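a minimal sketch (names and policy invented for illustration) of the two-level idea: the resource is first divided among groups according to per-group targets, then among each group's users:

```python
# Hypothetical "group fairshare" allocator: each competing organization
# gets a negotiated target fraction of the machine; within a group,
# this toy version simply splits the group's allotment evenly among
# its users. A real scheduler would apply fair-share policy at the
# second level too.

def group_fairshare(total, groups):
    """groups: dict of group name -> (target_fraction, [user names]).
    Returns dict mapping user -> allocation."""
    alloc = {}
    for _, (target, users) in groups.items():
        per_user = total * target / len(users)
        for user in users:
            alloc[user] = per_user
    return alloc
```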

misc. posts mentioning dynamic adaptive resource management
http://www.garlic.com/~lynn/subtopic.html#fairshare

FBA rant

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FBA rant
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 05 Mar 2007 10:09:51 -0700
Andreas F. Geissbuehler wrote:
The venerable IBM 2321 A.K.A "the strip picker", the one responsible for the mbb in "mbbcchhr" -- did CP/67 or VM ever support the 2321 ?

at the univ. where i was an undergraduate ... and doing a lot of enhancements to MFT and then MVT (a lot of it associated with getting the typical univ. workload running three times faster than what you would get with a vanilla os360 sysgen) ...

then the univ. was selected to be the first early install site for cp67 ... three people from the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

coming out the last week of jan68 to install cp67. I then got to do a lot of performance enhancements to cp67 ... especially for running MFT and then MVT in a virtual machine ... as well as fixing any bugs in cp67 related to running os/360 in a virtual machine. As before, some posts mentioning a presentation that I gave at the fall68 share meeting in Atlantic City on os/360 performance enhancements, cp67 performance enhancements, and enhancements for running os/360 in virtual machines
http://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
http://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14

now one of the other things was that the univ. was also selected to be one of the original beta-test sites for the first CICS product release. it was being used for a project the univ. had, related to an ONR grant to the univ. library for library automation. recent post mentioning having to shoot some CICS bugs as part of that effort (the original CICS had been developed at a customer account and appeared to have used a specific set of BDAM options; the library automation project was using a different set of BDAM options, and some of the bugs were related to CICS dataset OPEN with other than the originally used BDAM options)
http://www.garlic.com/~lynn/2007e.html#37 Quote from comp.object

part of the library automation project was a 2321 datacell ... so i had to make sure it ran with MVT/cics/bdam ... as well as running under cp67.

lots of past posts happening to mention (early) CICS &/or bdam
http://www.garlic.com/~lynn/submain.html#bdam

US Air computers delay psgrs

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: US Air computers delay psgrs
Newsgroups: alt.folklore.computers
Date: Mon, 05 Mar 2007 12:27:21 -0700
hancock4 writes:
Computerized airline systems were put in 45 years ago. You'd think by now they'd get it right. You'd also think by now they'd know how to transition from one system to another in an orderly fashion.

Note that when the company upgrades a central office, the change is seamless to callers.


in some cases there are enormous pressures to migrate off old, reliable mainframe operations to newer generation technologies; in many cases the people behind the newer generation technologies have absolutely no familiarity with business critical dataprocessing. a simple example is the large number of failed fed. gov. dataprocessing modernization projects that frequently ring in at around $1b/failure ... or more ... including several involving ATC over a period of a decade or more.

misc. past posts mentioning FAA ATC
http://www.garlic.com/~lynn/99.html#102 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
http://www.garlic.com/~lynn/99.html#103 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
http://www.garlic.com/~lynn/99.html#108 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
http://www.garlic.com/~lynn/2001h.html#15 IBM 9020 FAA/ATC Systems from 1960's
http://www.garlic.com/~lynn/2001h.html#17 IBM 9020 FAA/ATC Systems from 1960's
http://www.garlic.com/~lynn/2001h.html#71 IBM 9020 FAA/ATC Systems from 1960's
http://www.garlic.com/~lynn/2001i.html#2 Most complex instructions (was Re: IBM 9020 FAA/ATC Systems from 1960's)
http://www.garlic.com/~lynn/2001i.html#3 Most complex instructions (was Re: IBM 9020 FAA/ATC Systems from 1960's)
http://www.garlic.com/~lynn/2001i.html#14 IBM 9020 FAA/ATC Systems from 1960's
http://www.garlic.com/~lynn/2001i.html#15 IBM 9020 FAA/ATC Systems from 1960's
http://www.garlic.com/~lynn/2001n.html#50 The demise of compaq
http://www.garlic.com/~lynn/2003m.html#4 IBM Manuals from the 1940's and 1950's
http://www.garlic.com/~lynn/2004.html#7 Dyadic
http://www.garlic.com/~lynn/2004e.html#5 A POX on you, Dennis Ritchie!!!
http://www.garlic.com/~lynn/2004l.html#42 Acient FAA computers???
http://www.garlic.com/~lynn/2004l.html#49 "Perfect" or "Provable" security both crypto and non-crypto?
http://www.garlic.com/~lynn/2005c.html#17 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2005m.html#9 IBM's mini computers--lack thereof
http://www.garlic.com/~lynn/2005o.html#24 is a computer like an airport?
http://www.garlic.com/~lynn/2006o.html#9 Pa Tpk spends $30 million for "Duet" system; but benefits are unknown
http://www.garlic.com/~lynn/2006u.html#24 When did computers start being EVIL???
http://www.garlic.com/~lynn/2007d.html#52 CMS (PC Operating Systems)

misc. past posts mentioning airline res/operation systems; ACP/TPF/Amadeus/etc
http://www.garlic.com/~lynn/96.html#29 Mainframes & Unix
http://www.garlic.com/~lynn/99.html#100 Why won't the AS/400 die? Or, It's 1999 why do I have to learn how to use
http://www.garlic.com/~lynn/99.html#136a checks (was S/390 on PowerPC?)
http://www.garlic.com/~lynn/99.html#152 Uptime (was Re: Q: S/390 on PowerPC?)
http://www.garlic.com/~lynn/2000b.html#20 How many Megaflops and when?
http://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
http://www.garlic.com/~lynn/2000b.html#65 oddly portable machines
http://www.garlic.com/~lynn/2000c.html#60 Disincentives for MVS & future of MVS systems programmers
http://www.garlic.com/~lynn/2000e.html#21 Competitors to SABRE? Big Iron
http://www.garlic.com/~lynn/2000e.html#22 Is a VAX a mainframe?
http://www.garlic.com/~lynn/2000f.html#20 Competitors to SABRE?
http://www.garlic.com/~lynn/2001.html#28 Competitors to SABRE?
http://www.garlic.com/~lynn/2001.html#32 Competitors to SABRE?
http://www.garlic.com/~lynn/2001.html#34 Competitors to SABRE?
http://www.garlic.com/~lynn/2001.html#37 Competitors to SABRE?
http://www.garlic.com/~lynn/2001.html#38 Competitors to SABRE?
http://www.garlic.com/~lynn/2001.html#48 Competitors to SABRE?
http://www.garlic.com/~lynn/2001.html#51 Competitors to SABRE?
http://www.garlic.com/~lynn/2001.html#58 Disk drive behavior
http://www.garlic.com/~lynn/2001b.html#37 John Mashey's greatest hits
http://www.garlic.com/~lynn/2001d.html#54 VM & VSE news
http://www.garlic.com/~lynn/2001d.html#69 Block oriented I/O over IP
http://www.garlic.com/~lynn/2001e.html#2 Block oriented I/O over IP
http://www.garlic.com/~lynn/2001g.html#24 XML: No More CICS?
http://www.garlic.com/~lynn/2001g.html#35 Did AT&T offer Unix to Digital Equipment in the 70s?
http://www.garlic.com/~lynn/2001g.html#45 Did AT&T offer Unix to Digital Equipment in the 70s?
http://www.garlic.com/~lynn/2001g.html#46 The Alpha/IA64 Hybrid
http://www.garlic.com/~lynn/2001g.html#47 The Alpha/IA64 Hybrid
http://www.garlic.com/~lynn/2001g.html#49 Did AT&T offer Unix to Digital Equipment in the 70s?
http://www.garlic.com/~lynn/2001g.html#50 Did AT&T offer Unix to Digital Equipment in the 70s?
http://www.garlic.com/~lynn/2001h.html#76 Other oddball IBM System 360's ?
http://www.garlic.com/~lynn/2001i.html#37 IBM OS Timeline?
http://www.garlic.com/~lynn/2001i.html#38 IBM OS Timeline?
http://www.garlic.com/~lynn/2001i.html#52 misc loosely-coupled, sysplex, cluster, supercomputer, & electronic commerce
http://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
http://www.garlic.com/~lynn/2001j.html#21 Parity - why even or odd (was Re: Load Locked
http://www.garlic.com/~lynn/2001j.html#23 OT - Internet Explorer V6.0
http://www.garlic.com/~lynn/2001j.html#27 Pentium 4 SMT "Hyperthreading"
http://www.garlic.com/~lynn/2001k.html#17 HP-UX will not be ported to Alpha (no surprise)exit
http://www.garlic.com/~lynn/2001k.html#51 Is anybody out there still writting BAL 370.
http://www.garlic.com/~lynn/2001l.html#41 mainframe question
http://www.garlic.com/~lynn/2001n.html#0 TSS/360
http://www.garlic.com/~lynn/2001n.html#26 Open Architectures ?
http://www.garlic.com/~lynn/2001n.html#39 195 was: Computer Typesetting Was: Movies with source code
http://www.garlic.com/~lynn/2001n.html#62 The demise of compaq
http://www.garlic.com/~lynn/2002.html#16 index searching
http://www.garlic.com/~lynn/2002.html#52 Microcode?
http://www.garlic.com/~lynn/2002b.html#37 Poor Man's clustering idea
http://www.garlic.com/~lynn/2002c.html#9 IBM Doesn't Make Small MP's Anymore
http://www.garlic.com/~lynn/2002d.html#5 IBM Mainframe at home
http://www.garlic.com/~lynn/2002e.html#75 Computers in Science Fiction
http://www.garlic.com/~lynn/2002f.html#3 Increased Paging in 64-bit
http://www.garlic.com/~lynn/2002f.html#60 Mainframes and "mini-computers"
http://www.garlic.com/~lynn/2002g.html#2 Computers in Science Fiction
http://www.garlic.com/~lynn/2002g.html#3 Why are Mainframe Computers really still in use at all?
http://www.garlic.com/~lynn/2002g.html#14 "Soul of a New Machine" Computer?
http://www.garlic.com/~lynn/2002h.html#43 IBM doing anything for 50th Anniv?
http://www.garlic.com/~lynn/2002i.html#26 : Re: AS/400 and MVS - clarification please
http://www.garlic.com/~lynn/2002i.html#38 CDC6600 - just how powerful a machine was it?
http://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?
http://www.garlic.com/~lynn/2002i.html#83 HONE
http://www.garlic.com/~lynn/2002j.html#28 ibm history note from vmshare
http://www.garlic.com/~lynn/2002k.html#8 Avoiding JCL Space Abends
http://www.garlic.com/~lynn/2002l.html#39 Moore law
http://www.garlic.com/~lynn/2002n.html#29 why does wait state exist?
http://www.garlic.com/~lynn/2002o.html#28 TPF
http://www.garlic.com/~lynn/2002p.html#50 Cirtificate Authorities 'CAs', how curruptable are they to
http://www.garlic.com/~lynn/2002p.html#58 AMP vs SMP
http://www.garlic.com/~lynn/2003.html#48 InfiniBand Group Sharply, Evenly Divided
http://www.garlic.com/~lynn/2003.html#56 Wild hardware idea
http://www.garlic.com/~lynn/2003c.html#30 diffence between itanium and alpha
http://www.garlic.com/~lynn/2003d.html#67 unix
http://www.garlic.com/~lynn/2003e.html#17 unix
http://www.garlic.com/~lynn/2003g.html#30 One Processor is bad?
http://www.garlic.com/~lynn/2003g.html#32 One Processor is bad?
http://www.garlic.com/~lynn/2003g.html#37 Lisp Machines
http://www.garlic.com/~lynn/2003j.html#2 Fix the shuttle or fly it unmanned
http://www.garlic.com/~lynn/2003k.html#36 What is timesharing, anyway?
http://www.garlic.com/~lynn/2003m.html#4 IBM Manuals from the 1940's and 1950's
http://www.garlic.com/~lynn/2003m.html#42 S/360 undocumented instructions?
http://www.garlic.com/~lynn/2003n.html#47 What makes a mainframe a mainframe?
http://www.garlic.com/~lynn/2003p.html#45 Saturation Design Point
http://www.garlic.com/~lynn/2004.html#21 40th anniversary of IBM System/360 on 7 Apr 2004
http://www.garlic.com/~lynn/2004.html#24 40th anniversary of IBM System/360 on 7 Apr 2004
http://www.garlic.com/~lynn/2004.html#49 Mainframe not a good architecture for interactive workloads
http://www.garlic.com/~lynn/2004.html#50 Mainframe not a good architecture for interactive workloads
http://www.garlic.com/~lynn/2004b.html#6 Mainframe not a good architecture for interactive workloads
http://www.garlic.com/~lynn/2004b.html#7 Mainframe not a good architecture for interactive workloads
http://www.garlic.com/~lynn/2004c.html#35 Computer-oriented license plates
http://www.garlic.com/~lynn/2004e.html#44 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004k.html#37 Wars against bad things
http://www.garlic.com/~lynn/2004l.html#6 Xah Lee's Unixism
http://www.garlic.com/~lynn/2004l.html#62 Some Laws
http://www.garlic.com/~lynn/2004m.html#27 Shipwrecks
http://www.garlic.com/~lynn/2004n.html#5 RISCs too close to hardware?
http://www.garlic.com/~lynn/2004n.html#34 RISCs too close to hardware?
http://www.garlic.com/~lynn/2004o.html#23 Demo: Things in Hierarchies (w/o RM/SQL)
http://www.garlic.com/~lynn/2004o.html#29 Integer types for 128-bit addressing
http://www.garlic.com/~lynn/2004p.html#0 Relational vs network vs hierarchic databases
http://www.garlic.com/~lynn/2004p.html#15 Amusing acronym
http://www.garlic.com/~lynn/2005.html#22 The Soul of Barb's New Machine (was Re: creat)
http://www.garlic.com/~lynn/2005b.html#5 Relocating application architecture and compiler support
http://www.garlic.com/~lynn/2005b.html#6 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2005c.html#23 Volume Largest Free Space Problem... ???
http://www.garlic.com/~lynn/2005f.html#4 System/360; Hardwired vs. Microcoded
http://www.garlic.com/~lynn/2005f.html#22 System/360; Hardwired vs. Microcoded
http://www.garlic.com/~lynn/2005h.html#22 Today's mainframe--anything to new?
http://www.garlic.com/~lynn/2005j.html#16 Performance and Capacity Planning
http://www.garlic.com/~lynn/2005j.html#17 Performance and Capacity Planning
http://www.garlic.com/~lynn/2005m.html#55 54 Processors?
http://www.garlic.com/~lynn/2005n.html#4 54 Processors?
http://www.garlic.com/~lynn/2005o.html#24 is a computer like an airport?
http://www.garlic.com/~lynn/2005o.html#44 Intel engineer discusses their dual-core design
http://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
http://www.garlic.com/~lynn/2005q.html#7 HASP/ASP JES/JES2/JES3
http://www.garlic.com/~lynn/2005s.html#7 Performance of zOS guest
http://www.garlic.com/~lynn/2005s.html#38 MVCIN instruction
http://www.garlic.com/~lynn/2006d.html#5 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006i.html#0 The Pankian Metaphor
http://www.garlic.com/~lynn/2006j.html#6 The Pankian Metaphor
http://www.garlic.com/~lynn/2006o.html#4 How Many 360/195s and 370/195s were shipped?
http://www.garlic.com/~lynn/2006o.html#18 RAMAC 305(?)
http://www.garlic.com/~lynn/2006o.html#56 Greatest Software Ever Written?
http://www.garlic.com/~lynn/2006p.html#42 old hypervisor email
http://www.garlic.com/~lynn/2006q.html#22 3 value logic. Why is SQL so special?
http://www.garlic.com/~lynn/2006r.html#9 Was FORTRAN buggy?
http://www.garlic.com/~lynn/2006r.html#10 Was FORTRAN buggy?
http://www.garlic.com/~lynn/2006x.html#15 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006y.html#14 Why so little parallelism?
http://www.garlic.com/~lynn/2007.html#11 vm/sp1
http://www.garlic.com/~lynn/2007.html#44 vm/sp1
http://www.garlic.com/~lynn/2007d.html#15 Pennsylvania Railroad ticket fax service
http://www.garlic.com/~lynn/2007d.html#19 Pennsylvania Railroad ticket fax service

time spent/day on a computer

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: time spent/day on a computer
Newsgroups: alt.folklore.computers
Date: Tue, 06 Mar 2007 09:04:03 -0700
Morten Reistad <first@last.name> writes:
Organisations do such suboptimal things all the time, and it gets radically worse when the organisation "sets in", like in Dilbert.

Knowing the problems, and knowing what antidotes to apply are two radically different issues.


re:
http://www.garlic.com/~lynn/2007e.html#47 time spent/day on the computer

the original article had some implicit assumptions about viability of organization in a competitive environment. in general, organizations that don't have any real need to be agile, adaptable and competitive ... will evolve other criteria for success ... again very boyd, recent references
http://www.garlic.com/~lynn/2007e.html#18 Is computer history taught now?
http://www.garlic.com/~lynn/2007e.html#48 time spent/day on the computer

and in this reference, effectively substituting enormous excess quantity for experience, skill, competence, quality
http://www.garlic.com/~lynn/2007e.html#45 time spent/day on a computer

I've posted before about being asked to come in and look at a large commercial, online application ... where they explained that they had ten impossible things that they couldn't do.

we went away and came back with a new implementation in two months that was able to do all of the "ten impossible" things. the problem was that a major reason that the ten things were impossible was that the existing implementation had something like 800 people involved in the care and feeding of the application. In order to address all ten impossible things ... effectively the manual tasks that were being performed by those 800 people were automated (and there was no longer any requirement for those 800 people).

This appeared to result in a lot of angst and hand-wringing among the executives involved ... which went on for nearly a year. They finally got around to telling us that they hadn't actually intended for us to solve the problem ... what they wanted to do was hire us for consulting and be able to tell the parent company's board that they had the "wheelers" consulting for the next five years. They were effectively able to obfuscate the issue that we actually had an implementation and sort of allow the details to evaporate over time.

Part of the issue here (as has been mentioned in other posts in the thread) is that the executives involved actually had importance and compensation proportional to the size of the organization (in terms of number of people, not necessarily gross revenue or profit).

past posts mentioning the "ten impossible" things effort
http://www.garlic.com/~lynn/2000f.html#20 Competitors to SABRE?
http://www.garlic.com/~lynn/2001k.html#17 HP-UX will not be ported to Alpha (no surprise)exit
http://www.garlic.com/~lynn/2002j.html#83 Summary: Robots of Doom
http://www.garlic.com/~lynn/2004b.html#6 Mainframe not a good architecture for interactive workloads
http://www.garlic.com/~lynn/2004q.html#85 The TransRelational Model: Performance Concerns
http://www.garlic.com/~lynn/2006j.html#6 The Pankian Metaphor
http://www.garlic.com/~lynn/2006o.html#18 RAMAC 305(?)

we've seen something similar happen before ... and at some point get around to mentioning (before we start) the line about "careful what you ask for" ... since there are likely to be unanticipated consequences when we go in to figure out what is really going on with something.

there has been this joke in the valley about there actually only being 200 people in the industry ... they just keep moving around. in this case the person referenced in this old email as having left STL and was then at a financial institution
http://www.garlic.com/~lynn/2007.html#email801016

had moved on and was a member of the "board" mentioned in the above narrative (i.e. the "board" that the executive just wanted to report that they had the "wheelers" in long term consulting contract looking at the "problem").

time spent/day on a computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: time spent/day on a computer
Newsgroups: alt.folklore.computers
Date: Tue, 06 Mar 2007 09:33:24 -0700
Anne & Lynn Wheeler <lynn@garlic.com> writes:
we went away and came back with a new implementation in two months that was able to do all of the "ten impossible" things. the problem was that a major reason that the ten things were impossible was that the existing implementation had something like 800 people involved in the care and feeding of the application. In order to address all ten impossible things ... effectively the manual tasks that were being performed by those 800 people were automated (and there was no longer any requirement for those 800 people).

re:
http://www.garlic.com/~lynn/2007e.html#53 time spent/day on a computer

footnote/clarification: that wasn't a design for a new implementation ... it was the actual implementation ... where we demonstrated its ability to do all "ten impossible" things.

time spent/day on a computer

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: time spent/day on a computer
Newsgroups: alt.folklore.computers
Date: Tue, 06 Mar 2007 10:29:07 -0700
krw <krw@att.bizzzz> writes:
The assumption here is that the manager *can* help the productive person as much as the non. IME the best thing a manager can do for the productive employee is get lost (and cancel all "status beatings").

re:
http://www.garlic.com/~lynn/2007e.html#47 time spent/day on a computer

there can be significant problems with 1st/2nd line managers who think they are in charge ... there to tell people what to do ... as opposed to being there to assist/help people get their tasks done.

for the class of managers that think they are in charge ... it is sometimes a toss-up which ones are worse ... the ones that know they don't know what they are doing ... or the ones that think they know what they are doing.

however, i claim that this again goes back to (Boyd's theme) of being a left over legacy from world war 2
http://www.garlic.com/~lynn/2007e.html#45 time spent/day on a computer

with regard to training young officers about how to run a very large organization and the requirement for a very rigid, top/down command and control system. what somewhat got lost over the years was the reason behind the very rigid, top/down command and control system ... that it was necessary to effectively deal with the deployment of very large numbers of people with no experience or skills (at least for the tasks at hand).

later you start to have these young officers, thoroughly indoctrinated in rigid, top/down, command and control systems ... showing up as executives in commercial corporations.

Grilled Turkey

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Grilled Turkey
Newsgroups: alt.folklore.computers,soc.culture.russian
Date: Tue, 06 Mar 2007 12:22:50 -0700
Morten Reistad <first@last.name> writes:
In Scandinavia 1986 was the "year of no net", because there were significant dips in funding. I was there in the meeting to "fix the 'net", where funds were put on the table. Dual 256k lines were ordered, and the (in AUP terms commercial) unix user groups were prepared to fund 15% of this. There was, however, not enough usenet traffic at the time to fill 15% of a 512k pipe, and direct IP access was out because of international AUP problems.

The meeting took place at the premises of the chamber of commerce.

The balance was filled by having national software archives of open source software on university grounds and international connectivity, but independently funded, and therefore accessible by all. This was deemed to be a workable solution to the AUP problems.

These archives were how Linux came to be, 4 years later.

Four years later the state audit board nixed the international AUP obligations, and forced a commercial Internet.

The value of the Internet was well known in influential circles already in 1986, and the AUP was already then seen as a big problem.


there was a different kind of problem (at least in the states) with a significant amount of dark (i.e. unlit, unused) fiber in the early and mid 80s ... and an enormous chicken&egg situation. the operating companies had use-based charging ... but a relatively fixed, significant run-rate. if they reduced the use-based charging by an order of magnitude or two ... it would probably be a decade or more before the bandwidth hungry applications saw significant enough deployment that their use-based bandwidth revenue caught up with their (significant) fixed run-rate.

we were fortunate that we were given the liberty to deploy a T1 and greater high-speed backbone in this period. misc. old email with some mention of the period
http://www.garlic.com/~lynn/lhwemail.html#nsfnet
http://www.garlic.com/~lynn/lhwemail.html#hsdt

and general posts with some mention of nsfnet
http://www.garlic.com/~lynn/subnetwork.html#nsfnet
the Internet
http://www.garlic.com/~lynn/subnetwork.html#internet
BITNET (and/or earn in Europe)
http://www.garlic.com/~lynn/subnetwork.html#bitnet
and/or our high-speed data transport project
http://www.garlic.com/~lynn/subnetwork.html#hsdt
and the internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet

the solution appeared to be to take advantage of the nsfnet internetworking effort and donate significant bandwidth to the project (well in excess of what the government was paying for). this could create a significant incubator for disruptive, bandwidth hungry applications and usage in a constrained, controlled, limited environment ... avoiding impacting the general commercial use-based charging revenue. the operators could possibly even take a tax write-off for the donated bandwidth and resources (that possibly otherwise were going unused).

there has been some past discussions implying that the AUPs were government motivated. However, I believe that there could also be a case made that the AUPs were heavily motivated by commercial interests wanting to maintain the control limitations on their testbed incubator environment ... and then be able to stage the deployment of the disruptive bandwidth hungry applications in such a way that they maintained their use-based charging revenue in-sync with their run-rate costs.

a few past posts with AUP references and/or even copies of some AUPs
http://www.garlic.com/~lynn/2000c.html#26 The first "internet" companies?
http://www.garlic.com/~lynn/2000d.html#59 Is Al Gore The Father of the Internet?
http://www.garlic.com/~lynn/2000e.html#29 Vint Cerf and Robert Kahn and their political opinions
http://www.garlic.com/~lynn/2002h.html#80 Al Gore and the Internet
http://www.garlic.com/~lynn/2006j.html#45 Arpa address

one of the characteristics of a lot of BITNET and EARN was that not only was it using a lot of the corporate commercial protocol ... but the company was also directly funding a lot of the lines/bandwidth. In this case, it might be viewed as an incubator for the new, next generation disruptive applications evolving on the corporation's computers.

the caveat here was that the core internal network technology had (also) originated at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

and had a non-traditional (for the time) layer separation between the core networking function and all the transport/networking function ... which made it relatively trivial to do things like gateways (something that the Arpanet/internet didn't get until the great 1jan83 switch over to internetworking protocol).

there was also a much more traditional "networking" protocol available on the corporation's "batch" operating systems. This was much more of a traditional integrated environment implementation (common in the 60s and 70s), which somewhat evolved out of the TUCC implementation done on HASP ... and then migrated to JES2 as NJI. Various past posts about HASP, JES, NJI, etc
http://www.garlic.com/~lynn/submain.html#hasp

the core internal network developed "NJI" gateway drivers to interoperate with these platforms. The "NJI" implementation suffered some significant problems

1) max network nodes less than 256 ... and the internal network had quickly exceeded 256 nodes ... and NJI wasn't enhanced to handle 999 nodes until well after the internal network had exceeded a thousand nodes in the summer of '83.
http://www.garlic.com/~lynn/internet.htm#22
and the internal network exceeded 2000 nodes well before the NJI support got around to being enhanced to support 1999 nodes.

2) slightly different releases of NJI wouldn't interoperate, frequently resulting in networking software failure, which in turn would bring down the batch (MVS) operating system. This problem became so severe that (real) NJI nodes were restricted to boundary nodes behind a node based on the corporation's workhorse networking support. A large library of NJI gateway software then grew up for the corporation's core networking support that would basically convert "real" NJI format to a canonical form ... and then, when talking on a direct link to a "real" NJI node, be responsible for converting it to the specific format required by that node's particular release and version.

3) possibly to simplify customer perception of the corporation's networking support ... BITNET and EARN were pretty much restricted to always being "NJI drivers" ... even between networking nodes that weren't really JES/NJI ... rather than deploying the much more capable and efficient native drivers.
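the canonical-form gateway approach in item 2 can be sketched roughly as follows; the record layouts, field names, and version labels here are hypothetical illustrations, not the actual NJI formats:

```python
# Sketch of the canonical-form gateway pattern: each "real" NJI node may
# speak a slightly different record format, so the core network converts
# everything to one canonical form in transit and re-specializes only on
# the direct link to the destination node. All formats are illustrative.

def to_canonical(record, version):
    """Normalize a version-specific record into the canonical form."""
    canon = {"origin": record["from"], "dest": record["to"],
             "payload": record["data"]}
    if version == "v1":          # e.g. an older release's narrow field
        canon["hops"] = record.get("hop", 0)
    else:                        # later releases used a different field
        canon["hops"] = record.get("hopcount", 0)
    return canon

def from_canonical(canon, peer_version):
    """Specialize the canonical form for the peer's particular release."""
    rec = {"from": canon["origin"], "to": canon["dest"],
           "data": canon["payload"]}
    if peer_version == "v1":
        rec["hop"] = canon["hops"]
    else:
        rec["hopcount"] = canon["hops"]
    return rec

# A "v1" record transits the core network and is delivered to a "v2" peer:
msg = {"from": "SJRLVM1", "to": "STLVM1", "data": b"job output", "hop": 3}
delivered = from_canonical(to_canonical(msg, "v1"), "v2")
```

the design point is that each gateway only needs conversions to/from the canonical form, rather than one converter per pair of releases.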

Health Care

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Health Care
Newsgroups: alt.folklore.computers
Date: Tue, 06 Mar 2007 21:21:05 -0700
Roger Blake <rogblake10@iname10.com> writes:
And this has precisely what to do with computer folklore??

re:
http://www.garlic.com/~lynn/2007e.html#30 Health Care

if you've been following alt.folklore.computers ... then you will have noticed there is frequent topic drift, including a very long running topic drift from a year ago where the subject of the comptroller came up several times. the post you referenced was an update mentioning that the comptroller is effectively still talking about the same thing he was talking about a year ago ... as the URLs in the rest of the post you referenced show
http://www.garlic.com/~lynn/2007e.html#30 Health Care

the URLs in the above referenced post ..... referencing posts in wide ranging thread(s) from a year ago
http://www.garlic.com/~lynn/2006f.html#41 The Pankian Metaphor
http://www.garlic.com/~lynn/2006f.html#44 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#9 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#14 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#27 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#2 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#3 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#4 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#17 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#19 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#33 The Pankian Metaphor
http://www.garlic.com/~lynn/2006o.html#61 Health Care
http://www.garlic.com/~lynn/2006p.html#17 Health Care
http://www.garlic.com/~lynn/2006t.html#26 Universal constants

and for additional background ... other posts in the thread from a year ago ... not included in the (above) referenced collection (mentioning the comptroller):
http://www.garlic.com/~lynn/2006e.html#35 The Pankian Metaphor
http://www.garlic.com/~lynn/2006e.html#36 The Pankian Metaphor
http://www.garlic.com/~lynn/2006e.html#37 The Pankian Metaphor
http://www.garlic.com/~lynn/2006e.html#38 The Pankian Metaphor
http://www.garlic.com/~lynn/2006e.html#39 The Pankian Metaphor
http://www.garlic.com/~lynn/2006f.html#7 The Pankian Metaphor
http://www.garlic.com/~lynn/2006f.html#9 The Pankian Metaphor
http://www.garlic.com/~lynn/2006f.html#10 The Pankian Metaphor
http://www.garlic.com/~lynn/2006f.html#14 The Pankian Metaphor
http://www.garlic.com/~lynn/2006f.html#42 The Pankian Metaphor
http://www.garlic.com/~lynn/2006f.html#43 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#1 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#2 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#3 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#4 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#5 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#6 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#7 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#8 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#10 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#11 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#12 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#15 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#16 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#17 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#19 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#20 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#24 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#25 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#26 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#28 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#29 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#32 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#34 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#35 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#36 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#41 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#46 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#48 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#49 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#50 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#51 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#52 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#53 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#54 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#55 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#56 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#57 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#59 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#60 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#61 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#62 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#0 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#1 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#5 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#6 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#7 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#8 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#11 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#16 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#18 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#22 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#23 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#24 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#25 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#30 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#34 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#36 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#39 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#45 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#56 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#59 The Pankian Metaphor
http://www.garlic.com/~lynn/2006i.html#0 The Pankian Metaphor
http://www.garlic.com/~lynn/2006i.html#2 The Pankian Metaphor
http://www.garlic.com/~lynn/2006i.html#6 The Pankian Metaphor
http://www.garlic.com/~lynn/2006j.html#6 The Pankian Metaphor
http://www.garlic.com/~lynn/2006j.html#11 The Pankian Metaphor
http://www.garlic.com/~lynn/2006j.html#36 The Pankian Metaphor
http://www.garlic.com/~lynn/2006j.html#38 The Pankian Metaphor
http://www.garlic.com/~lynn/2006k.html#13 The Pankian Metaphor
http://www.garlic.com/~lynn/2006k.html#14 The Pankian Metaphor
http://www.garlic.com/~lynn/2006m.html#49 The Pankian Metaphor (redux)

Securing financial transactions a high priority for 2007

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Securing financial transactions a high priority for 2007
Newsgroups: alt.folklore.computers
Date: Wed, 07 Mar 2007 10:27:31 -0700
Anne & Lynn Wheeler <lynn@garlic.com> writes:
Identity Fraud: ID Theft Victims, Losses Take Welcome Nosedive
http://www.banktechnews.com/article.html?id=20070226T5LTLE8K

there has been some effort by FTC and others to differentiate types of ID Theft ... at least into 1) Identity Fraud (where identity information is used to do various things like opening new accounts) and 2) Account Fraud (fraudulent transactions against existing accounts).

The above article refers to Identity Theft losses dropping to $49.3billion in 2006 from $55.7billion in 2005.

However, it goes on to say that most of that improvement comes from better processing of requests to open new accounts.

while:
... "existing accounts have the highest average fraud" ... $7,560 and "the average consumer cost from fraud rose sharply, from $431 to $535".


re:
http://www.garlic.com/~lynn/2007e.html#29 Securing financial transactions a high priority for 2007

data aggregation can at the same time show both a reduction and a sharp rise

over the past couple weeks:
ID fraud down, except credit cards
http://www.pcadvisor.co.uk/news/index.cfm?newsid=8280
Survey: ID fraud in U.S. falls by $6.4B
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9010082&intsrc=hm_list
Survey Indicates ID Theft May Be Diminishing
http://yro.slashdot.org/yro/07/02/01/2127224.shtml
Study: ID fraud in decline
http://www.securityfocus.com/brief/423
US ID theft losses decline
http://www.astalavista.com/?section=news&cmd=details&newsid=3376
US ID theft losses decline
http://www.theregister.com/2007/02/05/us_id_fraud_survey/


and today:
ID Theft Is Exploding In The U.S.
http://www.informationweek.com/news/showArticle.jhtml?articleID=198701579
ID fraud soaring across the pond
http://www.silicon.com/financialservices/0,3800010322,39166236,00.htm


FBA rant

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FBA rant
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 07 Mar 2007 17:29:36 -0700
Shmuel Metz , Seymour J. wrote:
Not quite; the 370/168 had two type of block multiplexor channel, single byte and 2 byte. The 2 byte channel ran at 3 MB/s. The 3880 only supported the single byte channel, which was less expensive.

re:
http://www.garlic.com/~lynn/2007e.html#40 FBA rant

about the only thing that i remember that would use the 2byte/3mbyte/sec channel was the 2305-1 fixed head disk.
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_2305.html

2305-2 was 1.5mbyte/sec, 11.2mbyte capacity, 5 millisecond avg. rotational delay

2305-1 was 3mbyte/sec and 5.4mbyte capacity ... the heads were paired, so the avg. rotational delay was cut in half from 5 milliseconds to 2.5 milliseconds, the data transfer rate was doubled from 1.5mbytes/sec to 3mbytes/sec, and the data capacity was cut (roughly) in half from 11.2mbyte to 5.4mbyte.
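the model-1 tradeoff is straightforward arithmetic: pairing the heads doubles the transfer rate and halves the average rotational delay, at the cost of (roughly) half the capacity. A quick sanity check of the quoted figures:

```python
# 2305 model-2 figures from the text; model-1 pairs the heads.
m2_rate_mb_s = 1.5
m2_cap_mb = 11.2
m2_avg_delay_ms = 5.0

m1_rate_mb_s = m2_rate_mb_s * 2        # paired heads transfer in parallel
m1_avg_delay_ms = m2_avg_delay_ms / 2  # at most half a revolution to a head of the pair
m1_cap_mb = m2_cap_mb / 2              # nominally 5.6; the text quotes 5.4
```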

I never actually knew of a customer with either a 2305-1 and/or a 3mbyte/sec (2byte) channel.

Nominally, bus&tag channel cable was rated at 200ft max. distance ... but even a 2305-2 (at 1.5mbyte/sec) could present problems. We had a 370/158 that had troubles with a 2835 (2305 controller) down around 50ft, which was fixed by reconfiguring with shorter cable.

Data streaming (reducing the requirement for end-to-end synch on every byte) allowed both a higher data rate as well as doubling the maximum channel cable distance.

Slightly related from long ago and far away ... effectively "native mode" for electronic paging disk is very close to FBA kind of operation (w/o the rotational delay)

Date: 08/05/82 16:17:32
From: wheeler

re: intel drums; Native mode operation has the same performance as 2305 simulation ... no faster, no slower.

However, in native mode all 12meg worth of drum is used as data blocks. In 2305 simulation mode, only the amount of formatted space is used for data blocks. VM uses a format which only utilizes approx. 9.5meg worth of data blocks (the rest is inter-record gaps and dummy block spacers to optimize slot sorting). The result is that native mode represents about a 30% increase in drum space (an 1655 box with 4 simulated 2305s becomes the equivalent of 5.3 2305s in native mode).

They have been saying they would have a 3meg. data streaming option available by August for 1655. That would mean twice the data transfer rate compared to either a real 2305 or an 1655 simulated 2305. I haven't confirmed it, but it was my understanding that 3meg. data streaming would be available for either 2305 or native mode.

SJRLVM1, SJEVM5, and at least one machine in STL are running 1655s (48 meg./4 drum) in 2305 mode. They are all 1.5meg. versions. In addition, SJRLVM1 has a data streaming STC 2-drum electronic device (3 megabytes) ... & the STC drums don't have a native mode option. We also have a combination of real 2305s and 3380s and are in the process of running various performance comparisons.

Note: at 1.5meg. mode, an electronic drum has the same maximum thru-put capacity as a 2305 drum ... under VM at maximum load, there are long CCW chains transferring multiple page requests in one SIO operation. The data transfer is the same, so the electronic drums don't buy anything there. It is in the area of average access time that electronic drums improve performance. A 2305 drum has a 5 millisecond avg. rotational delay (access delay) per SIO. An electronic drum has an avg. access delay of 300-400 microseconds (approx. 1/20th of a 2305). Time to transfer one page is approx. 2.7 milliseconds for either device at 1.5meg. transfer. For long CCW chains with one rotational delay per 20-30 pages transferred, performance is about the same:


chain     2305       stc-or-1655@1.5     stc-or-1655@3
 size     elapsed        elapsed            elapsed

 1 page    7.7mills      3.0mills           1.6mills
 2 page   10.4mills      5.7mills           2.9mills
 5 page   18.5mills     13.9mills           7.0mills
10 page   32  mills     27.4mills          13.9mills
20 page   58  mills     54.4mills          27.4mills

On a moderately loaded, page bound system, electronic drums can significantly improve the paging performance.

... snip ... top of post, old email index

the above comment is about a moderately loaded, page bound system ... i.e. w/o long page CCW chains, so the real rotational delay was a significant part of the service time per page transfer.
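the table in the quoted email is consistent with a simple service-time model: one average access delay per SIO plus a fixed per-page transfer time (2.7 ms/page at 1.5 MB/s, half that at 3 MB/s; the electronic drum's 300-400 microsecond access delay is taken here as 0.4 ms). A sketch that reproduces the figures to within the email's rounding:

```python
# Service-time model behind the quoted table: one access delay per SIO
# plus a per-page transfer time. All figures come from the email above;
# the 300-400 microsecond electronic-drum access delay is approximated
# as 0.4 ms.
def elapsed_ms(pages, access_ms, xfer_ms_per_page):
    return access_ms + pages * xfer_ms_per_page

XFER_15 = 2.7            # ms/page at 1.5 MB/s
XFER_30 = XFER_15 / 2    # ms/page at 3 MB/s

for n in (1, 2, 5, 10, 20):
    row = (elapsed_ms(n, 5.0, XFER_15),   # real 2305
           elapsed_ms(n, 0.4, XFER_15),   # electronic drum @ 1.5 MB/s
           elapsed_ms(n, 0.4, XFER_30))   # electronic drum @ 3 MB/s
    print(n, *(round(v, 1) for v in row))
```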

As part of my resource manager shipped in the mid-70s ... I had also shipped page "migration" support (pages that became inactive on "higher speed" devices would be migrated to "lower speed" devices) ... improving effectiveness of higher speed devices for paging operation.
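the migration policy described above can be sketched as a simple loop over the fast device's pages; the idle threshold and structures are illustrative, not the actual VM implementation:

```python
# Minimal sketch of the page-migration policy described above: pages on
# the high-speed paging device that haven't been referenced recently are
# moved to a slower device, keeping the fast (and scarce) device for the
# actively-used pages. Threshold and device names are hypothetical.

def migrate_inactive(fast_dev, slow_dev, now, max_idle):
    """Move pages idle longer than max_idle seconds from fast to slow."""
    migrated = []
    for page_id, last_ref in list(fast_dev.items()):
        if now - last_ref > max_idle:
            slow_dev[page_id] = fast_dev.pop(page_id)
            migrated.append(page_id)
    return migrated

drum = {"pg1": 100.0, "pg2": 190.0, "pg3": 120.0}   # page -> last reference time
disk = {}
moved = migrate_inactive(drum, disk, now=200.0, max_idle=60.0)
# pg1 (idle 100s) and pg3 (idle 80s) migrate; pg2 (idle 10s) stays
```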

FBA rant

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FBA rant
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 07 Mar 2007 18:01:22 -0700
Anne & Lynn Wheeler wrote:
As part of my resource manager shipped in the mid-70s ... I had also shipped page "migration" support (pages that became inactive on "higher speed" devices would be migrated to "lower speed" devices) .... improving effectiveness of higher speed devices for paging operation.

re:
http://www.garlic.com/~lynn/2007e.html#59 FBA rant

for further drift ... with regard to resource manager and various strategies supporting paging devices ...

various "resource manager" posts here ... I had originally done a dynamic adaptive resource manager for cp67 as an undergraduate in the 60s. it was frequently referred to as the "fair share" scheduler because the default resource policy was "fair share".
http://www.garlic.com/~lynn/subtopic.html#fairshare
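as a rough illustration of a "fair share" default policy (a sketch under assumed structures, not the actual cp67/vm370 scheduler): the next user dispatched is the one that has consumed the least recent CPU relative to its entitled share, so users below their fair share are favored:

```python
# Illustrative fair-share selection (not the cp67/vm370 algorithm): each
# user has a share of the processor; the next user dispatched is the one
# with the lowest ratio of recent consumption to entitled share.

def next_to_dispatch(users):
    """users maps name -> (recent_cpu_used, share). Lowest used/share wins."""
    return min(users, key=lambda u: users[u][0] / users[u][1])

users = {
    "alice": (30.0, 0.25),   # 30 units used against a 25% share -> ratio 120
    "bob":   (10.0, 0.25),   # ratio 40
    "carol": (40.0, 0.50),   # ratio 80
}
# bob has consumed the least relative to his share, so he runs next
```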

Most of the resource manager implementation had been dropped in the morph from cp67 to vm370, but i then got an opportunity to reintroduce it as an independent resource manager product. The resource manager also got selected to be the guinea pig for priced kernel software (kernel software had still been free up until then) ... misc. posts mentioning evolution of unbundling and charging for software
http://www.garlic.com/~lynn/submain.html#unbundle

not too long after I shipped the resource manager ... is when I really started noticing a significant shift in both processor speeds and real storage sizes vis-a-vis conventional disk technology. One of the issues was that real storage sizes were starting to rival or exceed the sizes of conventional "high-speed" paging devices. It was then that I first implemented dynamic adaptive switching between "duplicate" and "no-duplicate" strategies ... discussed in this post in conjunction with IRONWOOD (3880-11 disk caching controller)
http://www.garlic.com/~lynn/2007e.html#42 FBA rant

but I originally implemented it for 2305 fixed-head paging drums. Duplicate/no-duplicate support didn't ship to customers, but it saw some pretty wide deployment internally.
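a minimal sketch of the duplicate vs no-duplicate tradeoff (illustrative only, not the shipped implementation): under "duplicate," a page brought onto the fast device keeps its copy on the backing device, so demoting it is free; under "no-duplicate," the backing slot is reclaimed, gaining effective capacity at the cost of a write-back on demotion:

```python
# Sketch of the "duplicate" vs "no-duplicate" paging strategies:
# duplicate:    fast copy + backing copy; dropping the fast copy is free
# no-duplicate: only the fast copy; dropping it needs a write-back, but
#               the backing-store slot is freed for other pages

def promote(page, fast, backing, duplicate):
    """Bring a page onto the fast paging device."""
    fast.add(page)
    if not duplicate:
        backing.discard(page)     # no-duplicate: reclaim the backing slot

def demote(page, fast, backing):
    """Remove a page from the fast device; return the write I/Os needed."""
    fast.discard(page)
    if page in backing:
        return 0                  # duplicate strategy: copy already there
    backing.add(page)
    return 1                      # no-duplicate: must write the page back

fast, backing = set(), {"p1"}
promote("p1", fast, backing, duplicate=True)
writes_dup = demote("p1", fast, backing)      # 0: backing copy retained
promote("p1", fast, backing, duplicate=False)
writes_nodup = demote("p1", fast, backing)    # 1: write-back required
```

dynamically switching between the two then comes down to weighing fast-device capacity (favoring no-duplicate) against demotion I/O cost (favoring duplicate).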

Securing financial transactions a high priority for 2007

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Securing financial transactions a high priority for 2007
Newsgroups: alt.folklore.computers
Date: Thu, 08 Mar 2007 07:40:07 -0700
jmfbahciv writes:
Is it time to distinguish the kinds of lack of ID validations? All this news frenzy doesn't address the missing element that seems to have disappeared in financial transaction processes. It appears that the single point failures are due to banks not having to verify extractions of accounts. This came as a side effect of eliminating a paper trail. The paper trail involved humans at each node of the process. Their decisions and actions cannot be predicted by the crooks so it is more labor intensive to try to steal that way. Now they can make predictions and actions no longer have to be on site.

Making bit flows more efficient is not necessary the correct action to increase effectiveness.


the paper trail has more to do with who is at fault ... than how it is done.

as mentioned in previous posts
http://www.garlic.com/~lynn/2007e.html#29 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007e.html#58 Securing financial transactions a high priority for 2007

there has been some work on refining the taxonomy ... at least separating fraudulently opening a new account from fraudulently performing transactions against an existing account ... then looking at threat/attack modes for the different operations.

a lot of new account opening fraud comes from not performing sufficient validation.

a lot of fraud against existing accounts is enabled by advances in technology, where attackers can counterfeit existing mechanisms that were supposed to validate/authenticate correct transactions.

more human actions are only useful if their failure modes are independent. when downstream human actions are dependent on previous human actions ... they aren't independent and become less useful as a countermeasure.

one of the most critical points is the transaction origin. in the days of the small town, when everybody knew everybody else ... the point-of-sale was less vulnerable ... because the person accepting the transaction would know a lot about the person generating the transaction. as the environment became more complex ... there was a lower & lower probability that the person accepting the original transaction had any familiarity with the person generating the transaction. It isn't the number of people involved and/or the paper trail ... it is that familiarity at the point-of-sale is gone. Non-face-to-face transactions then really exacerbate the problem ... like MOTO transactions (mail order, telephone order) and more recently the Internet.

As a result, there is more and more dependency on mechanisms other than personal knowledge for authenticating transactions. The problem here is that most of the technologies in use are decades old ... and the attackers have significantly more advanced and current technologies for counterfeiting authentication (even in paper trail scenarios).

As I've frequently mentioned before, in the mid-90s the x9a10 financial standard working group was given the requirement to preserve the integrity of the financial infrastructure for all retail payments. the result was the x9.59 financial standard targeted at all retail payments
http://www.garlic.com/~lynn/x959.html#x959

There was a look at attack/threat models across the whole end-to-end transaction process. This included looking at lost/stolen scenarios, skimming/harvesting attacks, replay attacks, man-in-the-middle attacks, etc ... as well as various counterfeiting technologies.
http://www.garlic.com/~lynn/subintegrity.html#harvest
http://www.garlic.com/~lynn/subintegrity.html#mitm
http://www.garlic.com/~lynn/subintegrity.html#secrets

The x9.59 standard only specifies what is carried as part of the x9.59 transaction ... it doesn't specify what is required to originate that transaction ... although certain things are implicit. With regard to attacks on the transaction origin environment, there are some statements about security/integrity proportional to risk (i.e. transactions for millions of dollars are assumed to require a higher level of integrity than transactions involving a few cents).

As direct personal knowledge has become less prevalent (among people processing the transaction), there is more and more of a need to rely on other forms of authentication and better understand basic principles of authentication. One such taxonomy is 3-factor authentication
http://www.garlic.com/~lynn/subintegrity.html#3factor

• something you have (chips, cards)
• something you know (pins, passwords, mother's maiden name, SSN)
• something you are (biometrics, fingerprints)

There has also been an assumption that multi-factor authentication is more secure, provided that the different factors have independent attacks. For instance, PINs are assumed to be a countermeasure to lost/stolen cards. PINs are only independent authentication if the owner hasn't written the PIN on the card. As people have had to deal with an increasing amount of something you know authentication and a multitude of PINs and passwords, many find it nearly impossible to remember which PIN goes with which card. As a result, one estimate is that 30 percent of cards have PINs written on them.

As technology has advanced, skimming and/or harvesting has become more and more of a threat. The representation of what makes a card unique and useful for something you have authentication has been the magstripe. Skimming/harvesting has been able to collect the information necessary to create a counterfeit magstripe ... frequently this is done at the point a valid transaction is performed. Furthermore, since the magstripe read and any PIN-entry are performed at the same time, both can be skimmed. While PINs may be assumed to be a countermeasure to lost/stolen cards (as independent multi-factor authentication, modulo writing the PIN on the card), skimming can collect both the magstripe information and the PIN entry at one time ... representing a common vulnerability and negating the assumption about multi-factor authentication having independent attacks.

Another scenario is the yes card scenario ... where the in-use chip technology was supposed to make it impossible to use a lost/stolen card and also be nearly impossible to counterfeit (compared to a magstripe) ... aka the "chip/pin" scenario is a countermeasure to lost/stolen cards (again modulo people writing the pin on the card).
http://www.garlic.com/~lynn/subintegrity.html#yescard

However, the chip presented nearly the same information (as found in a magstripe). The attackers could use nearly the same skimming technology (used in skimming magstripe information) to capture the (static) authentication information. They could then take a chip that they programmed themselves and load the skimmed information. The attackers could then present the counterfeit chipcard ... and the terminal would validate the (skimmed) authentication information. The terminal would then ask the chipcard if the correct PIN was entered. A counterfeit yes card would always answer YES (regardless of what was entered). In the yes card scenario, the attacker doesn't even have to skim both the magstripe information and the PIN ... the attacker only has to skim the equivalent of the magstripe information (somewhat resulting in somebody's quote that they managed to spend billions of dollars to prove that chips are less secure than magstripe).
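The yes card failure mode can be sketched in a few lines of code. This is a purely illustrative model (the class and function names are my own, not actual EMV protocol messages): the terminal checks static authentication data that can be skimmed verbatim, then delegates the PIN check to the card itself ... so a counterfeit card that always answers "yes" defeats the PIN entirely.

```python
# Hypothetical sketch of the "yes card" flaw. All names are illustrative.
SKIMMED_STATIC_DATA = "track2-equivalent-static-auth-data"

class GenuineCard:
    def __init__(self, static_data, pin):
        self.static_data = static_data
        self._pin = pin

    def get_auth_data(self):
        return self.static_data

    def verify_pin(self, entered_pin):
        # genuine card compares against the real PIN
        return entered_pin == self._pin

class YesCard:
    """Counterfeit card programmed with skimmed static data."""
    def __init__(self, skimmed_data):
        self.static_data = skimmed_data

    def get_auth_data(self):
        return self.static_data

    def verify_pin(self, entered_pin):
        return True  # always claims the PIN was correct

def terminal_accepts(card, entered_pin, issuer_known_data):
    # the static authentication data checks out (it was skimmed verbatim) ...
    if card.get_auth_data() != issuer_known_data:
        return False
    # ... and the terminal trusts the card's own answer about the PIN
    return card.verify_pin(entered_pin)

# the attacker never needed to learn the PIN at all
assert terminal_accepts(YesCard(SKIMMED_STATIC_DATA), "0000", SKIMMED_STATIC_DATA)
```

The point of the sketch: the PIN check and the static data travel through a common choke point (the card/terminal interface), so the two "independent" factors fall to a single skimming attack.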

the issue was that the chipcard was viewed somewhat as a countermeasure to a purely lost/stolen card ... and didn't do anything to address the growing skimming threats (and in some respects the chips actually were more vulnerable to skimming than magstripe).

Securing financial transactions a high priority for 2007

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Securing financial transactions a high priority for 2007
Newsgroups: alt.folklore.computers
Date: Thu, 08 Mar 2007 07:55:16 -0700
Anne & Lynn Wheeler <lynn@garlic.com> writes:
data aggregation which at the same time can show both reduction and sharp rise

over the past couple weeks:

ID fraud down, except credit cards
http://www.pcadvisor.co.uk/news/index.cfm?newsid=8280
Survey: ID fraud in U.S. falls by $6.4B
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9010082&intsrc=hm_list
Survey Indicates ID Theft May Be Diminishing
http://yro.slashdot.org/yro/07/02/01/2127224.shtml
Study: ID fraud in decline
http://www.securityfocus.com/brief/423
US ID theft losses decline
http://www.astalavista.com/?section=news&cmd=details&newsid=3376
US ID theft losses decline
http://www.theregister.com/2007/02/05/us_id_fraud_survey/

and today:

ID Theft Is Exploding In The U.S.
http://www.informationweek.com/news/showArticle.jhtml?articleID=198701579
ID fraud soaring across the pond
http://www.silicon.com/financialservices/0,3800010322,39166236,00.htm


and latest:
Identity Theft Jumps By 50 Percent
http://www.securitypronews.com/insiderreports/insider/spn-49-20070308IdentityTheftJumpsBy50Percent.html


apparently the same data can result in nearly exact opposite headlines.
http://www.garlic.com/~lynn/2007e.html#29 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007e.html#58 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007e.html#61 Securing financial transactions a high priority for 2007

FBA rant

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FBA rant
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 08 Mar 2007 08:49:27 -0700
Andreas F. Geissbuehler wrote:
The venerable IBM 2321 A.K.A "the strip picker", the one responsible for the mbb in "mbbcchhr" -- did CP/67 or VM ever support the 2321 ?

re:
http://www.garlic.com/~lynn/2007e.html#51 FBA rant

addenda and more topic drift.

the reason given for periodically roping me into playing disk engineer was that so many senior engineers had departed (lots in the departure to memorex and then various others including later departures to storage tek). they needed somebody that understood i/o architecture at a high level ... which had been the responsibility of the senior engineers that had left. i got pulled into it because of having to get into the guts of i/o architecture as part of being able to make the virtualization work correctly ... and then i got pulled into other areas. lots of past posts mentioning getting to play in disk engineering and product test labs
http://www.garlic.com/~lynn/subtopic.html#disk

recent post in this thread mentioning getting to play disk engineer
http://www.garlic.com/~lynn/2007e.html#40 FBA rant

I did later run into one of the disk engineers that had been part of the group that had departed for memorex and who claimed to have done a lot of the work on the 2321. He and some others at memorex had then left and founded their own company that did a hardware database engine. They had picked up a CTO out of Berkeley. When the CTO left their company for Teradata (and later founded his own rdbms company), they came around the san jose plant site looking to backfill the CTO position (primarily from people working on system/r). Similar to the stories told about Shugart recruiting around the san jose plant site (after he had left) ... but on a much smaller scale (and they did manage to catch somebody to backfill the CTO position). lots of past posts mentioning system/r
http://www.garlic.com/~lynn/submain.html#systemr

and other discussions around that era from the '95 SQL reunion:
http://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-Teradata.html

and recent post in original thread (from which this thread spawned)
http://www.garlic.com/~lynn/2007e.html#41 IBM S/360 series operating systems history

misc. past posts mentioning shugart, floppy disks, departing for memorex, etc
http://www.garlic.com/~lynn/2000.html#9 Computer of the century
http://www.garlic.com/~lynn/2002.html#17 index searching
http://www.garlic.com/~lynn/2002l.html#50 IBM 2311 disk drive actuator and head assembly
http://www.garlic.com/~lynn/2004.html#5 The BASIC Variations
http://www.garlic.com/~lynn/2004j.html#36 A quote from Crypto-Gram
http://www.garlic.com/~lynn/2004l.html#14 Xah Lee's Unixism
http://www.garlic.com/~lynn/2004p.html#0 Relational vs network vs hierarchic databases
http://www.garlic.com/~lynn/2004q.html#64 Will multicore CPUs have identical cores?
http://www.garlic.com/~lynn/2005b.html#1 Foreign key in Oracle Sql
http://www.garlic.com/~lynn/2005c.html#9 The mid-seventies SHARE survey
http://www.garlic.com/~lynn/2005h.html#37 Software for IBM 360/30
http://www.garlic.com/~lynn/2006n.html#30 CRAM, DataCell, and 3850
http://www.garlic.com/~lynn/2006v.html#17 Ranking of non-IBM mainframe builders?
http://www.garlic.com/~lynn/2006x.html#27 The Future of CPUs: What's After Multi-Core?

FBA rant

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FBA rant
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 08 Mar 2007 10:43:09 -0700
IBM Mainframe Discussion List wrote:
The 2321 was only responsible for the bb part of the mbbcchhr. The m was, and still is, the extent number in the DEB.

Bill Fairchild Plainfield, IL


i.e. seek CCW has six byte length ... bbcchh

CKD DASD command codes ... from my q&d conversion of gcard ios3270 to html (seek has count/length of "6"):
http://www.garlic.com/~lynn/gcard.html#26.1
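The six-byte seek argument can be illustrated with a quick sketch. This assumes the conventional CKD layout of two bytes each for bin (bb), cylinder (cc), and head (hh); the m byte (extent number in the DEB) and the r byte (record) are not part of the seek CCW data.

```python
import struct

def seek_arg(bb, cc, hh):
    # pack bin/cylinder/head as three big-endian unsigned halfwords -> 6 bytes
    # (bb is zero for ordinary disks; nonzero only for devices like the 2321)
    return struct.pack(">HHH", bb, cc, hh)

arg = seek_arg(bb=0, cc=40, hh=3)
assert len(arg) == 6                          # matches the CCW count of 6
assert arg == b"\x00\x00\x00\x28\x00\x03"
```

A sketch only; the field widths follow the common bbcchh description, not any particular control unit manual.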

pictures of 2321 here:
http://members.optushome.com.au/intaretro/2321DCD.htm
http://www.columbia.edu/cu/computinghistory/datacell.html

in the above you can see a "bin" closeup ... "BB" selection could rotate the whole cylinder to place the correct "bin" under the read/write heads ... the rotation somewhat reminded me of a washing machine.

past posts mentioning 2321 in this thread:
http://www.garlic.com/~lynn/2007e.html#51 FBA rant
http://www.garlic.com/~lynn/2007e.html#63 FBA rant

there is some slight physical packaging resemblance between 2321 and 2301 drum ... both appearing to be cylinders. 2301 drum can be seen here in mid-background to the right of the tape-drives.
http://www.columbia.edu/cu/computinghistory/2311.html

another system picture with 2301 in mid-background
http://web.archive.org/web/20040428030324/www.cs.ncl.ac.uk/events/anniversaries/40th/images/ibm360_672/29.jpg

close-up here
http://www.columbia.edu/cu/computinghistory/drum.html

Securing financial transactions a high priority for 2007

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Securing financial transactions a high priority for 2007
Newsgroups: alt.folklore.computers
Date: Thu, 08 Mar 2007 12:57:11 -0700
jmfbahciv writes:
Is it time to distinguish the kinds of lack of ID validations? All this news frenzy doesn't address the missing element that seems to have disappeared in financial transaction processes. It appears that the single point failures are due to banks not having to verify extractions of accounts. This came as a side effect of eliminating a paper trail. The paper trail involved humans at each node of the process. Their decisions and actions cannot be predicted by the crooks so it is more labor intensive to try to steal that way. Now they can make predictions and actions no longer have to be on site.

Making bit flows more efficient is not necessary the correct action to increase effectiveness.


there are a couple related issues to the automation of processing paper-based financial transactions as well as migration to electronic.

one is a corollary to the expansion of the telephone system ... and an observation early in the last century that if there was to be any significant expansion, they would need half the population of the country as telephone operators ... it wasn't until they changed the paradigm and allowed each individual to be their own "operator" (with the assistance of the appropriate electronics) that the expansion became possible. an analogous observation was made regarding the growth in use of paper checks ... requiring a significantly larger work force to handle the growth in the amount of paper.

another issue as paper check use grew ... was that checks became more and more prevalent in non-local, non-face-to-face transactions. there were on the order of 30,000 financial institutions that were accepting paper checks and potentially having to route the paper to any one of the 30,000 other financial institutions for settlement. At one point the federal reserve imposed a penalty on the accepting institution if it took more than a week to get the paper to the paying institution for settlement.

w/o a lot of automation, there was no way any of this could have happened (analogous to the scenario for proliferation of telephones, it would not have been possible w/o automation of the call connection process).

recent posts:
http://www.garlic.com/~lynn/2007e.html#28 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007e.html#29 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007e.html#58 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007e.html#61 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007e.html#62 Securing financial transactions a high priority for 2007

there is another way of judging the magnitude of the issue. several years ago somebody gave me a copy of a several hundred page document that sliced and diced the finances and costs related to financial institutions. the numbers were grouped by line item with an avg. for the top 20 regional financial institutions against the avg. for the top 10 national financial institutions (60 lines/items per page, couple hundred pages). After leafing thru it for several minutes, one of the things that became apparent was that (on the avg) regional financial institutions appeared to be more profitable than the national financial institutions. Unfortunately there was no analysis in the document, just a lot of slicing and dicing of the raw numbers.

After 10-15 more minutes of leafing thru the pages, I found a page that appeared to have something of interest. It gave the avg. cost (to the institution) for electronic handling of a financial transaction versus manual/paper handling of a financial transaction (nearly the same for both types of institutions). It then gave the avg. percentage of transactions that were electronic vis-a-vis manual/paper. Now, (nearly) all the numbers in the whole document appeared to have very little differentiation between regional and national institutional avgs. However, regional institutions had a significantly higher avg. percentage of electronic transactions than national institutions. If you quickly did the math in your head ... this percentage difference appeared to be the only significant differentiator accounting for regional institutions being more profitable than national institutions.
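The in-your-head math amounts to a blended per-transaction cost. The figures below are made up for illustration (the document's actual numbers aren't given here); only the shape of the arithmetic matters:

```python
# Illustrative only -- hypothetical per-transaction costs, not the
# actual figures from the document described above.
COST_ELECTRONIC = 0.10   # assumed avg cost of an electronic transaction
COST_PAPER = 1.00        # assumed avg cost of a manual/paper transaction

def blended_cost(pct_electronic):
    # weighted avg cost per transaction, given the electronic share
    return pct_electronic * COST_ELECTRONIC + (1 - pct_electronic) * COST_PAPER

regional = blended_cost(0.70)   # higher electronic share
national = blended_cost(0.50)
assert regional < national      # higher electronic share -> lower avg cost
```

With costs otherwise nearly identical across institutions, even a modest difference in the electronic share moves the blended cost enough to show up in profitability.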

A possible conjecture was that regional institutions spent somewhat more effort on the accounts they had ... and on promoting the use of electronic transactions. National institutions (potentially having some efficiencies with larger scale of operations) appeared to have been more indiscriminate in acquiring accounts, and the associated larger number of non-electronic transactions was sufficient to make them less profitable. It wasn't that the regional institutions were more personable and manually oriented that made them more profitable ... but just the opposite ... they had managed to convert a larger number of financial transactions from manual to electronic.



previous, next, index - home