List of Archived Posts

2001 Newsgroup Postings (01/01 - 01/19)

First video terminal?
4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
A new "Remember when?" period happening right now
First video terminal?
Sv: First video terminal?
Sv: First video terminal?
Disk drive behavior
IBM Model Numbers (was: First video terminal?)
finding object decks with multiple entry points
tweak ui & right click pop-up window
Review of Steve McConnell's AFTER THE GOLD RUSH
IBM 1142 reader/punch (Re: First video terminal?)
Small IBM shops
Review of Steve McConnell's AFTER THE GOLD RUSH
IBM Model Numbers (was: First video terminal?)
IBM Model Numbers (was: First video terminal?)
IBM 1142 reader/punch (Re: First video terminal?)
IBM 1142 reader/punch (Re: First video terminal?)
Disk caching and file systems. Disk history...people forget
Disk caching and file systems. Disk history...people forget
Disk caching and file systems. Disk history...people forget
Disk caching and file systems. Disk history...people forget
Disk caching and file systems. Disk history...people forget
stupid user stories
stupid user stories
stupid user stories
Disk caching and file systems. Disk history...people forget
VM/SP sites that allow free access?
Competitors to SABRE?
Review of Steve McConnell's AFTER THE GOLD RUSH
Review of Steve McConnell's AFTER THE GOLD RUSH
Review of Steve McConnell's AFTER THE GOLD RUSH
Competitors to SABRE?
Where do the filesystem and RAID system belong?
Competitors to SABRE?
Where do the filesystem and RAID system belong?
Where do the filesystem and RAID system belong?
Competitors to SABRE?
Competitors to SABRE?
Life as a programmer--1960, 1965?
Disk drive behavior
Where do the filesystem and RAID system belong?
Where do the filesystem and RAID system belong?
Life as a programmer--1960, 1965?
Options for Delivering Mainframe Reports to Outside Organizations
what is UART?
Small IBM shops
What is wrong with the Bell-LaPadula model? Orange Book
Competitors to SABRE?
Options for Delivering Mainframe Reports to Outside Organizations
What exactly is the status of the Common Criteria
Competitors to SABRE?
Review of Steve McConnell's AFTER THE GOLD RUSH
Disk drive behavior
FBA History Question (was: RE: What's the meaning of track overflow?)
FBA History Question (was: RE: What's the meaning of track overflow?)
3390 capacity theoretically
3390 capacity theoretically
Disk drive behavior
Review of Steve McConnell's AFTER THE GOLD RUSH
Text (was: Review of Steve McConnell's AFTER THE GOLD RUSH)
Where do the filesystem and RAID system belong?
California DMV
Are the L1 and L2 caches flushed on a page fault ?
Are the L1 and L2 caches flushed on a page fault ?
California DMV
what is interrupt mask register?
future trends in asymmetric cryptography
California DMV
what is interrupt mask register?
what is interrupt mask register?
what is interrupt mask register?
California DMV
how old are you guys

First video terminal?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: First video terminal?
Newsgroups: alt.folklore.computers
Date: Mon, 01 Jan 2001 19:06:01 GMT
bhk@dsl.co.uk (Brian {Hamilton Kelly}) writes:
I first saw a Selectric "terminal" at the Business Efficiency Exhibition at Olympia (West London, for the benefit of those that have never been to the Ideal Home Show) in 1964. IBM had one on their stand, which ran 24h/d for the week of the exhib at a rate of 31 ch/s (ordinarily, Selectrics ran at 15.5 ch/s). It suffered no ill effects at all.

about a year ago ... I finally got rid of the tabletop and paper feed for my 2741. The standard 2741 terminal was something like a frame with a flat surface, with the typewriter body buried in the middle of the top and only a couple inches of "table" on each side (not enuf to hold paper or anything else).

At CSC (4th floor, 545 tech sq, cambridge) we had a special 1/2in(?) laminated board with a cut-out the size of the typewriter ... it lay on top of the 2741 frame & wrapped around the typewriter body, with a couple inches on one side and 18"-24" on the other side and back ... providing enuf room to support a paper feed tray in the back and papers to one side of the keyboard (the board could be flipped so the space was either to the left or the right of the keyboard).

The paper-feed tray was like a wide in/out basket, but large enuf to hold standard printer fan-fold paper ... the bottom tray had room for about a 6in stack of paper, and the return paper would then feed onto the top tray. I started out using standard green-bar fan-fold paper ... nominally reversed so it printed on the white side.

I haven't had a real 2741 at home in nearly 25 years ... but somehow the table top and paper tray made it into the garage and sat around gathering dust ... managing not to get thrown away even with 3-4 moves over the period (although i still have an "apl" golfball printing element).

random 2741 refs:
http://www.multicians.org/mga.html#2741
http://www.multicians.org/terminals.html
http://www.garlic.com/~lynn/2000f.html#6
http://www.classiccmp.org/mail-archive/classiccmp/1998-05/0875.html
http://web.archive.org/web/20020806100512/http://www.classiccmp.org/mail-archive/classiccmp/1998-05/0875.html
http://www.deadmedia.org/notes/17/170.html
http://www.totse.com/en/hack/understanding_the_internet/excerpt2.html
http://web.archive.org/web/20020225064928/http://www.totse.com/en/hack/understanding_the_internet/excerpt2.html
http://www.geocities.com/siliconvalley/2260/gli_00.html
http://www.enteract.com/~enf/afc/wp
http://web.archive.org/web/20020621232833/http://www.enteract.com/~enf/afc/wp
http://www-mice.cs.ucl.ac.uk/multimedia/misc/tcp_ip/8705.mm.www/0110.html
http://www.unb.ca/web/transpo/mynet/mtx20.htm

some photos
http://www.cns.ufl.edu/info-services/history/nhp75A.jpg
http://www.keysan.com/ksu0675.htm
http://www.ibmtypewriters.com/reconselectric.htm

the following gives a description of the CCN computing facility at UCLA in '71 (360/91, i/o configuration, 10 dial 2741 interfaces, etc)
http://www.rfc-editor.org/rfc/rfc90.txt

note in the following "history of the internet in new brunswick", 110 baud was teletype speed and 134.? baud was the 2741 speed. the reference to "local" office connecting to "VM mainframe in Toronto" was a "HONE" clone.
http://personal.nbnet.nb.ca/laurie/internet.html

random hone refs:
http://www.garlic.com/~lynn/2000f.html#62
http://www.garlic.com/~lynn/2000.html#75
http://www.garlic.com/~lynn/2000f.html#30
http://www.garlic.com/~lynn/2000g.html#14
http://www.garlic.com/~lynn/2000g.html#27

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
Newsgroups: comp.arch
Date: Mon, 01 Jan 2001 21:40:50 GMT
Iain McClatchie writes:
Just to give some intuition for this stuff, R/S codes usually operate best with an uncorrected BER of around 10^-2 to 10^-3, if you're shooting for corrected BERs of around 10^-18. Cell phones work at much higher corrected BERs, and so their uncorrected BERs are more like 10^-1.

when we were playing with cyclotomics in the mid-80s on communication stuff ... we were looking at 15/16ths R/S giving a six orders of magnitude improvement on a channel's nominal 10^-9 BER for an effective 10^-15 BER. cyclotomics was one of the few companies delivering R/S products ... and I believe they mentioned playing a major role in R/S for the cdrom standard. shortly after that, cyclotomics was bought by Kodak.

they also had a nifty scheme for a communication channel with 15/16ths R/S: for uncorrected blocks, (selectively) retransmit the 1/2-rate Viterbi encoding instead of the original block (i.e. both the original block and the 1/2-rate Viterbi block could have uncorrected errors and still resolve the correct data). there was also some dynamic adaptive stuff: if the uncorrected block rate got too high ... switch to transmitting the 1/2-rate viterbi along with the original block (within the 15/16ths R/S channel).

the problem at the time was finding circuits that would do R/S at more than megabit speeds.
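For intuition on how a high-rate R/S code buys those orders of magnitude, the residual block-failure probability can be sketched numerically. This is an illustrative calculation, not cyclotomics' design: it assumes an RS(255,239) code over 8-bit symbols (rate 239/255, about 15/16, correcting up to 8 symbol errors per block) and independent bit errors.

```python
from math import comb

def block_fail_prob(n: int, k: int, p_bit: float, sym_bits: int = 8) -> float:
    """Probability that an (n,k) Reed-Solomon block is uncorrectable,
    assuming independent bit errors at rate p_bit."""
    p_sym = 1 - (1 - p_bit) ** sym_bits   # per-symbol error probability
    t = (n - k) // 2                      # correctable symbol errors
    # the block fails when more than t of the n symbols are in error
    return sum(comb(n, e) * p_sym**e * (1 - p_sym)**(n - e)
               for e in range(t + 1, n + 1))

# a rate ~15/16 code on a nominal 1e-9 BER channel
print(239 / 255)                        # ~0.937, about 15/16
print(block_fail_prob(255, 239, 1e-9))  # far below the 1e-15 target
```

Real channels have burst rather than independent errors, which is part of why interleaving and fallback schemes like the Viterbi trick above mattered in practice.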

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

A new "Remember when?" period happening right now

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A new "Remember when?" period happening right now
Newsgroups: alt.folklore.computers
Date: Mon, 01 Jan 2001 22:09:51 GMT
re:
http://www.garlic.com/~lynn/2000g.html#46

happened to run across apl\360 user's manual, ibm thomas j. watson research center.


the body of the manual was edited and composed on apl\360 using
variations of a text-processing package designed and implemented by
M. M. Zryl (IBM Research), and patterned after a Fortran system due to
A. P. Mullery (IBM Research).

August 1968

APL\360: User's Manual

A.D. Falkoff
K.E. Iverson

Acknowledgements

The APL language was first defined by K.E. Iverson in A Programming
Language (Wiley, 1962) and has been developed in collaboration with
A.D. Falkoff. The APL\360 Terminal System was designed with the
additional collaboration of L.M. Breed, who with M.D. Moore*1, also
designed the S/360 implementation. The system was programmed for S/360
by Breed, Moore, and R.H. Lathwell, with continuing assistance from
L.J. Woodrum*2, and contributions by C.H. Brenner, H.A. Driscoll*3,
and S.E. Krueger*4. The present implementation also benefitted from
experience with an earlier version, designed and programmed for the
IBM 7090 by Breed and P.S. Abrams*5.

The development of the system has also profited from ideas
contributed by many other users and colleagues, notably
E.E. McDonnell, who suggested the notation for the signum and circular
functions.

In the preparation of the present manual, the authors are indebted to
L.M. Breed for many discussions and suggestions; to R.H. Lathwell,
E.E. McDonnell, and J.G. Arnold*5 for critical reading of successive
drafts; and to Mrs. G.K. Sedlmayer and Miss Valerie Gilbert for
superior clerical assistance.

A special acknowledgement is due to John L. Lawrence, who provided
important support and encouragement during the early development of
APL implementation, and who pioneered the application of APL in
computer-related instructions.

1 I.P. Sharp Associates, Toronto, Canada
2 General Systems Architecture, IBM Corporation, Poughkeepsie, N.Y.
3 Science Research Associates, Chicago, Illinois
4 Computer Science Department, Stanford University
5 Industry Development, IBM Corp, White Plains, NY.

=======================

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

First video terminal?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: First video terminal?
Newsgroups: alt.folklore.computers
Date: Tue, 02 Jan 2001 02:23:55 GMT
"John Bowen" writes:
John, the Selectric was not too unreliable when used as a typewriter. But when you placed the early versions on a computer and really pounded on it, none could last a week without breaking the tilt and rotate tapes or having mechanical problems. You would have a $20 million 7030 Stretch or an ANFSQ-32 (SAGE II) sitting idle waiting for the Customer Engineer to make a repair that often lasted several hours. In the mid 1960s the version of the selectric used on big mainframes was a pretty poor choice. I was a glorified Customer Engineer (Senior Specialist) until 1971 and we all hated the machines compared to electronic equipment. I visited an FAA air traffic control site in 1986 and was surprised to see Selectrics still in use by them. They had a room full of broken ones. I purchased a small insurance company in 1983 and one of the first things I did was replace every Selectric with a PC and a matrix printer. That change and a little software quadrupled the productivity of most of the clerical people.

while the standard 360 console, the 1052-7, may have been somewhat more rugged than early selectrics ... it still had the golfball tilt/rotate. I know that CSC kept a spare 1052-7 for the 360/67 ... rather than repairing it in place, they would swap the hardware and repair the broken one offline.

It wasn't all because of the tilt/rotate ... there were some people that would slam their fist into the keyboard for one reason or another (especially if you had dedicated machine time over the weekend and had been up for 40+ hrs straight and there was a problem where the solution was difficult coming).

once cp/67 was up and running enuf for production, it was less of a problem since CP supported logging on as the operator from the machine console or any available 2741 (some security was added to limit the definition of "any").

The 1052-7 had another "nasty" feature that periodically caught people. There was a small finger that sensed whether there was incoming paper to the carriage. Frequently there were two boxes behind the 1052-7, one to feed the 1052-7 carriage and the other was for output after it had been printed. The input/feed was underneath the output/printed and frequently couldn't be seen.

More than once, the "finger" would sense that it had reached the end of the paper and signal intervention required to the CPU. However, there were no lights indicating the problem ... and OS/360, when it got an intervention required from the 1052-7, would ring the bell and stop all operations. There were numerous times when it took between 30 minutes and 3 hrs for somebody to realize that the reason the system had apparently died was that the 1052-7 had run out of paper (since the input feed wasn't easily visible).

In the early '80s, I wanted one of those Field Engineering briefcases, which were actually fancy toolboxes. The first couple times I requested one, it was rejected because I wasn't in IBM field engineering. Finally, I found a generic part number for the briefcase where it could be ordered w/o having to ask IBM. It still came with a number of specialized tools that I had seen used for repairing 2741s, selectrics, 1052s, etc.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Sv: First video terminal?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Sv: First video terminal?
Newsgroups: alt.folklore.computers
Date: Tue, 02 Jan 2001 17:00:07 GMT
"Nico de Jong" writes:
Hm, maybe I'm getting senile, but what was the 2701? Wasn't the 1442 a card-read punch? Should anyone have a more-or-less complete listing of the model numbers used by IBM?

2701, 2702, & 2703 were "telecommunication control units".

2702 supported 16 (maybe up to 32?) low-speed lines

2703 was similar to 2702 but supported up to 128 lines.

2701 supported only a few lines ... but had an RPQ that allowed supporting T1 (1.5mbit/2mbit) lines. It was the only IBM controller that supported T1 lines until the 90s.

Federal Systems Division got a Zirpel T1 RPQ card for the S/1 in the mid-80s that saw limited deployment ... and you could get a 3rd party card that directly attached an S/1 to a channel. The internal network had a number of these. However, in the mid to late 80s it was becoming difficult to acquire S/1 boxes to house Zirpel cards (although possibly not as hard as it was to acquire 2701 boxes).

The only other choice was HYPERChannel: starting in the early '80s, you could get an A220 adapter for channel attach and 710/715/720 adapters for driving T1/T2 links. My wife and I ran a backbone attached to the internal network with this technology. It was also possible to connect a HYPERChannel LAN to an S/1 via an A400 adapter and get T1 connectivity that way (with or w/o the S/1 having a channel attach card).

There were plans in the late '80s for the 8232 (i.e. a PC/AT with a channel attach card and LAN cards ... providing a mainframe gateway to T/R and enet LANs) that saw development of a PC/AT T1 card, but I don't know of any that were actually sold to customers. This was about the same time as the annual Super Computer conference held in Austin where T3 HYPERChannel adapters were being demonstrated.

The 3705/3725 communication controller mainstay (during the 70s & 80s) didn't support full-speed T1 ... although La Gaude may have had one or two test boxes in the late '80s. The 3705/3725 market saw a number of customers using "fat pipe" support to gang 2, 3, and 4 56kbit links into a single simulated trunk; but there was no evidence of customers with more than 4 56kbit links in fat-pipe configurations. One of the issues that was possibly overlooked was that T1 tariffs were typically about the same as 5-6 56kbit links. At the time, when there were no 3705/3725 mainframe customers with more than 4 56kbit lines in a fat-pipe configuration, there were 200 easily identified customers using mainframe HYPERChannel T1 support.
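The tariff point reduces to simple arithmetic. The link speeds below are the standard rates; the "T1 costs about the same as 5-6 56kbit links" ratio is taken from the text, not from an actual tariff schedule.

```python
T1_KBPS = 1544           # T1 line rate, ~1.5 mbit/sec
LINK_KBPS = 56           # individual low-speed link

def fat_pipe_kbps(n_links: int) -> int:
    """Aggregate bandwidth of n ganged 56kbit links in a fat-pipe trunk."""
    return n_links * LINK_KBPS

# largest observed fat-pipe config (4 links) vs a full T1
print(fat_pipe_kbps(4))               # 224 kbit/sec
print(T1_KBPS / fat_pipe_kbps(4))     # a T1 carries ~6.9x the bandwidth
# so at a tariff of roughly 5-6 links, the T1 delivers far more
# capacity per dollar once more than 4 ganged links are needed
```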

While NSFNET1 in the late '80s called for full-speed T1 trunks, it was actually done with multiple PC/RTs, each with a LAN card and a 440kbit card (although the RT had an AT bus, and the PC/AT T1 card targeted for the 8232 could have fit on the RT bus) ... with multiple 440kbit circuits multiplexed by an IDNX box through T1 trunks provided by MCI.

random refs:
http://www.garlic.com/~lynn/2000.html#77
http://www.garlic.com/~lynn/93.html#15
http://www.garlic.com/~lynn/93.html#16
http://www.garlic.com/~lynn/93.html#17
http://www.garlic.com/~lynn/94.html#23
http://www.garlic.com/~lynn/96.html#9
http://www.garlic.com/~lynn/96.html#27
http://www.garlic.com/~lynn/96.html#30
http://www.garlic.com/~lynn/99.html#36
http://www.garlic.com/~lynn/99.html#63
http://www.garlic.com/~lynn/99.html#66
http://www.garlic.com/~lynn/99.html#67
http://www.garlic.com/~lynn/99.html#70
http://www.garlic.com/~lynn/2000b.html#38
http://www.garlic.com/~lynn/2000b.html#66
http://www.garlic.com/~lynn/2000c.html#36
http://www.garlic.com/~lynn/2000c.html#37
http://www.garlic.com/~lynn/2000c.html#65
http://www.garlic.com/~lynn/2000f.html#6
http://www.garlic.com/~lynn/2000f.html#31

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Sv: First video terminal?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Sv: First video terminal?
Newsgroups: alt.folklore.computers
Date: Tue, 02 Jan 2001 23:38:46 GMT
jsaum@world.std.com (Jim Saum) writes:
channel-attach RPQ, non-IBM channel-attach interfaces from several vendors (Interdata, DEC, Austex?), or building your own channel interface to the IBM spec.

there is some documentation that the board I worked on as an undergraduate was the first such non-IBM connection to an IBM channel. It went into an Interdata/3 that we programmed to be a replacement for the 2702, and it would actually support both dynamic speed and terminal recognition. It later evolved into an Interdata/4 with multiple Interdata/3s handling the line-scanner functions. Perkin/Elmer bought out Interdata and the rights to what I worked on as an undergraduate.

I ran across later versions of the Perkin/Elmer box still in production use as late as 1996 (& some conjecture that the wire-wrap channel attach board had possibly not changed since the original implementation).

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Disk drive behavior

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Disk drive behavior
Newsgroups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.arch
Date: Wed, 03 Jan 2001 00:39:47 GMT
"Bill Todd" writes:
There doesn't seem to be much excuse for such behavior, but that doesn't mean it's impossible. Could anyone in a position to verify or deny the assertion please either set my mind at ease or validate my paranoia on this matter? And while you're at it, if you happen to know details about the appearance of disconnectable and overlapped operation in IDE drives and what host controller hardware changes might be required to support them (it's obvious that host drivers need changes), I'd appreciate edification there as well.

the original discussion was based on a situation where the disk logic was retrieving data from the memory bus while in the process of writing. during a power failure, there might be sufficient power to finish the disk write operation but not sufficient to drive memory. the disk logic, not recognizing the situation, continues to believe it is getting valid data from the memory bus, which it uses to finish the sector/block write and then writes a correct/updated ECC. the actual data it was getting might be all zeros or garbage/undefined.

i know many unix vendors "qualify" drives (mostly SCSI) as having fully transferred data to local memory before writing, guaranteeing that a sector/block write either doesn't occur or occurs completely & correctly (akin/similar to TPC ACID properties) regardless of power failure scenarios.
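One defense against this failure mode (a sector whose drive-level ECC is valid but whose contents are garbage) is an end-to-end checksum stored inside each logical block, computed by the host while the data is still known-good. A minimal sketch; the 512-byte sector size and 8-byte tag are illustrative choices, not any vendor's format:

```python
import hashlib

SECTOR = 512
TAG = 8                  # checksum bytes stored inside each logical block

def encode_block(payload: bytes) -> bytes:
    """Append a truncated SHA-256 tag so torn/garbage writes are detectable."""
    assert len(payload) == SECTOR - TAG
    return payload + hashlib.sha256(payload).digest()[:TAG]

def block_is_valid(block: bytes) -> bool:
    """A block filled from a dying memory bus will almost surely fail this,
    even though the drive's own ECC (computed over the garbage) is valid."""
    payload, tag = block[:-TAG], block[-TAG:]
    return hashlib.sha256(payload).digest()[:TAG] == tag
```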

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

IBM Model Numbers (was: First video terminal?)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM Model Numbers (was: First video terminal?)
Newsgroups: alt.folklore.computers
Date: Wed, 03 Jan 2001 18:11:27 GMT
Lars Poulsen writes:
I'm working on it ... capturing items as they are mentioned here.
http://www.cmc.com/lars/engineer/comphist/ibm_nos.htm
http://web.archive.org/web/20010609034043/http://www.cmc.com/lars/engineer/comphist/ibm_nos.htm

I would be delighted if someone could mark this up with year of introduction and retirement for each item. -- / Lars Poulsen - http://www.cmc.com/lars - lars@cmc.com 125 South Ontare Road, Santa Barbara, CA 93105 - +1-805-569-5277


another location
http://web.archive.org/web/20010218005108/http://www.isham-research.freeserve.co.uk/chrono.txt

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

finding object decks with multiple entry points

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: finding object decks with multiple entry points
Newsgroups: bit.listserv.ibm-main
Date: Wed, 03 Jan 2001 18:44:48 GMT
Jay_Carr@AMWAY.COM (H Jay Carr) writes:
Object deck formats are discussed in App G of DFSMS/MVS Program Management, SC26-4916. I am looking at -03 level, page 341. The index is not helpful.

From CMS Program Logic Manual (360D-05-2-005, Oct. 1970), pg. 292

Name - ESD Card Format

Col             Meaning
1               12-2-9 punch
2-4             ESD
8-10            blank
11-12           Variable field count
13-14           Blank
15-16           ESDID for first SD, XD, CM, PC, or ER
17-64           variable field, repeated 1 to 3 times
  17-24         name
  25            ESD type code
  26-28         Address
  29            Alignment for XD, otherwise blank
  30-32         Length, LDID, or blank

... pg. 275 ESD Type 0 processing

This routine makes Reference Table and ESID Table entries for the card-specified control section.

This routine first determines whether a Reference Table (REFTBL) entry has already been established for the card-specified control section. To do this, the routine links to the REFTBL search routine. The ESD Type 0 Card Routine's subsequent operation depends on whether there already is a REFTBL entry for the control section. If there is such an entry, processing continues with operation 4, below. If there is not, the REFTBL search routine places the name of this control section in REFTBL, and processing continues with operation 2, below.

.... pg. 276 ESD type 1

This routine establishes a Reference Table entry for the entry point specified on the ESD card, unless such an entry already exists.

... ESD Type 2
This routine makes the proper ESID table entry for the card-specified external name and places that name's assigned address (ORG2) in the reference table relocation factor for that name.

.. pg. 277 ESD Type 4

This routine makes Reference Table and ESIDTAB entries for private code CSECT

... pg. 278 ESD Type 5 & 6

This routine makes Reference Table and ESIDTAB entries for common and pseudo-register ESDs
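The column layout quoted above can be sketched as a parser. This is illustrative only: real object decks are EBCDIC records with binary count/address fields, while this sketch assumes an already-decoded 80-byte record with ASCII names and treats each 16-byte variable-field item as name(8), type(1), address(3), alignment(1), length(3), per the manual excerpt (1-indexed card columns become 0-indexed offsets).

```python
from typing import Dict, List

def parse_esd_items(card: bytes) -> List[Dict]:
    """Parse the variable field of an ESD card (cols 17 onward) into
    up to three 16-byte items."""
    assert card[1:4] == b"ESD", "not an ESD card"
    items = []
    for off in range(16, 64, 16):        # 0-indexed offsets for cols 17..64
        item = card[off:off + 16]
        if not item.strip():             # all-blank slot: no more items
            break
        items.append({
            "name":    item[0:8].decode("ascii", "replace").rstrip(),
            "type":    item[8],                          # ESD type code
            "address": int.from_bytes(item[9:12], "big"),
            "length":  int.from_bytes(item[13:16], "big"),
        })
    return items
```

A loader's ESD pass would walk these items and, as the excerpt describes, make Reference Table and ESID Table entries keyed on each name.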

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

tweak ui & right click pop-up window

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: tweak ui & right click pop-up window
Newsgroups: netscape.public.mozilla.general
Date: Wed, 03 Jan 2001 20:02:06 GMT
On all releases of netscape 4.x (and earlier), I can right-click on a url, get a pop-up window, move to "open in new window" selection and click (opening the URL in a new window).

On mozilla M15, M16, M17, M18, 0.6, etc. and netscape 6.x, if I right-click on a URL, i get the pop-up window, but if i move the mouse at all the pop-up window immediately disappears (no ability to select "open in new window").

I am running tweak ui with mouse settings set to "follow" ... i.e. x-mouse mode ... on nt4sp6 ... which may or may not have anything to do with it.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Review of Steve McConnell's AFTER THE GOLD RUSH

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Review of Steve McConnell's AFTER THE GOLD RUSH
Newsgroups: alt.folklore.computers
Date: Thu, 04 Jan 2001 22:37:58 GMT
seebs@plethora.net (Peter Seebach) writes:
I can't imagine them, but I've seen them discussed.

Fair enough; there must exist "anti-math" programmers.


i was once explaining fair share scheduling design & implementation to a number of operating system developers ... and partway thru the talk I got a comment ... "you must have been a math major in college". The funny thing was that i (almost) didn't have to use anything past high school 1st-year algebra.

much of the time operating system developers think in terms of state machine logic ... bits being on or off ... and rarely need to resort to math ... maybe increment or decrement and sometimes compare against a value. frequently multiplication & division aren't even necessary.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

IBM 1142 reader/punch (Re: First video terminal?)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 1142 reader/punch (Re: First video terminal?)
Newsgroups: alt.folklore.computers
Date: Fri, 05 Jan 2001 14:23:12 GMT
Nick Spalding writes:
Wasn't the 2540 available as soon as the 360 was?

the university had a 1401 with 2540 & 1403n1 that was used as a unit record front end for the 709 (card->tape & tape->printer/punch). it got a 360/30 and for a while it would move the 2540/1403n1 back & forth between the 360/30 controller and the 1401.

the university then started running the 360/30 in 1401 emulator mode ... booting from "MPIO" card deck. My first programming job (summer '66) was to duplicate the 1401 "MPIO" function in 360 assembler. I got to design my own memory manager, task manager, device drivers, interrupt processes, etc.

I don't know how long they had the 2540/1403n1 prior to getting the 360/30 ... my first exposure was about the time the 360/30 came in and the 1401 was still there and they would periodically move the unit record back and forth between the 1401 & 360/30.

I would get dedicated machine time on the 360/30 ... typically weekends when I could get the machine for 48hrs straight. One of the first things I learned was to not start until i had cleaned the tape drives and taken the 2540 apart and cleaned everything. If I was diligent about cleaning every 8hrs or so (of use), things ran a lot smoother.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Small IBM shops

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Small IBM shops
Newsgroups: bit.listserv.ibm-main
Date: Fri, 05 Jan 2001 15:03:21 GMT
panselm@EARTHLINK.NET (Phil Anselm) writes:
I've been watching this thread. IMHO, one of the main advantages of the ZOS is the I/O intensity that the hardware will support. If you only have one or two large databases, then there is nothing wrong with the INTEL with a database server on it. If your databases are very large, or if you have multiple large databases that have to be updated concurrently, the INTEL architecture just can't support the I/O rates necessary. The SCSI interface is fast and cheap, but consider this scenario. You have ten large databases (>6G), some with secondary indices. You have many incoming transactions, which update many of the databases concurrently. All those updates must use the same SCSI I/O interrupts on the INTEL motherboard. Of course, you could put in a second or third server, and move some databases, but then your network traffic goes through the ceiling, as the servers must agree on commit points for transactions. While the network-based commits are taking place, the database records are locked (commit pending), so other users may experience degradation.

one of the things that ibm hursley did for the 9333 was to take the scsi command set/protocol and drive it as asynchronous packets over serial copper. over the years, many of the scsi issues like command decode overhead and synchronous protocols ... are many of the same issues that have periodically cropped up in mainframe controllers ... like the 3880 and the transition to escon. the 9333 protocol was instantiated as a standard as SSA. 9333/SSA numbers looked a lot more linear with increases in concurrent activity compared to SCSI ... although there has since been a lot of work on high-end fast SCSI controllers to minimize command processing latency and possibly some of the escon-type tricks to mask synchronous latency.

random ref:
http://www.garlic.com/~lynn/95.html#13

the interesting thing is that a lot of the CKD stuff now is being emulated by controllers using drives that are effectively the same hardware as used in high-end scsi drives.

however, by definition a small shop would tend to have smaller requirements ... and if it did have database and thruput requirements that would saturate a standard Intel motherboard ... then it wouldn't likely be considered a small shop (entry-level hardware with less processing & I/O capacity for small shops is not likely, by definition, to also be a high-end supercomputer capable of sustaining terabytes/sec thruput).

on the other hand ... numa/q has some interesting scaling characteristics both in terms of i/o and processing power (even if only considered as an entry-level enterprise server).

misc. ref:
http://www.sequent.com/hardware/numaq2000/

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Review of Steve McConnell's AFTER THE GOLD RUSH

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Review of Steve McConnell's AFTER THE GOLD RUSH
Newsgroups: alt.folklore.computers
Date: Fri, 05 Jan 2001 15:59:10 GMT
jmfbahciv writes:
Every operating system developer that I knew had to be able to isolate a problem, analyze the cause(s) and then be able to modify the code so it didn't break anything else. It's the approach to problem solutions that are similar.

in the case of fair share scheduling ... it changed the scheduling paradigm from a very deterministic state machine that ran at the microscopic microsecond-to-microsecond level ... to a statistical, probabilistic solution that ran at the tens-of-milliseconds level.

the paradigm change also cut the scheduling function pathlength by two orders of magnitude and reduced aggregate cpu consumption by 20% in high activity environments and delivered a significantly more uniform response for interactive and batch workloads (and in some cases, reduction of a factor of 10 in the average trivial interactive response while still running the processor at 100% saturation).

what i found was that in some cases i could get between one and two orders of magnitude performance improvement in pathlength by careful state analysis and code re-ordering (low level device drivers, interrupt handlers, etc).

however, for resource scheduling and various algorithms (page replacement, disk arm motion, etc), I found that it was useful to do a paradigm shift and solve the problem using a statistical, probabilistic approach. Sometimes I found that individuals used to deterministic state machine solutions had difficulty making the paradigm switch and dealing with not quite so deterministic, probabilistic solutions (especially when the paradigms were interwoven threads in the same module and set of instructions).
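
The statistical flavor of scheduling described above can be sketched in a few lines. This is a hypothetical reconstruction, not the actual CP/67 code: instead of recomputing priorities at every event, resource consumption is sampled at tens-of-millisecond intervals and each task gets a dispatch deadline pushed out in proportion to how far its consumption runs ahead of its fair share.

```python
import heapq

def dispatch_deadline(now, consumed, fair_share, interval=0.05):
    # consumed/fair_share > 1 means the task is over its share, so its
    # next dispatch deadline is pushed further into the future
    return now + interval * (consumed / fair_share)

class Scheduler:
    def __init__(self):
        self.queue = []          # min-heap ordered by dispatch deadline

    def ready(self, now, task, consumed, fair_share):
        heapq.heappush(self.queue,
                       (dispatch_deadline(now, consumed, fair_share), task))

    def next_task(self):
        # earliest deadline first; no per-event priority recomputation
        return heapq.heappop(self.queue)[1] if self.queue else None

s = Scheduler()
s.ready(0.0, "interactive", consumed=0.01, fair_share=0.25)  # light user
s.ready(0.0, "batch", consumed=0.30, fair_share=0.25)        # heavy user
# the light user's deadline lands first, so it is dispatched first
```

The point of the sketch is that fairness emerges statistically over many sampling intervals rather than being enforced deterministically at each dispatch.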

Simple example was when I initially added multiprocessor support to a kernel. i arranged it so that the dispatching function fell out pretty much as a probabilistic event, which bothered a lot of the more traditional operating system implementers ... although it had the characteristic that the thruput increase was almost exactly proportional to the increase in the available hardware, i.e. a uniprocessor ran at "1"; going from a single processor to a dual processor decreased machine cycle time by 10% to handle hardware cross-cache consistency protocols ... resulting in a two-processor configuration being rated at 1.8 times a uniprocessor. Average overall performance improvement was 1.8, although there were some interesting situations where the thruput increase was 2.5 times a single processor (because of some interesting probabilistic side-effects of maintaining cache locality).
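
The 1.8 rating above is just the cycle-time slowdown applied per processor; a trivial arithmetic sketch (the function name is my own, purely illustrative):

```python
def mp_rating(n_cpus, cycle_slowdown=0.10):
    # each processor runs at (1 - slowdown) of uniprocessor speed to
    # handle the cross-cache consistency protocols
    return n_cpus * (1.0 - cycle_slowdown)

print(mp_rating(2))   # two processors at 90% each, rated ~1.8
```

Observed throughput above that line (like the 2.5x case) came from cache-locality effects, not from the raw hardware rating.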

In any case, the code was eventually enhanced to be deterministic, which required a lot of cross-processor signaling and lock spins. The result was that cross-processor signaling and lock-spin overhead went from nearly zero percent of total cpu utilization to over 10%, and most of the interesting cache locality side effects disappeared, but heh, it was very deterministic.

problem determination skills for deterministic solutions work well when dealing with strictly deterministic sequences ... but frequently fail when the events aren't a strictly deterministic sequence ... for instance a program failure in a low-level device driver vis-a-vis poor system performance.

another simple example: there were numerous instances where somebody more trained in traditional deterministic solutions would modify a module with some of my probabilistic threads ... and while the code wouldn't fail, overall system performance would get worse for apparently inexplicable reasons.

Another example: I believe there was a paper in the late '60s describing working set and page management algorithms (possibly slightly related to dartmouth?). At about the same time, I did something that was significantly less deterministic and much more probabilistic. This caused something of a festouche in some quarters (even close to 15 years later). In any case, there was an implementation as described in the research literature using the same operating system and same hardware ... and then there was my solution. I was able to demonstrate a better level of interactive performance and workload thruput ... using the same workload on the same operating system and the same processor hardware ... but with more than twice as many users (running the workload) and 1/3 less real storage.

random refs:
http://www.garlic.com/~lynn/93.html#0
http://www.garlic.com/~lynn/93.html#4
http://www.garlic.com/~lynn/93.html#5

on the other end ... I once helped out the disk engineering & development operation. they had an environment with disks & controllers under development and test that frequently weren't working to spec. A traditional mainframe operating system in that environment tended to crash & burn within 10-15 minutes. I redid the I/O supervisor so it was absolutely bullet-proof and wouldn't crash ... so that instead of testing a single unit at a time, they could be testing 6-12 different units concurrently on the same processor. This pretty much involved a very deterministic state machine solution (although there were a few probabilistic things having to do with hot interrupts, which needed to be fenced/masked).

random refs:
http://www.garlic.com/~lynn/98.html#57
http://www.garlic.com/~lynn/99.html#31

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

IBM Model Numbers (was: First video terminal?)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM Model Numbers (was: First video terminal?)
Newsgroups: alt.folklore.computers
Date: Fri, 05 Jan 2001 21:58:58 GMT
jsaum@world.std.com (Jim Saum) writes:
In article <3a55e304_1@news.cybertours.com>, "John Coelho" wrote:
Agreed - but why do I remember 12-2-9 as significant? "REP" cards in object decks?

The object decks produced by assemblers and compilers in all the standard 360 OSes had (and still have) X'02' (hex 02) in column 1, followed by a 3-character EBCDIC type descriptor (TXT, ESD, RLD, SYM, END) in columns 2-4. The EBCDIC card code for X'02' is 12-2-9.

- Jim Saum


attached is a recent response i posted in bit.listserv.ibm-main to a question asking about object deck formats (specifically about ESD and entries) ... other 12-2-9 object deck cards:

SLC
ICS
TXT
REP
RLD
END
LDT

from Control Program-67/Cambridge Monitor System User's Guide, GH20-0859-0, pgs 521-523


SLC definition

col     meaning
1       12-2-9 punch ... identifies card as acceptable to the loader
2-4     SLC
5-6     blank
7-12    hexadecimal address to be added to the value of the symbol, if
        any, in columns 17-22. must be right-justified in these columns
        with unused leading columns filled with zeros
13-16   blank
17-22   symbolic name whose assigned location is used by the loader.
        must be left-justified in these columns. if blank, the address
        in the absolute field is used
23      blank
24-72   may be used for comments or left blank
73-80   not used by the loader; the user may leave these blank or insert
        program identification for his own convenience

ICS

col     meaning
1       load card identification, 12-2-9 punch
2-4     ICS
5-16    blank
17-22   control section name, left-justified
23      blank
24      , (comma)
25-28   hexadecimal length in bytes of the control section
29      blank
30-72   may be used for comments
73-80   not used by the loader

REP

col     meaning
1       load card identification, 12-2-9 punch
2-4     REP
5-6     blank
7-12    hexadecimal starting address of the area to be replaced as assigned
        by the assembler. must be right-justified with leading zeros
13-14   blank
15-16   ESID -- external symbol identification, the hexadecimal number assigned
        to the control section in which the replacement is to be made. the
        LISTING file produced by the compiler indicates this number
17-70   a maximum of 11 four-digit hexadecimal fields, separated by commas,
        each replacing one previously loaded halfword
71-72   blank
73-80   not used by the loader
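
A minimal sketch of classifying loader cards per the layouts quoted above: column 1 holds X'02' (the 12-2-9 punch) and columns 2-4 the EBCDIC type descriptor. This is my own illustration, not code from any of the manuals; it decodes EBCDIC via Python's cp037 codec.

```python
# 12-2-9 loader card types mentioned above, plus ESD/SYM from the OS list
CARD_TYPES = {"SLC", "ICS", "TXT", "REP", "RLD", "END", "LDT",
              "ESD", "SYM"}

def card_type(card):
    # a loader card is 80 bytes with X'02' (12-2-9 punch) in column 1
    if len(card) != 80 or card[0] != 0x02:
        return None
    # columns 2-4 carry the 3-character EBCDIC type descriptor
    kind = card[1:4].decode("cp037").strip()
    return kind if kind in CARD_TYPES else None

# a REP card: X'02', 'REP' in EBCDIC, remaining columns blank (X'40')
rep = b"\x02" + "REP".encode("cp037") + b"\x40" * 76
print(card_type(rep))
```

A card full of EBCDIC blanks (no 12-2-9 punch) would classify as None, i.e. not acceptable to the loader.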

--------- bit.listserv.ibm-main response ---------------

From CMS Program Logic Manual (360D-05-2-005, Oct. 1970), pg. 292

Name - ESD Card Format

Col             Meaning
1               12-2-9 punch
2-4             ESD
5-10            blank
11-12           variable field count
13-14           blank
15-16           ESDID for first SD, XD, CM, PC, or ER
17-64           variable field, repeated 1 to 3 times
        17-24           name
        25              ESD type code
        26-28           address
        29              alignment for XD, otherwise blank
        30-32           length, LDID, or blank

... pg. 275 ESD Type 0 processing

This routine makes Reference Table and ESID Table entries for the card-specified control section.

This routine first determines whether a Reference Table (REFTBL) entry has already been established for the card-specified control section. To do this, the routine links to the REFTBL search routine. The ESD Type 0 Card Routine's subsequent operation depends on whether there already is a REFTBL entry for the control section. If there is such an entry, processing continues with operation 4, below. If there is not, the REFTBL search routine places the name of this control section in REFTBL, and processing continues with operation 2, below.

.... pg. 276 ESD type 1

This routine establishes a Reference Table entry for the entry point specified on the ESD card, unless such an entry already exists.

... ESD Type 2

This routine makes the proper ESID table entry for the card-specified external name and places that name's assigned address (ORG2) in the reference table relocation factor for that name.

.. pg. 277 ESD Type 4

This routine makes Reference Table and ESIDTAB entries for a private code CSECT.

... pg. 278 ESD Type 5 & 6

This routine makes Reference Table and ESIDTAB entries for common and pseudo-register ESD's.
--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

IBM Model Numbers (was: First video terminal?)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM Model Numbers (was: First video terminal?)
Newsgroups: alt.folklore.computers
Date: Sat, 06 Jan 2001 01:43:10 GMT
Anne & Lynn Wheeler writes:
from Control Program-67/Cambridge Monitor System User's Guide, GH20-0859-0, pgs 521-523

Since I have it out (control program-67/cambridge monitor system user's guide, gh20-0859-0, pgs 17-26). I haven't reproduced the associated keyboard layout figures ... however, a modern PC keyboard looks pretty similar to a standard Selectric 2741 configuration (as opposed to a PTTC/EBCD 2741 configuration). Biggest difference is the < & > above the comma/period and the bracket/brace keys to the right of the P key.

from terminal characteristics
2741 Characteristics

The IBM 2741 Communication Terminal consists of an IBM Selectric typewriter mounted on a typewriter stand. The stand includes the electronic controls needed for communications, a cabinet for mounting a data-phone, a rack for mounting a roll of paper, and a working surface. For use with the CP/CMS system, the 2741 should be equipped with the Transmit Interrupt and the Receive Interrupt features.

The 2741 has two modes of operation: communicate mode and local mode. The mode of the terminal is controlled by the terminal mode switch, which is located on the left side of the typewriter stand. When in local mode, the terminal is disconnected from the computer. It then functions as a typewriter only, and no information is transmitted or received. When in communicate mode, the terminal may be connected to the communication line to the computer. The power switch on the right side of the keyboard must be set to ON before the terminal can operate in either communicate or local mode. The procedure for establishing connections with the computer and the terminal switch settings which should be used are discussed below under "2741 Initiation Procedures".

Either of two 2741 keyboard configurations may be used in accessing the CP/CMS system. These are the PTTC/EBCD configurations (shown in Figure 1) and the standard Selectric configuration (shown in Figure 2). On either keyboard, the alphameric and special character keys, the space bar, power switch, the SHIFT, LOCK, TAB, tab CLR SET, and MAR REL keys all operate in the same way as standard Selectric typewriter keys.

On most 2741 terminals, the space bar, backspace, and hyphen/underline keys have the typamatic feature. If one of these keys is operated normally, the corresponding function occurs only once. If the key is pressed and held, the function is repeated until the key is released. The RETURN and ATTN keys have special significance on the 2741 keyboard.

The RETURN key is hit to signal the termination of each input line. When RETURN is hit, control is transferred to the system, and the keyboard is locked until the system is ready to accept another input line.

The ATTN key is used to generate an attention interrupt. It may be hit at any time (since it is never locked out) and causes the keyboard to be unlocked to accept an input line. Refer to "Attention Interrupt" for a discussion of the transfer between environments that occurs when the attention interrupt is generated.

The 2741 paper controls (such as the paper release lever, line-space lever, impression control lever, etc.) are identical to the corresponding controls on an IBM Selectric typewriter and operate accordingly.

An invalid output character (one which cannot be typed by the terminal and for which no keyboard function, such as tab or carriage return, exists) appears in terminal output as a space. For a further discussion of 2741 characteristics refer to the 2741 component description manual (GA24-3415).

1050 Characteristics

The IBM 1050 terminal is composed of the 1051 Control Unit and a 1052 Printer-Keyboard. The 1051 Control Unit includes the power supplies, printer code translator, data channel, and control circuitry needed for 1050 operation. To be used with the CP/CMS system, the 1051 should be equipped with the Time-Out Suppression and the Transmit Interrupt and Receive Interrupt special features. The 1052 keyboard is similar in appearance to the standard IBM typewriter keyboard. Figures 3 and 4 illustrate the 1050 switch panel and keyboard. The alphameric and special character keys, the space bar, LOCK, SHIFT, and TAB keys, and the paper controls operate in the same way as those on a standard IBM typewriter. The following keys are of special significance on the 1052 keyboard:

RETURN. If the Automatic EOB special feature is included on the terminal being used, and if the EOB switch on the switch panel is set to AUTO, the RETURN key may be used to terminate an input line. Otherwise (if the Automatic EOB special feature is not available on the terminal being used, or if EOB on the switch panel is set to MANUAL), the character transmitted when RETURN is hit is considered part of the input line.

ALTN CODING. This key, when pressed and held while one of the other keys is hit, originates a single character code such as restore, bypass, reader stop, end of block (EOB), end of address (EOA), prefix, end of transaction (EOT), or cancel. Note that input lines from 1050 terminals not equipped with the Automatic EOB special feature must be terminated by pressing the ALTN CODING key and holding it down while hitting the 5 key. This procedure causes a carriage return at the terminal.

RESET LINE. Hitting this key (at the left side of the keyboard) causes an attention interrupt (provided the terminal is equipped with the Transmit Interrupt special feature). The RESET LINE key may be hit at any time, since it is never locked out, and causes the keyboard to accept an input line. Refer to "Attention Interrupt" for a discussion of the transfer between environments which occurs when an attention interrupt is generated.

RESEND. This key and its associated light (both located on the right of the keyboard) are used during block checking. The light comes on when an end-of-block character is sent by the terminal; it is turned off when receipt is acknowledged by the system. If the light remains on, indicating an error, RESEND may be hit to turn off the light, and the previous input line may then be reentered. While the light is on, no input is accepted from the keyboard.

LINE FEED. This key causes the paper to move up one or two lines, according to the setting of the line space lever, without moving the typing element.

DATA CHECK. This key should be hit to turn off the associated light (to its left), which comes on whenever a longitudinal or vertical redundancy checking error occurs, or when power is turned on at the terminal.

1050 Initiation Procedures

The procedures for preparing the 1050 for use are described below. When these steps have been performed, log in.

1. After making sure that the terminal is plugged in, set the panel switches (shown in Figure 3) as follows:

Switch          Setting

SYSTEM          ATTEND
MASTER          OFF
PRINTER1        SEND REC
PRINTER2        REC
KEYBOARD        SEND
READER1         OFF
READER2         OFF
PUNCH1          OFF
PUNCH2          OFF
STOP CODE       OFF
AUTO FILL       OFF
PUNCH           NORMAL
SYSTEM          PROGRAM
EOB             see below
SYSTEM          (up)
TEST            OFF
SINGLE CY       OFF
RDR STOP        OFF

If an EOB switch appears on the terminal, it may be set to either AUTO or MANUAL. If it is set to AUTO, the RETURN key may be used to terminate an input line. If the EOB switch is set to MANUAL, or if it does not appear on the terminal, all input lines must be terminated by hitting the 5 key while the ALTN CODING key is pressed and held down.

TYPE 33 TELETYPE Characteristics

The KSR (Keyboard Send/Receive) model of the Teletype Type 33 terminal is supported by CP-67. The Type 33 KSR includes a typewriter keyboard, a control panel, a data-phone, control circuitry for the teletype, and roll paper. The Type 33 KSR keyboard contains all standard characters in the conventional arrangement, as well as a number of special symbols. All alphabetic characters are capitals. The SHIFT key is used only for typing the "uppershift" special characters. The CTRL key (Control key) is used in conjunction with other keys to perform special functions. Neither the SHIFT nor CTRL key is self-locking; each must be depressed when used.

In addition to the standard keys, the keyboard contains several non-printing keys with special functions. These function keys are as follows:

LINE FEED generates a line-feed character and moves the paper up one line without moving the printing mechanism. When the terminal is used offline, the LINE FEED key should be depressed after each line of typing to avoid overprinting of the next line.

RETURN is the carriage return key and signifies the physical end of the input line.

REPT repeats the action of any key depressed.

BREAK generates an attention interrupt and interrupts program execution. After breaking program execution, the BRK-RLS button must be depressed to unlock the keyboard.

CNTRL is used in conjunction with other keys to perform special functions. The tab character (Control-I) acts like the tab key on the 2741. Control-H acts like the backspace key on the 2741. Control-Q and Control-E produce an attention interrupt like BREAK if the teletype is in input mode. Control-S (X-OFF) and Control-M act as RETURN. Control-D (EOT) should not be used, as it may disconnect the terminal. Control-G (bell), Control-R (tape), Control-T (tape), and all other Control characters are legitimate characters even though they have no equivalent on the 2741.

HERE IS and RUBOUT are ignored by CP-67.

ESC (ALT MODE on some units) is not used by CP-67, but generates a legal character.

The control panel to the right of the keyboard contains six buttons below the telephone dial, and two lights, a button, and the NORMAL-RESTORE knob above the dial. The buttons and lights are as follows:

ORIG (ORIGINATE). This button obtains a dial tone before dialing. The volume control on the loudspeaker (under the keyboard shelf to the right) should be turned up so that the dial tone is audible. After connection with the computer has been made, the volume can be lowered.

CLR (CLEAR). This button, when depressed, turns off the typewriter.

ANS (ANSWER). This button is not used by CP-67.

TST (TEST). This button is used for testing purposes only.

LCL (Local). This button turns on the typewriter for local or offline use.

BUZ-RLS (Buzzer-Release). This button turns off the buzzer that warns of low paper supply. The light in the BUZ-RLS button remains on until the paper has been replenished.

BRK-RLS (Break-Release). This button unlocks the keyboard after program execution has been interrupted by the BREAK key.

REST. This light is not used by CP-67.

NORMAL-RESTORE. This knob is set to NORMAL, except to change the ribbon, in which case the knob is twisted to the OUT-OF-SERV light. The knob is then set to RESTORE and returned to NORMAL when the operation has been completed.

OUT-OF-SERV (Out of Service). This light goes on when the NORMAL-RESTORE knob is pointed to it for ribbon changing.

Most teletype units have a loudspeaker and a volume control knob (VOL) located under the keyboard shelf. The knob is turned clockwise to increase the volume.

TYPE 35 TELETYPE Characteristics

The KSR (Keyboard Send/Receive) model of the Teletype Type 35 terminal is supported by CP-67. The Type 35 KSR, like the Type 33 KSR, includes a typewriter keyboard, a control panel, a data-phone, control circuitry, as well as roll paper. The Type 35 has basically the same features as the Type 33. The additional features of a Type 35 are the following:

LOC-LF (Local/Line Feed). This button operates as the LINE FEED button without generating a line-feed character. It is used along with the LOC-CR.

LOC-CR (Local/Carriage Return). This button returns the carrier as RETURN does without generating an end-of-line character. LOC-CR is normally used only to continue a line of input to the next line.

LOC-BSP (Local/Backspace). This button generates a character but it has no meaning with the KSR model.

BREAK. This button generates an attention interrupt and interrupts program execution. After execution has been interrupted, BRK-RLS, and then the K button, must be depressed to unlock the keyboard.

K (Keyboard). This button unlocks the keyboard and sets the terminal for page copy only.


Sv: IBM 1142 reader/punch (Re: First video terminal?)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Sv: IBM 1142 reader/punch (Re: First video terminal?)
Newsgroups: alt.folklore.computers
Date: Sat, 06 Jan 2001 17:22:10 GMT
"Nico de Jong" writes:
In my memory, the 2540 was a multifunction card machine, with 5 output drawers: about 1000 cpm reading, 200 punching. Drawers 1-2-3 could be used for the punched cards, 3-4-5 for the read cards. Yes, I _DID_ see once that two jobs tried to use drawer 3 at the same time, one for punched cards and one for read cards. Gee, what a mess it was! After that, all punching was banned when _any_ other job was run.

i had to do a job for student registration where all input/registration cards were standard manila stock and cards in the punch tray were red-striped on the top edge. all input cards were read into the middle hopper and would go thru some validation; cards with problems got flagged with a blank red-striped card punched behind them.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

IBM 1142 reader/punch (Re: First video terminal?)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 1142 reader/punch (Re: First video terminal?)
Newsgroups: alt.folklore.computers
Date: Sat, 06 Jan 2001 18:28:53 GMT
jcmorris@jmorris-pc.MITRE.ORG (Joe Morris) writes:
The 2841 was the original DASD control unit: 2303 drum, 2311 removable disk (all 7 MB of it!), and everybody's favorite source of war stories, the 2321 Data Cell. (I started to put the 2301 drum in this list, but now I'm thinking that it was attached through a 2820 control unit. Can anyone confirm or deny that?)

the controller for the 2301 was a 2820. the 2301 was effectively a 2303 that read/wrote 4 tracks in parallel, with four times the 2303 data transfer rate (1.2mbytes/sec, and probably four times what the 2841 could handle). It needed a special dedicated high-speed controller and I believe had a cable length restriction of more like 80' rather than 200' (more typical of standard bus&tag configurations).

from Control Program-67/Cambridge Monitor System Installation Guide, GH20-0857-0 (Oct. 1970), pg. 23, sample real I/O source deck


RIOS    RIALIO  TITLE="SAMPLE SYSPLEX"
         SYSRES SYSRES=230,SYSTYPE=2314,SYSVOL=CPDISK1,SYSERR=004,      x
SYSDNC=198,SYSWRM=202
         SYSGEN SYSOPER=OPERATOR,SYSDUMP=CPSYS,SYSERMG=020,             x
SYSCNSL=009,SYSPRT=030,SYSPUN=032,SYSCORE=768K
OPCONSOL DMXDV  RDEVPNT=PRINTER1,RDEVADD=009,TYPE=1052
PRINTER1 DMXDV  RDEVPNT=CARDRDR1,RDEVADD=030,TYPE=1403
CARDRDR1 DMXDV  RDEVPNT=PUNCH1,RDEVADD=031,TYPE=2540RDR
PUNCH1   DMXDV  RDEVPNT=TERM01,RDEVADD=032,TYPE=2540PCH
TERM01   DMXDV  RDEVPNT=TERM02,RDEVADD=020,TYPE=2702T,SAD=1
TERM02   DMXDV  RDEVPNT=TERM03,RDEVADD=021,TYPE=2702T,SAD=2
TERM03   DMXDV  RDEVPNT=TERM04,RDEVADD=022,TYPE=2702T,SAD=2
....
TERM4E   DMXDV  RDEVADD=04E,TYPE=2703T,SAD=4
DRUM2    DRDEV  RDEVCU=R2820A,RDEVADD=100,TYPE=2301,DECUPTH=80
R2820A   DRCU   DEVLIST=DRUM2,RCUADD=0,CUTAIL1=CHAN1,RCUPATH=80
CHAN1    DRCH   RCULIST=R2820A,CHANADD=1,CHANPNT=CHAN2
....

as an aside, SAD command on the 2702/2703 bound the line-scanner to a particular line. Standard installation had 2741 line-scanner at SAD1, TTY line-scanner at SAD2, and 1052 line-scanner at SAD3.

As an undergraduate, I added the TTY support to CP/67 and, after looking thru all the manuals, believed that I could do dynamic terminal type recognition and not require a preconfigured SAD command for specific lines. Didn't make a lot of difference for directly wired terminals, but for dial-up ... it allowed a single phone number rotary pool for all terminals (i.e. all terminals could call the same number).

I tested it all out and it actually worked, with TTY terminals working on 2741 dial-up "lines". Then the IBM CE got around to telling me that for the 2702 there was a hardware short-cut: while the type of line-scanner could be dynamically set with the SAD command, specific line frequency oscillators were hardwired to specific lines (i.e. 110 baud terminals weren't spec'ed for running on 134.5 baud lines).

That led to the university project where four of us built a 360 channel connection board & the first 360 plug-compatible controller (initially out of an Interdata/3) to replace the 2702. The Interdata/3 directly implemented the line-scanner function in software ... including initially strobing the signal raise/lower at a very high rate to determine the edges and dynamically calculate the baud rate.
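
The edge-strobing idea can be sketched in a few lines. This is my reconstruction of the principle, not the Interdata/3 code: sample the line well above any expected baud rate, find the shortest interval between signal transitions (one bit cell), and take its reciprocal as the baud rate.

```python
def estimate_baud(samples, sample_rate):
    # samples: sequence of 0/1 line levels taken at sample_rate Hz.
    # find every transition (edge) in the sampled signal
    edges = [i for i in range(1, len(samples)) if samples[i] != samples[i - 1]]
    # the shortest run between edges approximates one bit cell
    shortest = min(b - a for a, b in zip(edges, edges[1:]))
    return sample_rate / shortest

# simulate a 110-baud line sampled at 11000 Hz: one bit cell = 100 samples
line = [0] * 100 + [1] * 100 + [0] * 200 + [1] * 100
print(round(estimate_baud(line, 11000)))   # recovers ~110 baud
```

Once the bit-cell width is known, the same software line-scanner can shift in start bit, data bits, and stop bit at the measured rate, which is what made one controller able to serve terminals of different speeds on the same line.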

random refs:
http://www.garlic.com/~lynn/96.html#30
http://www.garlic.com/~lynn/99.html#12
http://www.garlic.com/~lynn/2000c.html#36
http://www.garlic.com/~lynn/2000c.html#37
http://www.garlic.com/~lynn/2000g.html#42
http://www.garlic.com/~lynn/2001.html#3

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Disk caching and file systems. Disk history...people forget

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Disk caching and file systems.  Disk history...people forget
Newsgroups: comp.arch.storage
Date: Sat, 06 Jan 2001 19:58:53 GMT
"Jean Dion" writes:
Most Unix and NT systems use server internal memory for disk caching today. This is one of the reasons why file sharing among several servers is so hard to do. Also, some file system information is kept in memory to increase speed on disk access. All these mechanics work well in a closed environment but not with several servers that want to access the same file for data sharing. One of the oldest IBM mainframes was capable of data sharing without any very special twist (more than 20 years ago). All file I/O request access is kept in the disk subsystem itself.

Some forgotten history on mainframe here...

Another thing that is missing on open-systems is Dynamic Path Reconnect (DPR). This method was developed by IBM on the mainframe in 1980 and implemented in MVS/XA in 1982. It allows a multi-path disk subsystem to return an I/O by a different path than the one that originally requested it. So for example if one I/O is done via path A it can return the response by path B or vice-versa. This method can significantly reduce channel (scsi or FC) busy. Another advantage was easy error recovery in case of channel error (no downtime caused by a bad cable or path). This is implemented by IBM, HDS, StorageTek, Amdahl and EMC, who have had this technology working on the mainframe since 1982.

Full Track Buffer (on read) was developed and implemented by StorageTek in 1989 to reduce RPS misses caused by busy channels. Most other disk vendors adopted this method.

The first disk cache was also developed by StorageTek, back in 1983. It was called the 8890 cybercache ... hmm, could be a popular name today! It was the first disk controller to have disk cache on the mainframe. All cache was for read only and totally hidden from the operating system, like on open-systems today.

Disk subsystem functionality on the mainframe is totally different than open-systems. Open-systems are not aware of disk caching when it's done by the disk subsystem itself. On the mainframe the operating system is fully aware of the presence of the cache. This gives a major advantage. It can be configured to store pieces of disk permanently in cache for read. Frequently read files could get faster reads by being present in cache. Even an entire disk could be kept in cache. Many other cache settings are available but I won't go further...

As you can see, disks on the mainframe are much better equipped to do I/O than open-systems. BUT they have slower access with ESCON. FICON will fix that this year ... (same speed as fibre channel, 100MB/sec). Rumor says 800 MB/sec is around the corner ... (2004)

As you can see, open-systems can learn a LOT from older operating systems...


the ironwood/3880-11 controller had a 4k-record disk cache sometime in '81 (for 3380 disk drives) ... from the tucson lab.

then they came out with the ?(memory failing, may have been victoria or something like that)/3880-13 controller with full-track disk cache a year or two later ... again from tucson.

DPR originally had some things that were path-specific with respect to reserve and errors. I did a revamp of the implementation that used much shorter pathlength & memory in the controller, which eliminated the problem and also made it much simpler to virtualize (in general KISS) ... at the time it wasn't adopted (early '83), and I have no idea if anything like it was subsequently put in place. In any case, for the original DPR some of the recovery scenarios were almost impossibly complex in multi-complex configurations (and needed to involve operator intervention in some cases).

One of ESCON's problems has been simulating bus&tag half-duplex synchronous operation, which runs into latency problems at more than very trivial distances (until you get to aggregation of transfer thruput, most of the fiber is significantly faster than the disk requirements, and the bottlenecks become command execution latency ... both processor & drive, as well as transmission and, in half-duplex, round-trip delays). It was one of the things in SSA, and over 10 years ago gave 9333s significantly higher thruput compared to standard SCSI (even given effectively the same drive mechanics). The 9333 was initially done on 80mbit/sec serial copper and used asynchronous packetized SCSI commands and data, so there was much less of an end-to-end latency issue.
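
The distance effect is simple arithmetic. A rough sketch with purely illustrative numbers (not measured ESCON figures): in half-duplex synchronous operation, each command waits out a full round trip before data moves, so effective throughput collapses with distance even though the raw link speed is unchanged.

```python
def effective_mbytes_sec(link_mbytes_sec, km, cmds_per_transfer,
                         kbytes_per_transfer):
    # light in fiber travels ~200,000 km/s, i.e. ~5 microseconds per km
    round_trip_s = 2 * km / 200_000.0
    transfer_s = kbytes_per_transfer / 1024.0 / link_mbytes_sec
    # half-duplex synchronous: each command stalls for one round trip
    total_s = cmds_per_transfer * round_trip_s + transfer_s
    return (kbytes_per_transfer / 1024.0) / total_s

# a 17MB/s-class link moving 4KB per command, near vs. 20km away
near = effective_mbytes_sec(17, km=0.1, cmds_per_transfer=1,
                            kbytes_per_transfer=4)
far = effective_mbytes_sec(17, km=20, cmds_per_transfer=1,
                           kbytes_per_transfer=4)
print(near, far)   # throughput drops sharply with distance
```

Asynchronous packetized commands (the 9333/SSA approach) keep multiple requests in flight, so the round-trip term stops multiplying per command.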

One of the announcements that goes along with FICON is better command queueing at the processor ... something we worked on over ten years ago (a couple of us even did an invention disclosure on it at the time).

random refs:
http://www.garlic.com/~lynn/95.html#13
http://www.garlic.com/~lynn/2000b.html#38
http://www.garlic.com/~lynn/2000d.html#13
http://www.garlic.com/~lynn/2001.html#12

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Disk caching and file systems. Disk history...people forget

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Disk caching and file systems.  Disk history...people forget
Newsgroups: comp.arch.storage
Date: Sat, 06 Jan 2001 20:01:38 GMT
"Rob Peglar" writes:
Jean is right. There is little new under the sun. However, CDC was doing multi-pathing, full-track disk reads, disk caching (in ECS, extended core storage) and other innovative I/O techniques on the Cyber line back in the early-to-mid '70s. The model of a peripheral processor, to manage channels and off-load the CPU, was conceived in the late '50s and implemented by Cray and Jim Thornton in the early '60s. COS. For some history, read Thornton's classic book, 'The Design of a Computer System', dated 1970.

as a random aside, thornton left (with a couple other engineers) and formed NSC & HYPERChannel product line.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Disk caching and file systems. Disk history...people forget

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Disk caching and file systems.  Disk history...people forget
Newsgroups: comp.arch.storage
Date: Sat, 06 Jan 2001 22:10:41 GMT
Anne & Lynn Wheeler writes:
as a random aside, thornton left (with a couple other engineers) and formed NSC & HYPERChannel product line.

most of the SANs i knew about in the '80s were implemented with network systems gear. I was going thru some old boxes recently and ran across a paper by Gary Christensen (one of the young? engineers that left with thornton), dated 1986 and titled Leading Edge Data Networks, An Overview. I believe Gary was also the person who initially coined the term "Information Utility" to describe the idea of extending information access to the WAN environment.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Disk caching and file systems. Disk history...people forget

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Disk caching and file systems.  Disk history...people forget
Newsgroups: comp.arch.storage
Date: Sun, 07 Jan 2001 03:25:09 GMT
"Rob Peglar" writes:
network that allowed disparate hosts to trade information. It can also be argued that the first SAN was implemented with HYPERchannel adapters and extenders, although that argument is somewhat weak on a definitional basis.

NCAR (which tried to market it as Mesa Archival) used an ibm 4341 as a storage controller which managed tapes, disks, and robots ... for SAN? storage access. disks used A515 remote device adapters ... and crays and other machines on the "network" could execute direct transfers from the disks to their machines (once the A515 had been programmed with the correct permissions).

LANL, LLNL, UCSD-SCC/GA, NASA/AMES, misc others had done similar things.

the hippi switch & IPI-3 disk controller standards meetings included requirements so that equipment could be configured to operate similarly to the NCAR configuration (i.e. 3rd party transfers, with permissions established by the "server" ... but allowing direct data transfer w/o having to flow thru the server).

gary c. was one of the prime backers behind the national storage lab effort.

random refs:
http://whatis.techtarget.com/WhatIs_Definition_Page/0,4152,212937,00.html
http://web.archive.org/web/20010331072346/http://whatis.techtarget.com/WhatIs_Definition_Page/0,4152,212937,00.html
http://www.storagesolutions.com/topic01.htm
http://www.nersc.gov/~jed/talks/net-tutorial/sld085.htm
http://www.llnl.gov/liv_comp/siof.html
http://web.archive.org/web/20010615025517/http://www.llnl.gov/liv_comp/siof.html
http://www.garlic.com/~lynn/2000f.html#30
http://www.garlic.com/~lynn/2000c.html#65
http://www.garlic.com/~lynn/2000b.html#29
http://www.garlic.com/~lynn/2001.html#4
http://www.garlic.com/~lynn/2000.html#90

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Disk caching and file systems. Disk history...people forget

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Disk caching and file systems.  Disk history...people forget
Newsgroups: comp.arch.storage
Date: Sun, 07 Jan 2001 06:50:08 GMT
Anne & Lynn Wheeler writes:
NCAR (which tried to market it as Mesa Archival) used an ibm 4341 as a storage controller which managed tapes, disks, and robots ... for SAN? storage access. disks used A515 remote device adapters ... and crays and other machines on the "network" could execute direct transfers from the disks to their machines (once the A515 had been programmed with the correct permissions).

HYPERChannel A510 & A515 remote device adapters were HYPERChannel network nodes that emulated an IBM bus&tag channel. Normal IBM device control units could be cabled to a A51x remote device adapter as if they were real IBM mainframe channels ... and then the connected devices accessed from machines connected to the HYPERChannel network.

I developed and helped deploy such a solution for the IBM IMS development and support teams in 80/81 (remoting numerous nominally "channel attached" devices supporting 300+ people at locations several miles away from the data center). One installation was about 300 people on the IMS development team at the STL lab. The other was another 300 or so IMS field support people at the boulder complex.

random ref:
http://www.garlic.com/~lynn/2000c.html#65

Most things worked fine except for IBM CKD disk controllers and drives, which had special timing/latency constraints for some specific operations. The A515 was an enhanced remote device adapter that provided some extra features to overcome the CKD disk timing/latency constraints. This allowed NCAR (and others) to directly attach IBM disk farms to HYPERChannel networks and make them available to the HYPERChannel-connected processors ... things like crays and/or other processors, even though they weren't IBM mainframes.

There were some little gotchas, like ibm field engineering being expected to provide service and support for such hardware when it was directly connected to ibm mainframe channels.

For the non-SAN implementations (i.e. unlike at NCAR) that just used HYPERChannel as a mainframe "channel extender" (simulating local ibm mainframe operation even though the actual processor and the devices were separated by long distances), I had a problem crop up over 10 years after I did the stuff for the STL/boulder/IMS groups. With HYPERChannel there could be various kinds of network transmission errors and/or congestion problems that wouldn't occur in a normal, locally connected mainframe configuration. At the mainframe, I would trap those conditions and reflect a simulated "channel check" to the mainframe operating system. The mainframe operating system would do various recording and diagnostic things and then retry the operation.

Ten years after I had done the implementation at STL, I was contacted by some IBM RAS monitoring guy. A new mainframe processor had been out for a year and they were reviewing the error reporting summaries for all customer installations for the year. They were expecting an aggregate total of 3-4 channel checks to have been recorded across all machines at customer shops (not per machine per month, or per machine per year, but across all machines for the year). However, there were something like 15-16 aggregate channel checks recorded. This had a lot of people concerned regarding the predicted RAS qualities of the new processors. They finally tracked it down to installations running the remote device HYPERChannel support. The solution was to change the HYPERChannel software to simulate an "interface control check" rather than a "channel check". It turned out that "interface control check" followed pretty much the same error retry flow as "channel check" handling ... so the operation would be retried the same way whether an "interface control check" or a "channel check" was reflected.
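the fix amounts to choosing which simulated status to reflect for network-level failures. a minimal sketch (all names hypothetical; the real logic lived in the mainframe channel-extender support): the OS retries either way, but only "channel check" counts against the processor's channel-check RAS statistics.

```python
# Hypothetical set of conditions a HYPERChannel-style channel extender
# might see that a locally attached channel never would.
NETWORK_ERRORS = {"transmission_error", "congestion_timeout"}

def reflect_status(event, use_ifcc=True):
    """Map a network-level event to a simulated channel status.

    Both simulated statuses drive the same OS retry path; reflecting
    "interface control check" instead of "channel check" just keeps the
    retries out of the processor channel-check error summaries.
    """
    if event in NETWORK_ERRORS:
        return "interface_control_check" if use_ifcc else "channel_check"
    return "normal_end"
```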

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

stupid user stories

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: stupid user stories
Newsgroups: alt.folklore.computers
Date: Sun, 07 Jan 2001 16:32:02 GMT
jmfbahciv writes:
I am amazed. I would have never thought of doing a catsup test. Do people do similar things with mail boxes?

a mail box, when a small plastic packet of catsup (or mustard or whatever) is inserted, doesn't try to automatically pull it in and flatten it out until it goes pop ... sometimes the point at which the packet goes pop is still outside the machine and sometimes it isn't. ATMs with standard envelope feeds (more like a mailbox) have fewer intruder measures ... but a number of things are done to the card insert part (since there are a number of mechanical and electronic parts that would be rendered inoperable).

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

stupid user stories

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: stupid user stories
Newsgroups: alt.folklore.computers
Date: Sun, 07 Jan 2001 17:05:05 GMT
ab528@FreeNet.Carleton.CA (Heinz W. Wiggeshoff) writes:
Actually, the prime source of protection is the video camera. A few years ago, some loser stuffed a lit cigarette into an ATM, and the resulting fire destroyed much of the money and the machine. The video convicted him. I guess they need firewalls too.

the original work predated having video cameras everywhere. adding a simple check to see if the machine was actually dealing with a card eliminated some amount of maintenance ... with or w/o cameras

having video cameras is an inhibitor for a large number of attacks, i believe justified as much for attacks on people using the machines as attacks on the machines themselves.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

stupid user stories

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: stupid user stories
Newsgroups: alt.folklore.computers
Date: Sun, 07 Jan 2001 17:12:14 GMT
Anne & Lynn Wheeler writes:
having video cameras is an inhibitor for a large number of attacks, i believe justified as much for attacks on people using the machines as attacks on the machines themselves.

aka ... litigation liability and child-proofing the environment. at what distance from an ATM does an attack on a person have to occur before the owner of the ATM is not liable?

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Disk caching and file systems. Disk history...people forget

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Disk caching and file systems.  Disk history...people forget
Newsgroups: comp.arch.storage,alt.folklore.computers
Date: Sun, 07 Jan 2001 20:59:57 GMT
"Jean Dion" writes:
Most Unix and NT systems use server internal memory for disk caching today. This is one of the reasons why file sharing among several servers is so hard to do. Also some file system information is kept in memory to increase the speed of disk access. All these mechanics work well in a closed environment but not with several servers that want to access the same file for data sharing. One of the oldest IBM mainframes was capable of data sharing without any very special twist (more than 20 years ago). All file I/O request access is kept in the disk subsystem itself.

IBM disks have had twin-tailed disk controllers since the system/360, with different processors having access to the same disks. In the early 70s came the system/370 and the 3830 disk controller, which supported four channel paths; the disk drives had something called "string-switch" which allowed drives to be connected to two different controllers ... so a single disk drive could be accessed concurrently by up to eight different processors.

One of the major uses of this was the airline reservation system, which would have the full-blown configuration. By the mid-70s, the airline res system also got an enhancement to the 3830 supporting fine-grain locking. Prior to that, serialization was done with a "reserve" command at the drive level. With the 3830 fine-grain locking support, access to sub-areas could be serialized.

About the same time, some people at Uithoorn (an ibm location running a clone of the branch office HONE system for europe) developed an I/O protocol for IBM drives analogous to the processor compare&swap instruction (without requiring the fine-grain locking enhancement in the 3830 ... although it required extra disk revolutions) for processor serialization.
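the compare&swap-style idea can be sketched in miniature (all names hypothetical): read the record, compute the update in the host, then write only if a re-read shows the record unchanged, retrying (at the cost of extra revolutions) otherwise. this toy version does the verify and the write as two separate steps, so unlike a channel-program version (where a search on the record contents and the write run within one channel program) it still has a small race window:

```python
class Disk:
    """Toy stand-in for a shared DASD: records addressed by key."""
    def __init__(self):
        self.records = {}

    def read(self, key):
        return self.records.get(key)

    def write(self, key, value):
        self.records[key] = value

def cas_update(disk, key, transform, max_tries=10):
    """Update a record only if nobody changed it since we read it."""
    for _ in range(max_tries):
        old = disk.read(key)       # one revolution: fetch the record
        new = transform(old)       # compute the update in the host
        if disk.read(key) == old:  # another revolution: verify unchanged
            disk.write(key, new)   # and write the new contents
            return new
        # somebody else updated the record; spin another revolution
    raise RuntimeError("record kept changing; giving up")
```

with a single host the update always succeeds on the first try; with several hosts banging on the same record, the losers pay extra revolutions and retry.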

The US HONE used that and other functions in 1978 to build what was the largest single-system image cluster system in the world at the time. It provided online service for all the branch & field service people in the US ... the system possibly had between 30,000 and 40,000 user account definitions. Because of concerns regarding things like natural disasters, by 1980 or so the california location was replicated in both dallas and boulder ... with load-sharing & fail-over capability extended from intra-cluster to across the three locations.
http://www.garlic.com/~lynn/2000g.html#27

The basic filesystem paradigm dates from the origin of the system/360 in the early '60s. At that time there was huge excess I/O capacity and severely constrained real memory. Some of the trade-offs made to meet that design point (little or no caching) facilitated filesystem consistency and ease of concurrent filesystem cluster implementation. However, by at least the mid-70s some of the trade-offs had reversed, but the pieces of the paradigm were very well entrenched (the optimal trade-offs in some of the implementations could have changed while still maintaining filesystem consistency and ease of cluster implementation).
http://www.garlic.com/~lynn/94.html#35
http://www.garlic.com/~lynn/2000f.html#18
http://www.garlic.com/~lynn/2000g.html#51
http://www.garlic.com/~lynn/2001.html#12

However, because of the lack of real memory for things like caching, system thruput was a lot more sensitive to data ordering on disk. During the late '60s, by carefully building the system with nearly hand-ordered placement of standard system files and data, I could show a 2.5 to 3 times thruput speedup for various jobs (compared to an off-the-shelf system build).

see misc. footnotes regarding process used for OS/360 MFT11 and MFT14 in the following
http://www.garlic.com/~lynn/94.html#18

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

VM/SP sites that allow free access?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: VM/SP sites that allow free access?
Newsgroups: comp.lang.rexx
Date: Sun, 07 Jan 2001 21:13:37 GMT
richgr@panix.com (Rich Greenberg) writes:
And in any case, VM/SP is quite obselete. VM/ESA is current but about to be replaced (on newer processors) with zVM.

as a total aside, there was a VM development project that started in the early 80s called ZM. At least one of the originally stated objectives was a rewrite in a portable language to allow portability and better modularization. It would have made it much easier to do the FLEX/OS, hercules type thing &/or migrate selected components to non-370 processors (analogous to the native engines of some of the low-end 370s). Unfortunately, a lot of people took the opportunity to throw in their top ten most-favorite wish-list features ... which eventually allowed the project to achieve critical mass and collapse into a black hole.

random ref:
http://www.garlic.com/~lynn/94.html#22
http://www.garlic.com/~lynn/99.html#222
http://www.garlic.com/~lynn/2000g.html#40

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Competitors to SABRE?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Competitors to SABRE?
Newsgroups: alt.folklore.computers
Date: Mon, 08 Jan 2001 16:39:27 GMT
Charles Richmond writes:
And probably a few weeks after that, the 50th anniversary of the first disk crash...which is probably why there is little commemoration of the invention of the disk drive. Hey, but those guys in the past could only lose one megabyte or so when the thing crashed...we advanced types can lose upwards of 60 gig in a crash!!! Obviously we of modern times are quite superior!!!

but has the aggregate value of the crashes increased or decreased?

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Review of Steve McConnell's AFTER THE GOLD RUSH

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Review of Steve McConnell's AFTER THE GOLD RUSH
Newsgroups: comp.software-eng,comp.programming,alt.folklore.computers
Date: Mon, 08 Jan 2001 17:50:58 GMT
spinoza9999 writes:
... lots of snip Precisely because the corporation defines transcendental signifiers like "system", the winners in the performance review game become those whose good works are brutally shrewd political moves.

there is the joke about the individuals whose full-time occupation is career management ... and whose work product tends to be the production of refined organizational charts (as opposed to the individuals that are ROJ, retired on the job). this goes along with the observation that, after a certain level ... heads tend to roll uphill (i.e. people running failed projects get promoted).

during the early '80s, i was indoctrinated by an individual that studied and dealt with large organizational dynamics.

One of his observations was that many of the executives and upper management in US corporations had been US military officers during world war two. at issue was that the US, going into the war, lacked a large trained professional army ... they were turning out 90 day wonders (and in some cases large groups of cannon fodder). In order to deal with the lack of experience, the military created a very rigid & top heavy management structure with decisions made as high as possible and lower echelons expected to follow tactical directions to the letter with no latitude for deviation.

He contrasted the german military that was 3% officers and a large professional army with the US that had 15% or so officers. The german military (as an organization) tended to expect that the person on the scene would be making tactical decisions (contrasted with the US military in WWII that dictated little or no tactical latitude).

He predicted that US corporations were going to take some time to recover from the ingrained executive management style of these former WWII military officers.

He periodically quoted Guderian's directive going into the Blitzkrieg: verbal orders only. The idea was that his men would not have to worry about a post-blitzkrieg audit ... looking to place blame because somebody made a local decision that might not be 100% perfect in hindsight. The overall strategic plan was laid out and the person on the spot was expected to actively deal with tactical issues.

A possible corollary is that the only way to never make a mistake is to never do anything.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Review of Steve McConnell's AFTER THE GOLD RUSH

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Review of Steve McConnell's AFTER THE GOLD RUSH
Newsgroups: comp.software-eng,comp.programming,alt.folklore.computers
Date: Tue, 09 Jan 2001 15:24:46 GMT
spinoza9999 writes:
However, the German army was far more willing to take casualties than the American army, and with less industrial resources, the situation by the middle of the war was miserable for the common German soldier and his immediate officers. Their leaders (with the exception of Hitler himself, who was in loony-land and contemptuous of the top officers because of his experiences as a corporal in WWI) treated the men with recognition and respect but could not change the impossible strategic situation. Rommell and Guderian respected their men, but had to ask them to do the impossible.

one of the other points was that the american military tended to win by management of far superior resources ... equipment and men (logistics). one of the points made was the sherman vis-a-vis the tiger ... the tiger had a 5:1 kill rate over the sherman. The US answer was to build ten times as many shermans. the minor downside was that while the shermans themselves didn't have much feeling about being killed 5 times as often as tigers, some of the people in the shermans might have feelings about it (i believe the term cannon fodder came up).

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Review of Steve McConnell's AFTER THE GOLD RUSH

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Review of Steve McConnell's AFTER THE GOLD RUSH
Newsgroups: comp.software-eng,comp.programming,alt.folklore.computers
Date: Tue, 09 Jan 2001 16:53:02 GMT
& a web pointer


http://www.belisarius.com/
http://web.archive.org/web/20010722050327/http://www.belisarius.com/

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Competitors to SABRE?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Competitors to SABRE?
Newsgroups: alt.folklore.computers
Date: Tue, 09 Jan 2001 22:22:52 GMT
don@news.daedalus.co.nz (Don Stokes) writes:
Old drives used to crash at the slightest provocation (and often without any provocation at all, such as the DEC RA81 two year mean time to death) whereas modern drives can take quite a lot of abuse before the heads plow into the oxide.

I haven't had a disk go south around here for years. That's not at all to say it doesn't happen, but it doesn't happen anything like as often as it used to.


i don't know the exact figures, but I believe MTBF has increased by a factor of 10 or more (this is even comparing current run-of-the-mill commodity PC drives against old top-of-the-line mainframe drives).

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Where do the filesystem and RAID system belong?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where do the filesystem and RAID system belong?
Newsgroups: comp.arch
Date: Tue, 09 Jan 2001 23:28:02 GMT
Greg Pfister writes:
Actually, while a backup probably won't save your person from a thermonuclear weapon dropped in your town, a backup at a remote site will save your data. And regular backups, maintained over time, will allow you to go back to a non-infected system. People who are serious about backups do maintain multiple generations at remote sites. Sometimes even hardened sites. Web-based backup is always (probably) remote, and could even be hardened.

Backups are, in fact, a significant feature of disaster recovery techniques -- and disaster recovery is not the same as high availability. RAID is a HA technique, not a DR technique.


when we were doing cluster & HA stuff in the '80s, I coined the term disaster survivability to describe geographic operation (in contrast to simple disaster recovery).

in the 80s, there was a study done by some companies with significant (absolute) dependencies on their data processing operations. A major data processing disaster that put the facility out of commission for a week would bankrupt the company. The cost of a duplicate data processing facility would be less than one week's business (in the case of one company, 24hrs of profit covered the lease of a 50-story building for a year plus all the salaries of the people working in the building, plus the data processing center ... and the business was totally dependent on its data processing center).

The geographic distance that the study quoted (for geographic survivability) was somewhere between 40 and 45 miles (significant "problems" were very rarely found to span more than a 40 mile diameter). You did have to pay attention to telco centers, power feeds & the like.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Competitors to SABRE?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Competitors to SABRE?
Newsgroups: alt.folklore.computers
Date: Tue, 09 Jan 2001 23:46:24 GMT
Eric Sosman writes:
For what it's worth, Brian L. Wong writes in "Configuration and Capacity Planning for Solaris Servers" that disk MTBF in the mid-1980s was about 75K hours, and had risen to about 800K hours by 1997. I have no idea where he got the data.

i have a recollection of 30k-40k hrs from earlier periods ... and the 800k hours sounds in the ballpark. it is possibly one of the reasons that there are mainframe controllers mapping mainframe I/O to hardware parts that are nearly the same as those used in more commodity products. In that respect ... the benefit of RAID/mirroring isn't what it used to be (big difference between no-single-point-of-failure for 40k hr MTBF parts and 800k hr MTBF parts ... but the cost of a RAID mirror is also significantly less).

As an aside, with a year at 365x24=8760hrs, an 800K hr MTBF is over 90 years. The number now is less of a factor for any particular user/drive and more of a factor in calculating the percentage of drives that will fail under warranty and have to be replaced.
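a quick back-of-the-envelope on both numbers, assuming a constant failure rate (exponential lifetime model) and a hypothetical 3-year warranty:

```python
import math

HOURS_PER_YEAR = 365 * 24   # 8760
MTBF_HOURS = 800_000        # quoted 1997 figure

# MTBF expressed in years of continuous operation
years_mtbf = MTBF_HOURS / HOURS_PER_YEAR
print(f"MTBF in years: {years_mtbf:.1f}")         # 91.3

# fraction of a large drive population expected to fail inside the
# warranty window, under the constant-failure-rate assumption
warranty_years = 3
frac = 1 - math.exp(-warranty_years * HOURS_PER_YEAR / MTBF_HOURS)
print(f"expected warranty failures: {frac:.1%}")  # 3.2%
```

so even at an MTBF of ~91 years per drive, a vendor shipping a million drives would still expect tens of thousands of warranty replacements.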

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Where do the filesystem and RAID system belong?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where do the filesystem and RAID system belong?
Newsgroups: comp.arch
Date: Tue, 09 Jan 2001 23:56:01 GMT
lindahl@pbm.com (Greg Lindahl) writes:
"Aaron R. Kulkis" writes:

Don't use car batteries...get a "Deep Cycle" Marine battery.

Yes. For the industrial UPS I mention, we replaced all the batteries with deep cycle marine batteries, which are extremely dense and cheap compared to the ones we had before. I had forgotten that they were also 12 volts...


I have a friend who had the battery in his brand new car fail after two weeks. after the third replacement he found an online newsgroup that said it was a common problem for that manufacturer, with a recommendation to replace it with a marine battery.


http://smithae.com/optima.html
http://web.archive.org/web/20021010110155/http://www.smithae.com/optima.html
http://www.optimabatteries.com/main.htm

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Where do the filesystem and RAID system belong?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where do the filesystem and RAID system belong?
Newsgroups: comp.arch
Date: Wed, 10 Jan 2001 14:53:39 GMT
Paul Repacholi writes:
And terrain. I can't imagine having your recovery centre 40 miles further down ol' man river would help a lot. 40 miles east or west would be much better. As long as the telco centre was not in the mud of course.

What do people in CA do?


I think there was a study of the possibility of the mt rainier "mud flow" path (which would be much worse than mt st helens; there are towns in the predicted path that practice evacuation) ... you wouldn't want both replicated data centers in the path of that mud.

random ref to a large clustered data center in CA that was then replicated in Dallas and Boulder because of concerns regarding things like natural disasters in CA
http://www.garlic.com/~lynn/2001.html
http://www.garlic.com/~lynn/2000g.html#14
http://www.garlic.com/~lynn/2000g.html#27

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Competitors to SABRE?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Competitors to SABRE?
Newsgroups: alt.folklore.computers
Date: Wed, 10 Jan 2001 17:53:14 GMT
Eric Sosman writes:
Wong's point was that RAID is more important than formerly. He argues that the tenfold increase in MTBF went along with an eighteen-fold increase in capacity (from around 500MB to 9GB). Hence, although disk failures became less frequent, the consequences of a single failure became more severe, and the severity increased faster than the reliability.

He makes an even stronger case for RAID as a performance booster, pointing out that disk speed increased by only a factor of three or so during the same decade.

-- Eric.Sosman@east.sun.com


however, has the value of the aggregate working data on such a disk increased? or is there just a lot of lower value &/or replicated/cached data.

i started noticing the disk problem in the late 70s. following are some comparisons of the issues from the early 80s (predating some of the common RAID studies) and leading to striping, arm optimization, load balancing, etc.
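Wong's severity argument quoted above can be put in rough numbers: expected data lost per drive per year scales as capacity divided by MTBF, using the figures from this thread (75K hrs / 500MB in the mid-80s vs 800K hrs / 9GB in 1997):

```python
HOURS_PER_YEAR = 365 * 24

old_mtbf, old_cap_mb = 75_000, 500       # mid-1980s figures from the thread
new_mtbf, new_cap_mb = 800_000, 9_000    # 1997 figures from the thread

# expected MB lost per drive per year = capacity * (failures per year)
old_loss = old_cap_mb / old_mtbf * HOURS_PER_YEAR
new_loss = new_cap_mb / new_mtbf * HOURS_PER_YEAR

print(f"severity ratio: {new_loss / old_loss:.2f}x")  # 1.69x
```

i.e. by this measure the expected data lost per drive-year went up about 70% even though each individual drive became roughly ten times more reliable.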

random refs:
http://www.garlic.com/~lynn/95.html#8
http://www.garlic.com/~lynn/95.html#9

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Competitors to SABRE?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Competitors to SABRE?
Newsgroups: alt.folklore.computers
Date: Thu, 11 Jan 2001 00:13:20 GMT
Eric Sosman writes:
Wong's point was that RAID is more important than formerly. He argues that the tenfold increase in MTBF went along with an eighteen-fold increase in capacity (from around 500MB to 9GB). Hence, although disk failures became less frequent, the consequences of a single failure became more severe, and the severity increased faster than the reliability.

He makes an even stronger case for RAID as a performance booster, pointing out that disk speed increased by only a factor of three or so during the same decade.


given near zero cost ... RAID may be a brain-dead, "of course" solution given any cost/benefit analysis.

however, at an 800k hr MTBF for disk drives ... disk drive hardware failures (and in fact those of many other hardware components) are starting to become only a minor component of availability statistics for a service.

following is a minor reference to a large financial server that has had 100% availability for over six years
http://www.garlic.com/~lynn/99.html#71

i.e. what percent improvement in system availability does RAID provide ... or is it now such a trivial cost that it is hardly worth trying to analyse the cost/benefit ratio.

from a performance standpoint ... in the early '80s we actually worked on a controller microcode enhancement that would limit the data capacity that could be placed under any specific arm ... as an extra-charge performance feature (i.e. for shops that didn't have the discipline to do it as policy; i.e. sell a 3380 as only being able to hold 300mbytes as opposed to the full 630mbytes, and charge extra for it).

in the late '80s and early '90s i actually did some work on "qualifying" raid drives with regard to availability. there were a surprising number that had to be "fixed" because some really anomalous failure mode was overlooked (i.e. given all possible states that the RAID could be in, was it immune in every one of them to a power failure happening to occur at that particular moment).

some of this I somewhat kicked off in the disk behavior thread running in comp.arch & comp.arch.storage with the following
http://www.garlic.com/~lynn/2000g.html#43
http://www.garlic.com/~lynn/2000g.html#44
http://www.garlic.com/~lynn/2000g.html#47
http://www.garlic.com/~lynn/2001.html#6

a problem that I actually worked on a couple years ago, and that cost quite a bit of money, was a clustered configuration with (operating system provided, device level) software mirroring using independent controllers and a series of independent disks. It was a large production database on multiple drives. At one point there was a problem in one of the controllers which had to be serviced by the manufacturer, which replaced the controller. Part of the administrative process wasn't completed and the serial id for the new controller wasn't entered into the operating system's registry. As a result, when the operating system booted, it didn't recognize the controller and issued a cryptic, undecipherable message. Since this was operating system level, device level mirroring, it was masking all errors (like being unable to write to the mirrored copy). Sometime later one of the drives connected to the "good" controller had a problem ... which also happened to be the drive containing the database data dictionary & logs. Because this was a mirrored configuration, the operating system continued to do error masking on the assumption that data was still being written to the mirror ... which it wasn't, because the mirrored controller wasn't in the registry. It took the DBMS another couple hrs to actually fail ... and by that time the disks were in a terribly inconsistent state.
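the error-masking mechanism in the story above can be sketched in a few lines (hypothetical python, not the actual operating system code): a mirrored write reports success as long as any one side succeeds, so a broken or unregistered mirror produces no error the application ever sees.

```python
# illustrative sketch: device-level mirroring that masks per-side errors.
class Disk:
    def __init__(self, broken=False):
        self.broken = broken
        self.blocks = {}

    def write(self, block, data):
        if self.broken:
            raise IOError("controller not in registry")
        self.blocks[block] = data

class MirroredVolume:
    def __init__(self, primary, secondary):
        self.sides = [primary, secondary]

    def write(self, block, data):
        ok = 0
        for side in self.sides:
            try:
                side.write(block, data)
                ok += 1
            except IOError:
                pass  # error masked: the caller never learns one side failed
        if ok == 0:
            raise IOError("both sides failed")
        return ok  # number of sides actually written

vol = MirroredVolume(Disk(), Disk(broken=True))
assert vol.write(0, b"data") == 1   # caller sees success; mirror is stale
```

note that the caller gets exactly the same result whether both sides or only one side was written — which is the point of the story: the masking that makes mirroring transparent also hides a silently missing mirror.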

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Life as a programmer--1960, 1965?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Life as a programmer--1960, 1965?
Newsgroups: alt.folklore.computers
Date: Thu, 11 Jan 2001 16:30:20 GMT
"Mike" writes:
I fondly remember doing "self-modifying code" in assembler. You would dynamically change an operand or address within the program while it was executing. I know -- yuck! But it was still considered a somewhat acceptable practice in 60's, and I even remember the technique being taught in classes. Of course, we didn't have to worry much about reentrant code back in those days. And nothing is more fun than debugging a program that changes while it runs <g>.

it isn't just reentrant code ... just having to support self-modifying code can slow machines down by 30% to sometimes a couple hundred percent. one of the things that risc architectures typically have is a separate i & d cache ... where not only does instruction decode not have to worry about subsequent modification to the instruction stream, but the instruction cache doesn't even worry about memory consistency, i.e. a store operation to an instruction address location doesn't check to see if there is a line for that address in the i-cache. For program loaders on these machines, the hardware tends to have some form of i-cache flush/delete which results in resynching the contents of the cache with storage.

the higher performance 370 machines had quite a time constantly checking the instruction decode pipeline to see if there was a modification done to some instruction (already in the pipeline) by an operation further along the pipeline. this tends to still exact a performance penalty, either in terms of slow-down or lots of extra circuits to constantly check for the possibility.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Disk drive behavior

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Disk drive behavior
Newsgroups: comp.arch
Date: Thu, 11 Jan 2001 16:41:06 GMT
"Bill Todd" writes:
Kind of reminiscent of shadow-paging, IIRC. Certainly an improvement over ext2fs's fast-and-loose behavior, but on first glance at least it seems that 'sync' operations might be fairly expensive (in their need to flush out all changed data in the file system rather than just that associated with the relevant file before writing a new metaroot - unless one builds a special additional tree just to service the sync).

this characteristic carries over into clustered/ha database operations, where in single cache mode they can support a fast commit operation (i.e. an update is committed once written to the log, even though the actual record(s) has not been copied from cache back to disk). however, in a clustered operation with multiple concurrent accesses to the same data, frequently before the record(s) is allowed to migrate to a different cache (in a different cluster node) it must first be flushed to disk.

However, allowing the record(s) to migrate as addenda to the distributed locking messages tends to introduce a huge amount of complexity because of the log merge problem during recovery (considering all possible failure modes). Fine-grain timer synchronization across all nodes in the cluster helps ... or some form of monotonically increasing virtual timer.
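a minimal sketch of the fast-commit / force-before-migrate behavior described above (illustrative python; the names and structures are invented, not any particular dbms):

```python
# fast commit: an update is durable once its log record is written, while
# the data page stays dirty in cache; before a page may migrate to another
# node's cache, it is first forced to disk.
class Node:
    def __init__(self, name, disk, log):
        self.name, self.disk, self.log = name, disk, log
        self.cache = {}  # page -> value (possibly dirty)

    def update(self, page, value):
        self.cache[page] = value
        self.log.append((self.name, page, value))  # commit via log only

    def migrate(self, page, other):
        # force the dirty page to disk before it moves to the other cache
        if page in self.cache:
            self.disk[page] = self.cache.pop(page)
        other.cache[page] = self.disk[page]

disk, log = {}, []
a, b = Node("a", disk, log), Node("b", disk, log)
a.update("p1", 42)
assert "p1" not in disk          # committed in the log, not yet on disk
a.migrate("p1", b)
assert disk["p1"] == 42 and b.cache["p1"] == 42
```

the flush on migrate is what keeps each node's recovery log self-contained; shipping dirty pages inside lock messages instead is what creates the log-merge problem mentioned above.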

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Where do the filesystem and RAID system belong?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where do the filesystem and RAID system belong?
Newsgroups: comp.arch
Date: Thu, 11 Jan 2001 17:07:00 GMT
"Aaron R. Kulkis" writes:
As long as it's not an earthquake or a tsunami.

some number of sites are ruled out just because they are vulnerable ... independent of whether they share the same vulnerabilities with a redundant site. i believe the study was, in part, to specify the minimum distance that vendors needed to support for disaster survivability, redundant configurations.

i thought i remembered seeing some reference that half of the federal disaster flood insurance each year goes to the state of mississippi(?). it was in conjunction with an observation that each year after the flood (& after previously having rebuilt on the flood plain) ... the individuals claim that they had no idea that there would be a flood (and this has been going on for as many years as there has been federal disaster flood insurance).

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Where do the filesystem and RAID system belong?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where do the filesystem and RAID system belong?
Newsgroups: comp.arch
Date: Thu, 11 Jan 2001 17:22:00 GMT
Anne & Lynn Wheeler writes:
i thought i remembered seeing some reference that half of the federal disaster flood insurance each year goes to the state of mississippi(?). it was in conjunction with an observation that each year after the flood (& after previously having rebuilt on the flood plain) ... the individuals claim that they had no idea that there would be a flood (and this has been going on for as many years as there has been federal disaster flood insurance).

can you imagine somebody rebuilding a hardened data center on a flood plain that gets flooded out every year? Typically, siting a single instance of a hardened data center would first rule out lots of environmental risk factors and then look for a site that is served by two different power grids/substations, two different water mains and 2-4 different telco central offices ... with the feeds for these services all entering the building from different sides/directions. A replicated instance would then be checked to determine that it shared no common dependencies & failure modes.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Life as a programmer--1960, 1965?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Life as a programmer--1960, 1965?
Newsgroups: alt.folklore.computers
Date: Thu, 11 Jan 2001 18:18:12 GMT
Eric Chomko writes:
FORTRAN was out and BASIC was invented in 1966. Batch was still the norm until the 70s.

I think that a lot of programmers view batch & interactive as an either/or situation ... especially for program development, and have commented about how much easier the interactive paradigm is for program development.

one of the successes of 360 (& various other mainframes) wasn't about the ease or difficulty of batch for program development but the predictability and reliability of batch for production work, like payroll checks, bill statements, etc. (nuts & bolts of corporations) ... a factor that continues today (the systems weren't necessarily targeted for program development &/or people use ... they were targeted for executing corporate business processes).

while people may prefer to use an interactive system, many would loudly complain if some of the services they consider basic bread & butter became less predictable because of a larger reliance on the direct human element (this even extends to the webserver domain ... batch based systems tend to have more built-in facilities for automagically handling all the details).

even in batch systems, the "operator" (human element) is now being identified as one of the major failure modes. One of the large financial services identified automated operation as one of the two major factors in their having 100% availability for the past several years.

random ref:
http://www.garlic.com/~lynn/99.html#71
http://www.garlic.com/~lynn/99.html#18
http://www.garlic.com/~lynn/2000f.html#53
http://www.garlic.com/~lynn/99.html#4
http://www.garlic.com/~lynn/99.html#51

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Options for Delivering Mainframe Reports to Outside Organizat ions

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Options for Delivering Mainframe Reports to Outside Organizat ions
Newsgroups: bit.listserv.ibm-main
Date: Thu, 11 Jan 2001 18:31:37 GMT
Joseph.Wenger@CA.COM (Wenger, Joseph) writes:
Has anyone out there in the great technical beyond heard of, or had any experience with an "open" product from IBM called OPEN AFS???

It is an open source enterprise file system, and enables systems to share files and resources across local and wide area networks.


is this AFS as in the Andrew File System ... developed at CMU along with Camelot, MACH, etc (& heavily endowed/supported by IBM)?

A lot of AFS (along with DFS, NFS, and Locus) fed into the OSF distributed file system work. Locus might be considered one of the original enterprise file systems (the basis for aix/370/ps2 in the past).

random refs:
http://www.garlic.com/~lynn/2000c.html#0
http://www.garlic.com/~lynn/2000e.html#27
http://www.garlic.com/~lynn/2000.html#64
http://www.garlic.com/~lynn/99.html#2
http://www.garlic.com/~lynn/99.html#63

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

what is UART?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: what is UART?
Newsgroups: comp.arch
Date: Thu, 11 Jan 2001 18:48:19 GMT
"dannynews" writes:
what is the relation between UART and COM port??

from some glossary

Universal Asynchronous Receiver/Transmitter. This is a device in a computer or modem that will change serial data (the way data comes in over the phone line) to parallel, and vice versa.

......

COM port is something that the software addresses ... buried deep down inside the hardware associated with a COM port can be a UART chip.

On earlier generations of machines, the UART function was performed by something called line scanners.
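the serial-to-parallel job described in the glossary entry can be shown with a toy sketch (python purely for illustration; a real UART also handles start/stop bits, parity, and baud-rate timing, all omitted here):

```python
# toy serial<->parallel conversion: 8 data bits arriving one at a time
# (LSB first, as on an async serial line) assembled into a parallel byte.
def deserialize(bits):
    assert len(bits) == 8
    byte = 0
    for i, bit in enumerate(bits):
        byte |= (bit & 1) << i   # receive direction: bits in, byte out
    return byte

def serialize(byte):
    # transmit direction: parallel byte out as LSB-first bits
    return [(byte >> i) & 1 for i in range(8)]

assert deserialize(serialize(0x41)) == 0x41   # round trip
```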

random ref:
http://www.garlic.com/~lynn/2000c.html#36
http://www.garlic.com/~lynn/2001.html#5
http://www.garlic.com/~lynn/2001.html#17

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Small IBM shops

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Small IBM shops
Newsgroups: bit.listserv.ibm-main
Date: Thu, 11 Jan 2001 18:54:58 GMT
eheman@VNET.IBM.COM (Gary Eheman) writes:
I agree. In support of that, remember that the Fibre Channel Host adapters we put in NUMA-Q (and we recommend two per quad for redundancy plus bandwidth) can give you about 100MB/sec bandwidth on each card. Compare/contrast with 17MB/sec for ESCON.

minor note ... escon frequently tends to be 17mb/sec half-duplex (emulating older generation bus&tag half-duplex conventions) ... while a lot more work went into FCS to make it a full-duplex operation ... i.e. concurrent 100mb/sec in both directions (in theory, a fcs connection could sustain 200mb/sec aggregate, 100mb/sec in each direction).

I haven't done much work in the FCS standards area for some time, but I can remember lots of churn going on that effectively looked like somebody trying to map various bus&tag half-duplex conventions at the FCS3/4 layer.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

What is wrong with the Bell-LaPadula model? Orange Book

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What is wrong with the Bell-LaPadula model? Orange Book
Newsgroups: comp.security.misc
Date: Thu, 11 Jan 2001 19:12:35 GMT
bcahillcissp writes:
Hi,

I was reading an article about the Orange book and it said that since it is based in the Bell-LaPadula model, it is not good for contemporary systems.

Why is that? What about the Bell-Lapadula model makes it that it can't work on say Solaris or Windows 2000?


all programs, users, entities, etc, have a specific clearance level and all objects have a specific classification level ... all accesses are then performed with respect to those levels. clearance levels include support for a hierarchical construct. one might degenerate to the case of all objects and all entities at a single, unclassified level.

definition that I happened to have handy ( see rest of references/definitions at: http://www.garlic.com/~lynn/secure.htm, this one is from rfc2828):
Bell-LaPadula model

(N) A formal, mathematical, state-transition model of security policy for multilevel-secure computer systems. (C) The model separates computer system elements into a set of subjects and a set of objects. To determine whether or not a subject is authorized for a particular access mode on an object, the clearance of the subject is compared to the classification of the object. The model defines the notion of a 'secure state', in which the only permitted access modes of subjects to objects are in accordance with a specified security policy. It is proven that each state transition preserves security by moving from secure state to secure state, thereby proving that the system is secure. (C) In this model, a multilevel-secure system satisfies several rules, including the following:

'Confinement property' (also called '*-property', pronounced 'star property'): A subject has write access to an object only if the classification of the object dominates the clearance of the subject. 'Simple security property': A subject has read access to an object only if the clearance of the subject dominates the classification of the object. 'Tranquillity property': The classification of an object does not change while the object is being processed by the system.

[RFC2828] (see Bell-LaPadula security model) (see also confinement property, simple security property, tranquillity property, security model) (includes *-property, lattice model)
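the two access rules in the definition above reduce to a few lines of code. a minimal sketch (python; levels simplified to integers so that "dominates" is just >=, which ignores the full lattice of categories/compartments):

```python
# levels for illustration: 0=unclassified, 1=confidential, 2=secret, 3=top secret
def dominates(a, b):
    """level a dominates level b (simplified: plain integer comparison)"""
    return a >= b

def may_read(subject_clearance, object_classification):
    # simple security property: "no read up"
    return dominates(subject_clearance, object_classification)

def may_write(subject_clearance, object_classification):
    # confinement (*-)property: "no write down"
    return dominates(object_classification, subject_clearance)

# a "secret" (2) subject can read "confidential" (1) but not write it down ...
assert may_read(2, 1) and not may_write(2, 1)
# ... and can write "top secret" (3) but not read it
assert may_write(2, 3) and not may_read(2, 3)
```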


--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Competitors to SABRE?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Competitors to SABRE?
Newsgroups: alt.folklore.computers
Date: Thu, 11 Jan 2001 20:17:39 GMT
Eric Smith <eric-no-spam-for-me@brouhaha.com> writes:
They lied to you. IBM invented the hard drive in 1956, so you'd think they'd get info like that right. It was the IBM 350 RAMAC (Random Access Method of Accounting) drive, initially on the IBM 305 computer, and it stored five million characters on fifty platters.

Even if they meant the first "winchester" disk, they invented that in 1973 and got about 30 megabytes per platter IIRC.


slight issue ... they may not have been referring to a shipped disk product. a lot of times there are press releases from R&D about proof-of-concept ... things like the quantum 5-qubit machine or things like being able to do vertical magnetic recording resulting in 10x recording density.

I once saw something that looked a little like a lathe with 500 5.25in (high-density?) floppy disks all pressed together & spinning. an access arm would travel horizontally, and when it got to the target floppy, it blew compressed air to create a gap between the floppies and the r/w head would insert into the gap. I'm pretty sure it never made it out of the lab.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Options for Delivering Mainframe Reports to Outside Organizat ions

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Options for Delivering Mainframe Reports to Outside Organizat ions
Newsgroups: bit.listserv.ibm-main
Date: Thu, 11 Jan 2001 21:47:59 GMT
kpneal@POBOX.COM (Kevin P. Neal) writes:
It started out at CMU (cmu.edu) and was released with source code and a license that I think was similar to the BSD license. When they released AFS 3 they removed the source code from the net and made a commercial product out of it sold by a company by the name of Transarc. Eventually Transarc was bought by IBM.

one of the things that AFS and Locus did was local file caching (as compared to, say, NFS, which does remote file access). Locus did a lot of stuff with partial file caching (as opposed to full file caching), with the cache persistent across system boots/ipl. Locus also did stuff with process migration across distributed processors ... even (some special?) cases involving dissimilar architectures (aix/370/ps2, an open system "answer" to SAA?).

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

What exactly is the status of the Common Criteria

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What exactly is the status of the Common Criteria
Newsgroups: comp.security.misc
Date: Thu, 11 Jan 2001 22:06:00 GMT
bcahillcissp writes:
What exactly is the status of the Common Criteria?

Is it required for US Govt. systems? Is it suggested?


the "orange" book classification levels attempted to define an operating system security classification ... which would provide security for all applications and environments. getting into real world applications ... it became somewhat problematical to deliver real live generic security (independent of the application and operating environment).

by comparison, common criteria supports protection profile definitions for specific environments ... for instance there are smartcard protection profiles, firewall protection profiles, certification authority protection profiles, etc, i.e. rather than generic operating system support for high assurance, protection profile definitions can take into account the overall environment, including compensating processes at application or possibly business level.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Competitors to SABRE?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Competitors to SABRE?
Newsgroups: alt.folklore.computers
Date: Fri, 12 Jan 2001 16:03:25 GMT
jchausler writes:
Sounds like shades of the infamous "Data Cell".........

however datacell (2321) was '60s ... this lathe thing with floppies was circa '76 or '77.

datacell was more like a washing machine with a tub partitioned into ten cells. each cell held strips. the tub rotated around to position the desired cell and then the desired magnetic strip would be "picked" out of the cell (i.e. the strip was moved to the read/write head ... not the read/write head moving to the strip; an air flow trick was supposed to help get the strip pushed back down inside the cell). one of the failure modes was pushing/dropping the strip back down into the cell ... if it didn't align ... the strip would "accordion" and have to be replaced. ipl'ing the machine had this rhythmic kachunk, kachunk, whirl sound ... as the device/cell label strip was read out of each data cell, put back in, and the tub rotated to the next cell.

the IBM 360 "disk seek" ccw op had BBCCHHR addressing ... i.e. bin, bin, cylinder, cylinder, head, head, record ... the bin/bin was for selecting the 2321 cell/bin number (two bytes for bin, two bytes for cylinder, two bytes for head, and one byte for record).
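the seven-byte seek argument layout is easy to show concretely. a sketch (python struct used purely for illustration; the field widths are taken from the description above):

```python
import struct

def pack_bbcchhr(bin_, cyl, head, rec):
    """Pack a 360-style BBCCHHR seek address: two bytes bin, two bytes
    cylinder, two bytes head, one byte record -- seven bytes total,
    big-endian as on the channel."""
    return struct.pack(">HHHB", bin_, cyl, head, rec)

addr = pack_bbcchhr(0, 100, 5, 1)   # bin 0 (plain disk), cyl 100, head 5, rec 1
assert len(addr) == 7
assert addr == b"\x00\x00\x00\x64\x00\x05\x01"
```

on an ordinary disk the bin bytes stay zero; only the 2321 actually used them to select a cell.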

the university i was at had a data cell, part of a library automation project funded by the office of naval research (ONR). we also got to be a betatest site for CICS ... 2nd half of '69. I got to shoot various bugs to get it up and operational.

random refs:
http://www.garlic.com/~lynn/2000.html#9
http://www.garlic.com/~lynn/2000b.html#41
http://www.garlic.com/~lynn/2001.html#17

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Review of Steve McConnell's AFTER THE GOLD RUSH

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Review of Steve McConnell's AFTER THE GOLD RUSH
Newsgroups: comp.software-eng,comp.programming,alt.folklore.computers
Date: Fri, 12 Jan 2001 17:29:12 GMT
cjsonnack@mmm.com (Programmer Dude) writes:
It's very similar to how MS-DOS/Windows changes addresses in an .exe file when it loads it based on a table of "addresses-that-need-to-be- changed" contained in the .exe header. I have a distant memory that the link editor also did its thing during program load.

a standard batch program sequence (i.e. 3-step "job") ... was compile, link-edit & go ... i.e. FORTGCLG ... was fortran g compile, link-edit, and go.

the fortran g compile step compiled the fortran program and generated a "text" (i.e. binary) deck written to disk ... for some of the format of such a deck:
http://www.garlic.com/~lynn/2001.html#8
http://www.garlic.com/~lynn/2001.html#14

link-edit would read the "text" deck, combine it with any library programs (like the fortran libraries, scientific subroutine library, MPX library, etc), resolve external symbols, etc, and write it back out to disk in "load module" format. link-edit would also be used to generate program libraries. there were 3 (maybe 4?) different versions of link-edit. link-edit was so complicated that it normally didn't all fit in memory at once ... so it had program phases that overlaid each other. The different versions tended to differ in how the overlays were organized & the minimum memory that they could operate in.

during "go" (program execution) it was possible to dynamically call program load to bring in additional program libraries.

"go" (load) would bring the load module back into memory, update any address constants, etc. and begin program execution.
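the "update any address constants" step can be sketched concretely (illustrative python; the module format and names here are invented for illustration, not the actual OS/360 load module format):

```python
# a toy "load module": some words of program text plus a relocation table
# naming which words hold address constants that need the load address
# added at program-load time.
LOAD_MODULE = {
    "text": [0x00, 0x08, 0x0C, 0x10],  # words; indices 1 and 3 are adcons
    "relocations": [1, 3],             # offsets of the address constants
}

def load(module, base):
    image = list(module["text"])
    for idx in module["relocations"]:
        image[idx] += base             # relocate each address constant
    return image

image = load(LOAD_MODULE, 0x2000)      # loaded at (hypothetical) 0x2000
assert image == [0x00, 0x2008, 0x0C, 0x2010]
```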

a FORTG (one step) reference:
http://www.garlic.com/~lynn/94.html#18

because the job scheduler was so expensive for each step ... there was a lightweight "loader" developed that did the link-edit & go in a single step.

Then there were various lightweight monitors that called fortran g compile and then the loader ... in a single step ... effectively piping the output from one into the input of the other.

Then of course WATFOR came along ... which was a fortran compile & execution environment ... i.e. it very quickly generated pseudo code and then executed it directly.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Disk drive behavior

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Disk drive behavior
Newsgroups: comp.arch,comp.sys.ibm.pc.hardware.storage,comp.arch.storage
Date: Fri, 12 Jan 2001 20:50:11 GMT
Default <the-funny-thing-is--and-I-have-this-from-a-reliable-source--spammers-do-not-like-long-addresses@o-o.yi.org> writes:
When there is more data to be written than fits in the drive's buffers (or exceeding the number of outstanding commands supported by the drive), elevator scheduling by the OS can help even in the presence of tagged queueing.

Hypothetically, yes. But I replaced Solaris's (dead simple) disk scheduler with a smarter one, and couldn't measure any difference. So then I replaced it with a FCFS scheduler. Still no difference.

fwiw.


part of the problem depends on how much caching is going on and to what degree a specific workload thruput depends on arm latency.

in the 60s ... i initially carefully placed almost all system data on disk and saw significant throughput enhancement ... avg. arm latency was reduced by careful placement of data ... and got a 60-70% reduction in elapsed time.

misc. refs:
http://www.garlic.com/~lynn/2001.html#26
http://www.garlic.com/~lynn/94.html#18

I then replaced the FIFO arm scheduler with a very simple-minded ordered seek arm scheduler. This had the effect that as workload increased (in terms of concurrent tasks/users), the rate at which individual tasks slowed down was reduced, and the peak sustained I/O rate per arm increased. The degree to which different queuing algorithms differ depends on the degree to which there are multiple concurrent requests in the queue and how they go about re-ordering the sequence in which the requests are executed. If there are only one or two concurrent requests in the queue, there will be little or no difference between algorithms in the order that the requests get executed.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

FBA History Question (was: RE: What's the meaning of track overfl ow?)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FBA History Question (was: RE: What's the meaning of track overfl ow?)
Newsgroups: bit.listserv.ibm-main
Date: Fri, 12 Jan 2001 23:37:01 GMT
FarleyP@BIS.ADP.COM (Farley, Peter x23353) writes:
I worked on machines with all three of those devices (4361 w/3375, 4321 w/3310 and 3370). Nice memories, thank you for bringing them up.

[OT] Why was FBA never embraced by MVS/XA/ESA/OS390? For paging and VSAM allocaion, they were so simple to use, IMHO (VM/VSE usage, anyway). Even for simple sequential, it was so much easier to deal in just one allocation unit (blocks). I don't mean to start a flame on this, just curious about the history and the reasons. I never did hear or read any good technical ones, and anyone I've questioned before has always said the decision was Armonk politics, nothing technical. [/OT]

Peter


the line i got from STL in the early '80s was that it would cost $26m to put the support into MVS ... even if they accepted code that already implemented it ... i never saw anything particularly armonk in it. in any case, it was difficult to come up with an ROI for the $26m, i.e. how much more profit would be generated if they put the $26m into FBA support ... vis-a-vis putting the $26m into new feature/function (especially with some scarce skilled resource bottleneck), aka whether FBA or CKD ... the customer was still going to buy the same number of disk drives from ibm.

random refs:
http://www.garlic.com/~lynn/97.html#16
http://www.garlic.com/~lynn/97.html#29

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

FBA History Question (was: RE: What's the meaning of track overfl ow?)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FBA History Question (was: RE: What's the meaning of track overfl ow?)
Newsgroups: bit.listserv.ibm-main
Date: Fri, 12 Jan 2001 23:54:34 GMT
i even tried using the argument that over the next 20 years (i.e. by at least 2000), as they moved from CKD to FBA, they would much more than make up the FBA costs by not having to do a whole lot of questionable acts to enhance CKD. Part of it was based on experience trying to remote disks and provide various enhanced services as described/referenced in some of the HYPERChannel work with remote device adapters:
http://www.garlic.com/~lynn/2001.html#21
http://www.garlic.com/~lynn/2001.html#22
http://www.garlic.com/~lynn/2000c.html#68

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

3390 capacity theoretically

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 3390 capacity theoretically
Newsgroups: bit.listserv.ibm-main
Date: Sat, 13 Jan 2001 00:02:38 GMT
roberto_ibarra@PISSA.COM.MX (Roberto Ibarra) writes:
2,838,016,440/1024/1024/1024=2.64310878 GB,

rounding gives 2.64 GB so why do they claim their 2.8 GB???? any reasonable explanation of this?? which number should we really use? any links to information on this?


i did an alta vista search on gigabyte ... and found some places that define it as 2**30 bytes (i.e. base 2) and other places that define it to be a billion bytes (i.e. base 10). depends i guess on whether you take gigabyte to be in base 2 (i.e. 2**30) or in base 10 (billion).
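the two definitions account exactly for the discrepancy in the question:

```python
# the 3390-3 capacity from the question: 2,838,016,440 bytes
capacity = 2_838_016_440

base10 = capacity / 10**9   # "gigabyte" as a billion bytes
base2 = capacity / 2**30    # "gigabyte" as 2**30 bytes

print(round(base10, 2))     # 2.84 ... rounds to the "2.8 GB" claim
print(round(base2, 2))      # 2.64 ... the base-2 figure computed above
```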

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

3390 capacity theoretically

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 3390 capacity theoretically
Newsgroups: bit.listserv.ibm-main
Date: Sat, 13 Jan 2001 00:07:31 GMT
Anne & Lynn Wheeler writes:
i did a alta vista search on gigabyte ... and found some places that define it is 2**30 bytes (i.e. base 2) and other places that define it to be a billion bytes (i.e. base 10). depends i guess on whether you take gigabyte to be in base 2 (i.e. 2**30) or in base 10 (billion).

gone
http://www.ex.ac.uk/cimt/dictunit/dictunit.htm#prefixes
moved
http://www.convertauto.com/

yotta [Y] 1 000 000 000 000 000 000 000 000     = 10^24
zetta [Z] 1 000 000 000 000 000 000 000         = 10^21
exa   [E] 1 000 000 000 000 000 000             = 10^18
peta  [P] 1 000 000 000 000 000                 = 10^15
tera  [T] 1 000 000 000 000                     = 10^12
giga  [G] 1 000 000 000                    (a thousand millions = a billion)
mega  [M] 1 000 000                        (a million)
kilo  [k] 1 000                            (a thousand)
hecto [h] 100
deca  [da]10
1
deci  [d] 0.1
centi [c] 0.01
milli [m] 0.001                            (a thousandth)
micro [µ] 0.000 001                        (a millionth)
nano  [n] 0.000 000 001                    (a thousand millionth)
pico  [p] 0.000 000 000 001                     = 10^-12
femto [f] 0.000 000 000 000 001                 = 10^-15
atto  [a] 0.000 000 000 000 000 001             = 10^-18
zepto [z] 0.000 000 000 000 000 000 001         = 10^-21
yocto [y] 0.000 000 000 000 000 000 000 001     = 10^-24

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Disk drive behavior

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Disk drive behavior
Newsgroups: comp.arch,comp.sys.ibm.pc.hardware.storage,comp.arch.storage
Date: Sat, 13 Jan 2001 16:23:04 GMT
Anne & Lynn Wheeler writes:
I then replaced the FIFO arm scheduler with a very simple-minded ordered seek arm scheduler. This had the effect that as workload increased (in terms of concurrent tasks/users), the rate that individual tasks

i.e. unless the arm is active with a request and there is at least one pending request ... when a new request arrives ... there is nominally little or no algorithm "re-ordering" opportunity compared to FIFO (i.e. there is no difference between the normal check-out line and the <10 item checkout line if there is nobody in front of you). the drive in the above (2314) tended to saturate around 25-30 disk i/os per second assuming random service (i.e. what could be expected from FIFO).

the big benefit of the simple elevator algorithm under heavy load (lots of concurrent work and long queues) was that the drive could reach 40-50 disk I/Os per second (instead of 25-30). these were 800-byte to 4k-byte transfers where arm latency dominated compared to transfer time and rotational delay (the majority of disk service time was spent moving the arm, so any decrease in avg. arm movement time had a proportionally larger effect on total request service time).
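the ordered seek (elevator) idea can be sketched in a few lines of modern python ... purely illustrative, not the original code (which lived inside the kernel and was in 360 assembler); the cylinder numbers and the sample request queue below are made up:

```python
def elevator_order(pending, arm, direction=1):
    # pending: cylinder numbers of queued requests; arm: current cylinder;
    # direction: +1 sweeping toward higher cylinders, -1 toward lower.
    # Serve everything ahead of the arm in the current direction (nearest
    # first), then reverse and sweep back through the remainder.
    ahead = sorted((c for c in pending if (c - arm) * direction >= 0),
                   key=lambda c: (c - arm) * direction)
    behind = sorted((c for c in pending if (c - arm) * direction < 0),
                    key=lambda c: (arm - c) * direction)
    return ahead + behind

def total_seek(order, arm):
    # total arm travel (in cylinders) to service requests in the given order
    travel = 0
    for c in order:
        travel += abs(c - arm)
        arm = c
    return travel

# made-up queue: with the arm at cylinder 50, FIFO travels 279 cylinders,
# the single elevator sweep only 114
queue = [83, 14, 60, 2, 75]
assert elevator_order(queue, arm=50) == [60, 75, 83, 14, 2]
assert total_seek(elevator_order(queue, 50), 50) < total_seek(queue, 50)
```

with an empty queue or a single pending request the elevator order degenerates to FIFO ... which is exactly the "nobody in front of you in the checkout line" point above.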

the one caveat in all this was a service delay algorithm that I saw go into effect at a large cal. financial institution in the late '70s. They claimed they measured higher transaction thruput running custom code on a 370/158 than sabre/acp/tpf on a 370/168. They had lots of lightweight transactions with relatively predictable execution patterns.

When a new transaction request arrived, if the current disk arm position was further than some distance from the disk location containing the transaction data, the new transaction could be delayed until 1) the arm moved closer to the transaction's target location, 2) a sufficient number of other requests had arrived requiring the arm to be in the same disk locality, or 3) a maximum delay time had elapsed. The algorithm had a tendency to burst/batch process sets of transactions with similar disk arm locality (it was also controlled by the avg. load at the moment; delaying wouldn't occur if system load was relatively light and the probability was low that other requests would arrive before the max. delay interval expired).

This particular transaction load thruput was totally dominated by disk arm motion (i.e. by comparison cpu utilization, data transfer, etc were all trivial).
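the delay/batch policy boils down to a single dispatch decision; a minimal sketch in python, where every threshold (near, batch, max_wait, light_load) is an invented illustrative value ... the post doesn't give the institution's actual parameters:

```python
def should_dispatch(arm, target, queued_nearby, load, waited,
                    near=25, batch=4, max_wait=0.05, light_load=0.3):
    """Decide whether to start a transaction's disk request now or hold it.
    arm/target are cylinder numbers; load is fractional system load;
    waited is how long (seconds) this request has already been held.
    All threshold values here are made up for illustration."""
    if load < light_load:
        return True           # light load: low chance of batching, don't delay
    if abs(target - arm) <= near:
        return True           # (1) arm has moved close to the target locality
    if queued_nearby >= batch:
        return True           # (2) enough held requests share this locality
    if waited >= max_wait:
        return True           # (3) maximum delay time has elapsed
    return False
```

a request far from the arm on a busy system is held (should_dispatch(100, 500, 0, 0.9, 0.0) is False) until one of the three release conditions fires ... which is what produces the bursts of transactions with similar arm locality.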

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Review of Steve McConnell's AFTER THE GOLD RUSH

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Review of Steve McConnell's AFTER THE GOLD RUSH
Newsgroups: comp.software-eng,comp.programming,alt.folklore.computers
Date: Sun, 14 Jan 2001 17:03:18 GMT
jmfbahciv writes:
I guess what I was trying to figure out was if the edit part of the link required handholding. To me, "edit" means that a human is typing on keys manipulating the file. We tried very hard not to mess with relocatables that way. We had a program called MAKLIB that would allow a human being to rearrange such things. Before that I think there was a way to PIP them together. But MAKLIB would replace a logical REL file within the disk file.

link-edit is shorthand for linkage editor ... aka edits linkage/address information ...

humans can use a word processor to edit a program. however, i also know of lots of cobol code that is referred to as "edit" ... which is doing automated validation of keypunched and/or typed information (i.e. the human isn't doing the editing; an automated program is editing/validating what the human has typed). This can include consistency/sanity checks ... like does the city/state correspond to the zip-code.

in the following, main storage can be disk. the program library can be a newly created file containing only the one load module, which is passed to a "GO" step and then deleted. "object" modules are generated by compilers (and possibly other things) in addition to assemblers.

attached from glossary in IBM High Level Assembler Release 4 Installation & Customization Guide (sep. 2000):
linkage editor. A program that resolves cross-references between separately assembled object modules and then assigns final addresses to create a single relocatable load module. The linkage editor then stores the load module in a program library in main storage.

link-edit. To create a loadable computer program by means of a linkage editor.

load module. An application or routine in a form suitable for execution. The application or routine has been compiled and link-edited; that is, address constants have been resolved.


--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Text (was: Review of Steve McConnell's AFTER THE GOLD RUSH)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Text (was: Review of Steve McConnell's AFTER THE GOLD RUSH)
Newsgroups: comp.software-eng,comp.programming,alt.folklore.computers
Date: Mon, 15 Jan 2001 02:48:57 GMT
"G Swaine" writes:
[snip FORTRAN compiler/linker discussion]

Pardon my ignorance, but why was it called a "text" segment, when it was actually binary?

My understanding of it was that program text was the code segment containing the binary instructions generated by the compiler/assembler, with "data" or "uninitialised data" being other segments whose names seem more self-documenting to me. "Text" seems an unusual choice of name for the code/executable segment of a program file.


in 360 it was called object (deck) or txt (deck) ... less frequently binary. Pre-360 had BCD punch cards (a single six-bit character per col) and binary punch cards (binary cards could have holes punched in all 12 rows of a col; 2 six-bit values per col).

360 defined ebcdic which allowed 256 punch combinations ... i.e. eight-bit value in a col.

the 360 object cards (output of compilers, assemblers, etc) that contained actual machine instructions were 12-2-9 TXT cards (i.e. they had 12-2-9/x'02' punched in col. 1 and TXT in cols 2-4).


col
1               12-2-9 / x'02'
2-4             TXT
5               blank
6-8             relative address of first instruction on record
9-10            blank
11-12           byte count ... number of bytes in information field
13-14           blank
15-16           ESDID
17-72           56-byte information field
73-80           deck id, sequence number, or both

in the 360 assembler, the first TITLE statement with a non-blank name field would place that name field in cols 73-80 of all output object cards/records.
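the column layout above is enough to pull a TXT record apart; a minimal sketch in python (using the standard cp037 EBCDIC codec ... a real object-deck reader would also have to handle ESD, RLD, and END cards, and the sample card below is synthetic):

```python
def parse_txt_card(card: bytes):
    # card: one 80-byte card image from an object deck
    if len(card) != 80:
        raise ValueError('object deck records are 80-byte card images')
    if card[0] != 0x02 or card[1:4].decode('cp037') != 'TXT':
        raise ValueError('not a 12-2-9 TXT card')
    address = int.from_bytes(card[5:8], 'big')       # cols 6-8: relative address
    count   = int.from_bytes(card[10:12], 'big')     # cols 11-12: byte count
    esdid   = int.from_bytes(card[14:16], 'big')     # cols 15-16: ESDID
    text    = card[16:16 + count]                    # cols 17-72: up to 56 bytes
    deck_id = card[72:80].decode('cp037')            # cols 73-80: deck id/seq
    return address, count, esdid, text, deck_id

# assemble a synthetic TXT card: 4 bytes of instruction text at address x'100'
blank = b'\x40'                                      # EBCDIC space
card = (bytes([0x02]) + 'TXT'.encode('cp037') + blank
        + (0x100).to_bytes(3, 'big') + blank * 2
        + (4).to_bytes(2, 'big') + blank * 2
        + (1).to_bytes(2, 'big')
        + (b'\x47\xf0\x00\x00' + blank * 52)
        + 'TESTPROG'.encode('cp037'))
address, count, esdid, text, deck_id = parse_txt_card(card)
assert (address, count, esdid) == (0x100, 4, 1)
assert text == b'\x47\xf0\x00\x00' and deck_id == 'TESTPROG'
```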

following refs has discussions of several object file formats (unix a.out, ms-dos exe, unix elf, microsoft portable executable, coff, omf)
http://www.iecc.com/linker/linker03.html
http://compilers.iecc.com/comparch/article/93-08-117

random refs:
http://www.garlic.com/~lynn/2001.html#14
http://www.garlic.com/~lynn/2001.html#59

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Where do the filesystem and RAID system belong?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where do the filesystem and RAID system belong?
Newsgroups: comp.arch
Date: Tue, 16 Jan 2001 15:40:21 GMT
lindahl@pbm.com (Greg Lindahl) writes:
There are UPSes that work this way. Some low end models are intended to be powered by car batteries, but that approach doesn't look very good in an office, so doesn't seem to be so popular. I've also seen industrial UPSes built that way; we upgraded the batteries substantially during the life of the electronics.

i had a tour of a datacenter a couple years ago ... one of the things they pointed out was the PDU (power distribution unit, looks like a large double-wide freezer), and they described a problem they were having ... it was taking 10s of milliseconds too long to react (i.e. like switching between line and backup; they had feeds from two different substations plus a large diesel complex out back; the power feeds from the two substations routed to the bldg from opposite directions). Recently, they had hired three engineering firms to redesign the PDU and gave the redesign to the PDU manufacturer. During the tour they commented that over 1000 of the redesigned PDUs had already been bought by other operations (including gov. agencies).

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

California DMV

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: California DMV
Newsgroups: alt.folklore.computers
Date: Tue, 16 Jan 2001 17:30:52 GMT
bbreynolds@aol.comskipthis (Bruce B. Reynolds) writes:
In article <20010115135907.13260.00001680@ng-fg1.aol.com>, klatteross@aol.commmm (Ross Klatte) writes:

I was recently told that some time during the past 5 or 10 years California DMV attempted to install a state-of-the-art computer system, but that the system was a flop and the project was abandoned. Is this in the category of "folk lore"? If true, where can I study up on the incident?

Not folklore: for some background, here is the text of an article from the February 1996 issue of Byte: ==================================================


... lots of snipping

misc. comments

I was at university in '69 and we were (a?) beta-test site for CICS; I got to do various support and debugging on it.

Series/1 had two infrastructures/systems ... the "official" one from Boca ... RPS (the joke was that it was some transplanted Kingston engineers trying to re-invent OS/MFT on a 16-bit mini), and EDX (event driven executive), which was done by some grad students at San Jose Research for the physics department ... EDL was part of the EDX infrastructure (not aware of DMV with something else also called EDL). The S/1 stuff would have been mid-70s, not 60s.

there were competitive benchmarks for DMV infrastructure of both Tandem and IBM equipment in 1988 and 1989.

newsclip from San Jose Mercury News, 16 Oct 89
Tandem prepares to enter a new stage of competition with IBM
- savors a decisive victory in a head-to-head showdown with IBM
- California Department of Motor Vehicles
  . DMV is one of the largest state agencies anywhere
  . heavily reliant on computers
  . closely watched by many other prospective purchasers
  . Used IBM computers to maintain its huge databases for 20 years
    - 30 million vehicles
    - 20 million driver's licenses
    - 900,000 transactions by officials/law enforcement daily
- Laura Gruber, overseeing a project to modernize DMV databases
  . 1986: $40 million project to modernize hardware/software
  . "system is nearing the end of its life cycle"
  . 1988: DMV narrowed it to 2 vendors; IBM and Tandem
  . 3-1/2 months of testing by DMV
    - IBM 3090-400S passed only 2 of 10 tests
    - Tandem 2-VLX passed 9 of 10 tests (failed one)
    - IBM senior managers "were aghast" at the results
      . "They don't know how to deal with the awesome disparity"
- Tandem executives
  . Before the test, were confident they would beat IBM
  . But they didn't expect such a decisive outcome
  . "We not only surprised the world, we surprised ourselves"
  . "If the tests were repeated today, the findings would be much more lopsided."
- Tandem used 2 VLX computers, introduced in April 1986, for the tests
- Cyclone, introduced today, "is 3-5 times more powerful than the VLX models that trounced IBM's workhorse mainframe"
- Contract negotiated by the agency requires Tandem to supply its top model
  . As of now, that's Cyclone
  . Agency has taken delivery of one VLX
  . Has begun to develop software for the new system

Tandem, formed in 1974
- specialized in highly reliable, fault tolerant computers for online transaction processing
  . continuous operation
  . fast, frequent updates to data bases
- New York Stock Exchange
  . Keeps track of securities trading
  . Has used Tandem for years
  . Served as a test site for Cyclone
  . Cyclone was an outgrowth of the Exchange's difficulty in handling enormous volumes of trades during the market crash 2 years ago
    - Trade volume exceeded 600 million shares, Oct 19, 1987
    - Stock Exchange immediately began planning for a computer system to handle a 1-billion share day
- Gerald L Peterson, Tandem senior VP for sales/marketing
  . "We locked ourselves in a room for 6 weeks, and made some very basic decisions about our product strategy."
  . Many computer companies were moving toward increased reliance on workstations and PCs
  . There was a lot of feeling that workstations would take over the world ... high-end systems would go down the tubes
  . They concluded the computer business was heading in 2 directions
    - Large corporate customers were consolidating databases and putting work stations on the desk top ... simultaneously
    - They decided to go after the very large databases and provide tools for connecting to workstations
  . Most large databases were designed 15-20 years ago
    - reside on traditional mainframe computers
    - ill-suited for transaction processing
    - companies are looking for fast access to their data
  . If Tandem was going to be in that business,
    - they needed a big machine
    - they had to confront IBM, which has dominated that market
  . Tandem adjusted its requirements for Cyclone
    - to handle many of the traditional mainframe jobs
    - still being primarily suited for online transactions
  . "Competing so directly against IBM mainframes is pretty scary, but when you think about it, there's no place you can hide today from IBM."
  . "Besides, we've been taking business away from IBM for years ... we're used to it."
  . "We've been marching up that hill for years"
  . "Our sales force has always had very good weapons, now we're going to give them an atomic bomb."


Are the L1 and L2 caches flushed on a page fault ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Are the L1 and L2 caches flushed on a page fault ?
Newsgroups: comp.arch
Date: Tue, 16 Jan 2001 21:16:50 GMT
karthik_p writes:
Hi all,

Consider a system supporting paged Virtual Memory, in which the processor caches are addressed by physical addresses (I know all about virtually addressed caches, but I don't want to use them for various other reasons).

Now a page fault comes along for a non-existent virtual page, say v1, and it is brought into some physical page frame, say p, by the OS. Now if there was some other virtual page mapped into p before, say v2, the OS will invalidate it in its page table, swap it out and swap v1 in its place and all is fine.

But consider the L1 and L2 caches. Since these are addressed by physical address, the request for a data at some address in virtual page v1 will be translated into the physical address p by the MMU, which in turn may exist in the cache (by virtue of a previous access in v2). But the data in this cache line corresponds to v2 and not to v1, so it is no longer valid !

My question is: Is it possible to selectively invalidate only those addresses from the page being swapped out, in the cache or must we flush the entire cache ?

What do current systems do ? Also is there any ISA which allows selective flushing of cache lines ?

Thanx in advance,

- P.K


the original 370 architecture had an IPTE instruction ... selective invalidate page table entry (as well as ISTE, selective invalidate segment table entry) ... which turned on the invalid bit in the page table entry ... and flushed the appropriate hardware entries ... table look-aside buffer, cache, etc. Because of problems with implementing IPTE (and some other things) on the 370/165, it was dropped from the initial release of 370 virtual memory (from all models of 370, even tho it had already been implemented in other models) ... leaving just the PTLB instruction (purge table look-aside buffer) which wiped everything. IPTE for selective invalidate didn't actually make it out the door until the 3033 in the late '70s.

the introduction of the 168-3 did have a problem. 370 architecture supported both 2k & 4k pages (based on a mode bit for the whole address space). Normal "high-end" operating systems used 4k pages ... some of the lower-end 370 operating systems used 2k pages. One of the features of the 168-3 (compared to the original 168) was that it doubled the size of the cache ... and used the "2k" bit to index the additional entries; when the 168-3 was operating in 2k-page mode, only half of the cache was active. The other characteristic was that whenever a switch occurred between 2k-page mode and 4k-page mode ... the complete cache was flushed.

Many environments tended to only run in one mode or the other. However, some customer installations had tasks which intermixed 2k-page virtual memory and 4k-page virtual memory ... and any time a task switch occurred involving different modes, the complete cache was flushed. These customers experienced a noticeable performance degradation upgrading from a 168-1 to a 168-3 (with twice the cache size). The additional cache was only available for a select subset of the workload ... but there was a very expensive complete cache flush that could occur at task switch.
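a back-of-the-envelope model shows why a bigger cache can lose to a full flush at every mode switch; all the numbers here (hit/miss times, warm miss rate, cache size) are invented for illustration, not 168 measurements:

```python
def avg_access_time(refs_between_flushes, cache_lines=4096,
                    hit=1.0, miss_penalty=10.0, warm_miss_rate=0.02):
    # After a full cache flush, roughly the first cache_lines references are
    # compulsory (cold) misses while the cache refills; the rest of the
    # interval runs at the warm miss rate.  Times are in arbitrary units.
    cold = min(cache_lines, refs_between_flushes)
    warm = refs_between_flushes - cold
    misses = cold + warm * warm_miss_rate
    return hit + miss_penalty * misses / refs_between_flushes

# rare mode switches: the flush cost is amortized away;
# frequent mode switches: the cache never warms up, whatever its size
assert avg_access_time(1_000_000) < 2 < avg_access_time(8_192)
```

with these made-up numbers a task mix that forces a flush every few thousand references runs several times slower per reference than one that rarely switches ... doubling cache_lines only makes the cold-refill term worse.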

following from:
http://web.archive.org/web/20010218005108/http://www.isham-research.freeserve.co.uk/chrono.txt
Product or Event  Ann.   FCS    Description
IBM S/370 ARCH.   70-06  71-02  08 EXTENDED (REL. MINOR) VERSION OF S/360
IBM S/370-155     70-06  71-01  08 LARGE S/370
IBM S/370-165     70-06  71-04  10 VERY LARGE S/370
IBM S/370-145     70-09  71-08  11 MEDIUM S/370 - BIPOLAR MEMORY - VS READY
AMH=AMDAHL        70-10         AMDAHL CORP. STARTS BUSINESS
IBM S/370-135     71-03  72-05  14 INTERMED. S/370 CPU
IBM S/370-195     71-07  73-05  22 V. LARGE S/370 VERS. OF 360-195, FEW SOLD
Intel, Hoff       71            Invention of microprocessor
IBM 168           72-08  73-08  12 VERY LARGE S/370 CPU, VIRTUAL MEMORY
IBM OS/VS1        72-08  73-??     VIRTUAL STORAGE VERSION OF OS/MFT
IBM OS/VS2(SVS)   72-08  72+??     VIRTUAL STORAGE VERSION OF OS/MVT
Intel DRAM        73            4Kbit DRAM Chip
IBM 168-3         75-03  76-06  15 IMPROVED MOD 168
IBM 3033          77-03  78-03  12 VERY LARGE S/370+EF INSTRUCTIONS

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Are the L1 and L2 caches flushed on a page fault ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Are the L1 and L2 caches flushed on a page fault ?
Newsgroups: comp.arch
Date: Tue, 16 Jan 2001 21:27:39 GMT
Anne & Lynn Wheeler writes:
the original 370 architecture had an IPTE instruction ... selective invalidate page table entry (as well as ISTE, selective invalidate segment table entry)... which turned on the invalid bit in the page table entry ... and flushed appropriate hardware entries ... table look aside buffer, cache, etc. Because of problems with implementing

... oh, & the 370 implemented cache consistency for i/o operations.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

California DMV

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: California DMV
Newsgroups: alt.folklore.computers
Date: Tue, 16 Jan 2001 23:40:35 GMT
klatteross@aol.commmm (Ross Klatte) writes:
The California DMV story is fairly intimidating, considering their huge resources in money and high-tech people. Do I understand it correctly, that that agency is still operating on IBM mainframe, running EDL code, to this day?

don't know what they currently run ... but

minor correction ... ibm mainframes would tend to be mostly cobol & bal/assembler (aka machine language). EDL/EDX ran on the series/1 minicomputer. There aren't many series/1s around any more. The last one I actually saw, five years ago, was a telecommunications interface into the mastercard network.

While the mainframes went thru many generations with backward compatibility from the mid-60s to the current day ... I would expect any series/1s that were still around were manufactured 15-20 years ago.

i did search on series/1 ... turned up a lot of resumes ... but a couple others ..
http://www.rs6000.ibm.com/resource/pressreleases/1998/Jul/datatrend.html
http://www.migsol.com/edx.htm
http://www.rit.edu/~kjk2135/RTOS.htm
http://www.dtrend.com/app_migration.html

... as to the previous bomb reference ... it might have referred to the ability to convert a customer base that was/is firmly entrenched in their devotion to mainframes.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

what is interrupt mask register?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: what is interrupt mask register?
Newsgroups: comp.arch
Date: Wed, 17 Jan 2001 15:40:21 GMT
"dannynews" writes:
as title and what is its function?

thanks!


the 360 architecture had a PSW ... an 8-byte program status word which contained the instruction counter/pointer, condition status, protection key, problem/supervisor mode bit, ascii/ebcdic mode bit, wait state mode bit, external/timer interrupt mask, machine check interrupt mask, i/o channel interrupt masks (one bit for each of the 7 i/o channels in the 360), fixed point overflow mask, decimal overflow mask, exponent underflow mask, and significance mask. the interrupt masks enabled or disabled the processor from taking interrupts on the specific condition.

the 360/67 introduced the concept of the channel controller and consolidated channels across all processors in a multiprocessor (two to four) configuration. the 360/67 also introduced "control registers" to contain additional control information (like the virtual memory segment table pointer) not available in the basic 360 machines. in order to accommodate each processor being able to address up to 47 i/o channels, the i/o channel interrupt masks were moved from the PSW to one of the control registers and the seven i/o interrupt mask bits in the PSW were redefined. One of the redefined bits masked off all i/o interrupts. If the PSW I/O interrupt mask bit was enabled, then the processor would use the selective channel i/o mask bits to determine whether or not to present an i/o interrupt from that specific channel.

the 67 also had extended external interrupts and machine checks. the corresponding bits in the psw became summary masks (for all classes) and the detailed masks for specific external and machine check interrupts were moved into one of the control registers.

the 360/67 put a psw-mode bit into one of the control registers. on power-on/reset, the psw-mode bit defaulted to the standard 360 psw. changing the control register psw-mode bit switched the psw format from standard to (67) extended.

the other four interrupt masks (fixed point overflow, decimal overflow, exponent underflow, and significant) stayed in the PSW.
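the two-level scheme (a summary i/o mask in the PSW, per-channel masks in a control register) boils down to a pair of bit tests; a sketch in python, where the bit positions are arbitrary choices for illustration, not the real 360/67 layout:

```python
PSW_IO_SUMMARY = 1 << 6   # hypothetical bit position for the summary I/O mask

def io_interrupt_allowed(psw_mask, cr_channel_mask, channel):
    # an interrupt is presented only if the PSW summary mask is on AND the
    # control-register bit for that specific channel is on
    if not psw_mask & PSW_IO_SUMMARY:
        return False                       # all I/O interrupts masked off
    return bool(cr_channel_mask & (1 << channel))

assert io_interrupt_allowed(0, 0xFFFF, 3) is False          # summary mask off
assert io_interrupt_allowed(PSW_IO_SUMMARY, 1 << 3, 3) is True
assert io_interrupt_allowed(PSW_IO_SUMMARY, 0, 3) is False  # channel masked
```

the design point of the summary bit is that the common supervisor operation, "disable/enable everything across a critical section", touches one PSW bit instead of rewriting the whole per-channel mask register.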

360/67 was one of the later 360s, not announced until aug. of 65 and started shipping june of '66.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

future trends in asymmetric cryptography

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: future trends in asymmetric cryptography
Newsgroups: sci.crypt
Date: Wed, 17 Jan 2001 16:01:44 GMT
rasane_s writes:
Mr Ashwood/Joe,

Thankyou.

In article <uCT8n1$fAHA.281@cpmsnbbsa07>, "Joseph Ashwood" wrote: We have the beginnings of this process now, with the questioning of X.509 by several researchers over the last few years.

sir, would like to read more on this. could you provide any links to papers...


x.509 isn't so much about asymmetric cryptography but about key distribution and information binding between parties that have no prior business relationship, especially in an offline environment. for centuries, businesses have used account records for information binding ... especially for timely and aggregated information. x.509 identity certificates also can represent a significant privacy issue, especially with respect to retail financial transactions.

several process scenarios have been looked at where, once two entities establish some sort of (business or other) relationship, certificates for the purpose of establishing some level of reliance between the two parties become redundant and superfluous ... especially with respect to online electronic transactions that might involve timely and/or aggregated information (compared to stale &/or privacy information bound into certificates).

there has been a mapping of the recently passed account-based secure payment objects standard to an account-based public key authentication process (as opposed to a certificate-based public key authentication process).

misc. refs at
http://www.garlic.com/~lynn/
http://www.garlic.com/~lynn/8583flow.htm
http://lists.commerce.net/archives/ansi-epay/200101/msg00001.html
http://web.archive.org/web/20020225043356/http://lists.commerce.net/archives/ansi-epay/200101/msg00001.html

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

California DMV

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: California DMV
Newsgroups: alt.folklore.computers
Date: Wed, 17 Jan 2001 16:18:50 GMT
bbreynolds@aol.comskipthis (Bruce B. Reynolds) writes:
The Series/1 was manufactured until 1992, in order to meet prior commitments, most particularly to Prodigy, which used the machines for Prodigy "Classic" thru October, 1999, cutting off the service with a Y2K excuse (as the Series/1 does not have an epoch clock, there is no Y2K problem with the machine per se: RPS and EDX actually did/do handle Y2K correctly with some display problems (failure to zero-fill the date year field)). Service was available thru 1999, I believe.

the last time i tried to order a series/1 was mid-80s. I was told at the time that there was now a year's waiting list, because recently some company had placed a large order of several hundred machines ... which was a year's manufacturing output. I visited the company involved and they had uncrated boxes filling the hallways and I tried to cut a deal for a couple (to help to cut down on their hallway congestion problem).

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

what is interrupt mask register?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: what is interrupt mask register?
Newsgroups: comp.arch
Date: Wed, 17 Jan 2001 21:53:42 GMT
Konrad Schwarz writes:
In what situations does it make sense to disable the machine-check interrupt?

from 360 model 67 reference data "blue card" (229-3174)

control register 6


bits
0   1
-----
0   0   only CPU machine checks will be recognized
0   1   cpu and channel controller 1 machine checks will be recognized
1   0   cpu and channel controller 0 machine checks will be recognized
1   1   all machine checks will be recognized

CPU and channel controllers were independent processors sitting on the memory bus ... any of them could indicate machine check.

also in CR6 were bits 26, 27, 28, 29 ... for masking external signals 2-7 indicating specific CPU malfunction alert.

i.e. control registers were processor specific ... so a processor could enable/disable malfunction alerts from other processors ... and/or once it was determined that a specific processor was no longer usable and/or active turn off getting any additional malfunction alerts from that machine.
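the blue-card table above decodes mechanically; a tiny sketch in python (source names are my own labels, not blue-card terminology):

```python
def recognized_machine_checks(cr6_bit0, cr6_bit1):
    # CPU machine checks are always recognized; per the table above, CR6
    # bit 0 enables channel controller 0 and bit 1 enables channel controller 1
    sources = ['cpu']
    if cr6_bit0:
        sources.append('channel controller 0')
    if cr6_bit1:
        sources.append('channel controller 1')
    return sources

assert recognized_machine_checks(0, 0) == ['cpu']
assert recognized_machine_checks(0, 1) == ['cpu', 'channel controller 1']
assert recognized_machine_checks(1, 1) == [
    'cpu', 'channel controller 0', 'channel controller 1']
```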

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

what is interrupt mask register?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: what is interrupt mask register?
Newsgroups: comp.arch
Date: Wed, 17 Jan 2001 23:40:50 GMT
cecchi@signa.rchland.ibm.com (Del Cecchi) writes:
Yes I agree. A new technique for responding has been revealed. In addition to those of us who use homework questions as occasions to exercise our creative writing and imagination, preferably with some humor, we now have the art form "totally accurate exposition on the topic at a level of no use to the asking student" I love it.

Now the mystery: Did Lynn do it on purpose? :-)


is it possible to correctly answer either way ... although I do remember using a similar process during a talk I once gave in YKT ... it really upset the person that asked the question.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

what is interrupt mask register?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: what is interrupt mask register?
Newsgroups: comp.arch
Date: Thu, 18 Jan 2001 16:01:46 GMT
Konrad Schwarz writes:
I am not familiar with the S/360 architecture, but came across machine check execeptions and checkstop condition in PowerPC. It seems the terminology and technology existed long before. Were they introduced with 360, or did they exist before?

What sort of failures causes a S/360 machine check? AFAIR, PowerPC 603 machine check was primarily caused by accessing a non-existing physical address. Did some S/360 models have permananent boundary-scan testing or something similar to ensure that the unit was still working?


an s/360 machine check that resulted in a "red-light" check stop ... one that didn't even bother to try & generate an interrupt ... was a failed memory update. there were also several things that you could set on the front panel enabling checking for various conditions that would also put the machine into check stop mode.

when i was an undergraduate, a couple of us were building a 360 control unit replacement which required building a card that interfaced/attached to the 360 channel (somewhere this effort is documented as originating the 360 OEM control unit business ... oem - other equipment manufacturer). the cpu and channels shared the memory bus. one of the other things that was on the memory bus was the location 50 "timer" ... i.e. the 32bit word at location x'50' in low storage. the timer would tic every 13.? microseconds (this was the high-resolution timer on the 360/67; the lower-end machines had timers that tic'ed less frequently) and attempt to update the value at location x'50'. if the timer tic'ed again before it was able to obtain the memory bus to update location x'50' from the previous tic ... the machine would red-light/check-stop.

one of the problems that we caused when we were testing our board was holding the memory bus for two consecutive timer tics ... w/o freeing the memory bus to allow other accesses.

random refs:
http://www.garlic.com/~lynn/99.html#12
http://www.garlic.com/~lynn/2000c.html#36
http://www.garlic.com/~lynn/2001.html#5

...............

conditions that would cause machine check interrupt and store indication code (from system/360 model 67 reference data "blue card" 229-3174):
more than one associative register contains identical information, or one of the comparing circuits is at fault (67 had a fully associative table look aside buffer for virtual address lookup)

a successful compare is achieved with a virtual address that is higher than the address in the segment table

the virtual address portion of the translated address just stored in the associative array does not compare with the virtual address that should have been stored

a reset of the load-valid bits in the associative array was unsuccessful

parity of the adder sum is inconsistent with the predicted parity

parity of the virtual address was incorrect when received by the associative array

parity of the data word from storage was incorrect when received by dynamic address translation circuitry

parity of instruction bits 8-15 was incorrect when received by dynamic address translation circuitry


..............

reference to esa/390 principle of ops (don't know of anyplace that has a 360 principle of ops online):
http://www.s390.ibm.com:80/bookmgr-cgi/bookmgr.cmd/BOOKS/DZ9AR004/CONTENTS?SHELF=#A%2e6%2e2

chapter 11 on machine-check handling
http://www.s390.ibm.com:80/bookmgr-cgi/bookmgr.cmd/BOOKS/DZ9AR004/11%2e0

11.4 check-stop state
http://www.s390.ibm.com:80/bookmgr-cgi/bookmgr.cmd/BOOKS/DZ9AR004/11%2e4?SHELF=

from above:
In certain situations it is impossible or undesirable to continue operation when a machine error occurs. In these cases, the CPU may enter the check-stop state, which is indicated by the check-stop indicator.

In general, the CPU may enter the check-stop state whenever an uncorrectable error or other malfunction occurs and the machine is unable to recognize a specific machine-check-interruption condition.

The CPU always enters the check-stop state if any of the following conditions exists:

PSW bit 13 is zero, and an exigent machine-check condition is generated.

During the execution of an interruption due to one exigent machine-check condition, another exigent machine-check condition is detected.

During a machine-check interruption, the machine-check-interruption code cannot be stored successfully, or the new PSW cannot be fetched successfully.

Invalid CBC is detected in the prefix register.

A malfunction in the receiving CPU, which is detected after accepting the order, prevents the successful completion of a SIGNAL PROCESSOR order and the order was a reset, or the receiving CPU cannot determine what the order was. The receiving CPU enters the check-stop state.

(c) copyright IBM corp.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

California DMV

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: California DMV
Newsgroups: alt.folklore.computers
Date: Fri, 19 Jan 2001 04:19:24 GMT
jsaum@world.std.com (Jim Saum) writes:
IIRC, the System/7 FDP was called something like LABS/7, and as distributed it assumed the use of the System/7's ACCA (high-speed asynch comm adapter) to talk to a mainframe host.

when my wife and I were running HSDT in the '80s ... we also worked with PNB trying to turn their series/1 based stuff into a product and also porting it to rios. it had a channel adapter card for simulating 37xx/ncp to the mainframe. in some sense it & the perkin/elmer box were competing with 37xx ... and the perkin/elmer box was something i had started on as an undergraduate nearly 20 years previously.

random refs.
http://www.garlic.com/~lynn/99.html#70
http://www.garlic.com/~lynn/99.html#67
http://www.garlic.com/~lynn/2000b.html#66
http://www.garlic.com/~lynn/94.html#33a
http://www.garlic.com/~lynn/99.html#12
http://www.garlic.com/~lynn/2000c.html#36
http://www.garlic.com/~lynn/2001.html#5
http://www.garlic.com/~lynn/2000c.html#43
http://www.garlic.com/~lynn/94.html#33b
http://www.garlic.com/~lynn/2001.html#71

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

how old are you guys

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: how old are you guys
Newsgroups: alt.folklore.computers
Date: Fri, 19 Jan 2001 15:24:55 GMT
roggblake@inamme.com (Roger Blake) writes:
And also very interesting that the unelected bureaucrats of the FCC are now throwing away a much larger installed base for the sake of their incompatible "HDTV" system. You really have to wonder who was paid off, how much, and by whom for the authorization of this blatant and enormous consumer ripoff.

i don't know about any of that ... but in the early '90s i was on the HDTV technology mailing list ... never did actually attend any of the meetings ... but my impression was there was lots & lots of churn about national competitiveness ... commerce dept. making press releases about vitality of the country, etc. ... misc. stuff from archives (much more oriented towards HDTV technology not just being a TV upgrade but common technology across a number of industries ... which might tend to drive up initial costs ... but if truly common components, might eventually drive down costs with volume).

To: cohrs-arch@media-lab.media.mit.edu
Subject: ccir info 5sep90, draft 25aug90 (part 1 of 2)

TO:       US CCIR Study Group 11, Interim Working Party 11/9

FROM:     Architecture Working Group (Chair: Gary Demos)
Committee On Open High Resolution Systems

DATE:     5 September 1990

SUBJECT:  Considerations for Cross Industry Harmonization of HDTV/HRS

As an informal group of professionals working in the computer,
broadcasting, imaging and entertainment industries, we are highly
supportive of recent CCIR initiatives to harmonize technical standards
for high resolution/high definition television systems across
industries.  Concurrent with these CCIR developments, we too have been
studying the technical questions that would facilitate the growth of
cross-industry HRS/HDTV.  Out of that experience, we offer the attached
submission as an appropriate set of issues for IWP 11/9 to consider.
The list is by no means complete and likely will need refinement in
light of future consideration.

I.   Scalable High Resolution Systems Issues

A.   List of industries which could benefit from compatible HDTV/HRS
system architectures.

B.   List of current and possible future applications of HDTV/HRS
across industries.

C.   Design criteria, common parameters, and requirements in order for
HDTV/HRS to operate across industries and applications.

Design criteria are not meant necessarily to be absolute constraints,
but rather to offer a starting point for deliberations and a measure of
the results of the deliberations.  A list of criteria might include:


o application to film and video post production
o application to computer workstations and personal computers
o transmission/distribution through (and among) terrestrial broadcast (6 MHz), cable, satellite, fiber, computer networks, videotape, videodisk, theatrical release, etc.
o down conversion to NTSC, PAL, and SECAM
o high-quality flicker-free viewing in various environments (e.g., viewing distance, angle, lighting)
o handling of still frame images

The demands of different industries and applications vary. For example, some industries use imaging which is not spatially bandwidth limited, including computer displays containing text, windows, and graphics. Flicker rates higher than 70 Hz may be required for such imagery when displayed on CRT displays, while active matrix flat panel displays may be flicker-free at much lower rates. It has also been found that interlace is not acceptable on CRT computer display screens. As another example, broadcast television motion update rates for sports and other coverage may have a minimal threshold which is possibly near 45-50 Hz.

D.   Scalability in resolution, temporal rates, colorimetry, and intensity dynamic range, as a criterion for international standards for high resolution systems.

In an ideal world, one could scale from any resolution set (vertical, horizontal, temporal) to any other resolution set with no loss in image quality, and with minimal computational cost. Unfortunately, we do not live in an ideal world. Even if we could afford arbitrary complexity of filtering hardware to scale between any two resolutions, we still incur a certain amount of information loss during the resampling process among many transcoding sets. Indeed, it may turn out that only a small set of transcoding sets exists for which the transcoding cost and information loss is minimal. Nevertheless, different industries require imaging systems across a wide spectrum of resolutions and frame rates.
It is desirable that such systems easily exchange data and program material between them. This feature of interoperability is critical, and the confluence of computing, telecommunications, and entertainment demands a solution. What forms should the scalable video standard take? The issue is slightly more complex than for other standards, since it is intended to be very general, to cut across many industries, and to permit other imaging standards to be subsets of it.

E.   Form of technical guidelines that would permit optimal scalability among members of a family of resolutions, temporal rates, colorimetry, and intensity dynamic ranges.

There are many interesting and important resolutions, temporal rates, colorimetry, and intensity dynamic ranges that will not be in a transcoding set which permits maximum signal preservation with minimal computation. For example, when using certain values as a base resolution for a transcoding set (and allowing factors of two or three for fractional or whole number scaling), neither NTSC, PAL, nor CCIR 601 falls into the set. Two options exist for such cases. Option 1 consists of acknowledging that such sets exist, but not recommending what to do about them. Option 2 consists of providing guidelines to be used when transcoding between these sets. Option 2 guidelines might include: (a) how to scale to resolutions not in the set (i.e., what are the appropriate filtering techniques at reasonable cost), (b) a rigorous quantification of the effective resolution loss for those transcoding sets/filters, and (c) alternatives for transcoding that could minimize information loss but which modify picture organization (such as the use of border areas or side cuts).

F.   Planned extensibility of international standards and guidelines for these extensions for future improvements in resolutions, temporal rates, colorimetry, and intensity dynamic range.
To ensure a long-lived useful standard (or family of standards) in light of rapid technological advances, it seems desirable to accommodate future resolution increases -- thus, an extensible family of resolutions, temporal rates, colorimetry, and intensity dynamic range.

For example, when transcoding an image from sampling rate A to rate B, the higher the beat frequency from A to B, or the shorter the repeat distance of the cross sampling process, the simpler the required digital processor. Also, the shorter the repeat distance of the cross sampling process, the better the perceived quality of the resulting image, particularly for image features with high spatial frequencies approaching or above the Nyquist rate (e.g., alternate black and white pels). A simple fraction rule characterizing the gross sampling ratio, such as:

a/b = 2^n * 3^m,  where n = .., -1, 0, 1, ..  and  m = -1, 0, 1

yields filters which provide effective and convenient transcoding among signals. This produces ratios of the form: 1/4, 1/3, 3/8, 1/2, 2/3, 3/4, 1, 4/3, 3/2, 2, etc. The ramifications of such a simple fraction rule are fundamental to digital signal processing (cf. Nyquist, Shannon). Qualitative considerations suggest that the penalty can be very large as one departs from a simple ratio of two small numbers. It is widely known that observers tend to judge images based on the quality of the worst (as opposed to the overall average) artifacts contained. Therefore, it is reasonable to focus on the highly aliased portions of resampled images, i.e., those portions where the ramifications of the simple fraction are most critical. The testing choices here are important to make quantitative measurements meaningful.

G.   Qualitative and quantitative methods for choosing candidate base resolutions and temporal rates for an extensible standard.

The simple fraction rule suggests the selection of a basis value for resolution and temporal rate from which an extensible compatible family can be derived.
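As a side note: the simple fraction rule above is easy to check mechanically. The following few lines of Python (the bounds on n are an arbitrary illustration of mine; the rule itself allows any integer n, with m restricted to -1, 0, 1) enumerate the ratio set and show that the ratios quoted in the text all fall out of it:

```python
from fractions import Fraction

def transcoding_ratios(n_range=range(-3, 3), m_values=(-1, 0, 1)):
    """Enumerate gross sampling ratios a/b = 2^n * 3^m.

    n_range bounds are illustrative only; the rule allows any
    integer n, with m restricted to -1, 0, 1.
    """
    return sorted({Fraction(2) ** n * Fraction(3) ** m
                   for n in n_range for m in m_values})

# The ratios quoted in the text -- 1/4, 1/3, 3/8, 1/2, 2/3, 3/4,
# 1, 4/3, 3/2, 2 -- all appear in this enumeration.
ratios = transcoding_ratios()
```

Since 2 and 3 are distinct primes, every (n, m) pair yields a distinct ratio, so the family never collapses.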
A list of criteria for selecting a basis might include: ease of constructing low cost frame buffer memories, ease of frame buffer memory addressing, ease of transcoding to international video or imaging standards, ease of converting from existing film/video libraries, ease of building cameras and production equipment, etc. Until such criteria are derived and priorities and ramifications are characterized across industries and applications, it is difficult to compare the merits of alternative basis proposals.

H.   Use of linear, logarithmic, quasilog, and XA-11 transfer functions in the intensity representation used for digital pixel values across industries and applications.

I.   Characterization of errors introduced in conversion between these different pixel representations, and guidelines for the required number of bits allocated in each representation.

The digital representation of pixels using linear light (lux) can give excellent results for spatial filtering operations. However, a logarithmic representation seems more appropriate at times for storage in image memories and frame buffers. In graphics arts applications, a quasilog representation is common. Thus, it is important to try to characterize a representation that is amenable for use across industries and applications.

J.   Merits of regional sync among multiple sources to minimize buffering and latency when accepting simultaneous signals from multiple sources.

Local transmission buffering to allow vertical retrace synchronization to the nearest temporal basis rate sync time could be beneficial as a global assist for efficiency and economy. Transmission buffering at the regional repeaters for signals, such as terrestrial broadcast transmitters, cable head ends, computer interactive video sources, and network nodes, would be beneficial. It would minimize buffering at every display, and would potentially minimize latency during channel or signal source switching.
Also, it would enable multiple channels to be displayed simultaneously on a single screen without full frame buffering for each channel. Certain signal sources, such as direct broadcast satellite, may not be able to synchronize regionally due to large coverage areas and inherently large propagation delays in the signal transit from satellite to receiver dish. For those sources where local interlocked sync is possible, benefits to the capability and economy of the high resolution receiving system will accrue.

II.  Signal, Compression, and Transcoding Issues

A.   Relationships between various compression techniques and potentially different requirements across industries and applications.

Compression is expected to be an important part of high resolution system architectures in order to conserve storage, memory, and transmission bandwidth. Compression algorithms with a minimum of loss exist with broad applicability. High quality compression algorithms are showing continuous improvements and significant performance. Since images may be compressed and decompressed at a number of digital processing steps, compression algorithms which do not significantly degrade the image after the first compression will be useful in applications where an image must be reconstructed close to the original, such as in certain post-production activities, scientific imaging, etc.

B.   Optimal transcoding between different compression algorithms for digital video data.

The maintenance of maximum signal when a compressed digital video sequence is converted from one compression format to another needs to be considered. Since compression will be a critical component of any digital video system and interoperable systems are desired, how to transcode in the compression domain needs to be understood as well. Signal to noise ratio (SNR) degradation when performing compression transcoding operations might be a useful metric.

C.   Definition of a quality space over which one can evaluate the merits of various transcoding schemes.

There are at least three metrics for evaluating the quality of compressed/transcoded imagery. These are: (1) linear information loss, such as signal-to-noise ratio (SNR) and peak signal-to-noise ratio, (2) structural information loss, such as degradation of edges or smooth surfaces, annoying quantization noise, etc., and (3) subjective and perceptual measurements. These metrics and others could form the basis for a test suite for different algorithmic approaches.

D.   Scalable approaches in image transmission/storage systems based on block transform and/or sub-band decompositions.

High-frequency signal components can be used by receivers which have the necessary display resolution. Decoding of successively higher resolution imagery can be performed by receiver modules which increase in complexity and expense with signal bandwidth. A family of bandpass or lowpass signals transmitted by the source can greatly reduce the [...] For example, a full resolution signal can be accompanied, without compression, by a complete set of downsampled counterparts at powers of two ratios, at only a 1/3 increase in bandwidth. The question is, by what factor is it reasonable to increase transmission bandwidth to minimize the cost of flexible reconstruction at a variety of different receivers? The answer greatly impacts the question of scalable resolution and the tactics for effecting it in the receiver.

E.   Relative merits of using YUV encoding with unequal channel resolutions for data reduction (as compared to RGB with equal resolution) for different applications.

The eye's sensitivity to U and V resolution and brightness appears less than to Y in many demonstrations. However, for the case of blond hair, flesh tones, gold lettering, or blue water, the reduced sharpness in V will often create a perceptible blur. That is, if a U or V channel is seen juxtaposed to a strong Y channel, its relative perceived information level will be higher. However, when the information is primarily contained in U or V, and these channels are isolated from changes in Y or each other, it has not yet been demonstrated that the degradation is acceptable. Also, the use of blue-screen composites or other special effects techniques may require U or V signal integrity beyond normal perceptual requirements. For these reasons further investigation of RGB formats, or other amounts of data reduction in the resolution of U and V, different from the usual 2:1 in U, and 2:1 or 4:1 in V, might be warranted. Other color spaces are also commonly used, such as Hue, Saturation, and Value (HSV), and Yellow, Cyan, Magenta, and Black (YCMK, used in printing). Investigation of the ramifications of conversions between different color spaces, in light of resolution differences in different color components, might be worthwhile.

F.   Interlace and interoperability among systems and applications.

From a purely technical perspective, interlace may be viewed as a lossy form of image compression which is irreversible and prone to artifacts. Also, interlace may be less appropriate for certain applications, such as computer displays. An investigation of the difficulties which would come from an interlaced system in attempting to achieve interoperability across industries is warranted.

G.   The role of frame buffers in system architectures, and their effect on decoupling transmission rate, display update/refresh rate, and capture rate.

Frame buffers are likely in many high resolution systems. These and other portions in the chain from image capture to image display need not necessarily operate identically, but rather may each be independently optimized. However, if links in the chain operate differently, it is potentially valuable to make each link compatible through the chain as a family.

H.
The role of pre- and post-filtering in the overall design of high resolution architectures.

High resolution images are typically sampled and then communicated or stored using a possibly noisy channel before being displayed. The resultant image quality can be substantially improved if the image is properly filtered prior to sampling (pre-filtered), and then properly filtered again prior to display (post-filtered). The nature of the filters giving optimum performance depends very much on the statistical character of the imagery, the nature of any channel (or other) noise, and the perceptual characteristics and preferences of the viewer. Since there is no single optimum pair of filters, certainly across multiple industries and applications, and since images may not be displayed until much later when displays and user preferences are different, it may be desirable to label high resolution data with the prefilter used to generate it and perhaps with the identity of the recommended postfilter. This information could be provided directly or indirectly via the universal descriptor discussed below.

I.   Guidelines for image filtering in a scalable video system.

One possibility for how to offer guidelines for transcoding between different resolution sets (in and out of a family) is to provide a mechanism for a parameterised filter. The parameterised filter would take the desired source and target spatial resolutions for a transcoding operation and return the appropriate filter kernel and filter width. There could be two modes: transcoding with whole numbers and transcoding with fractional numbers.

J.   Considerations with respect to the image-capture mechanism.

Transcoding from one video format to another generally involves filtering (interpolation) and resampling. In considering the entire process from the real-world original to the final real-world display, it is desirable to have precise knowledge of the complete processing chain.
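As an aside, the parameterised filter of section I ("take source and target resolutions, return kernel and width") might be sketched roughly as follows. The triangle (linear) kernel and the width heuristic are illustrative choices of mine, not anything the submission specifies:

```python
def triangle_kernel(src_res, dst_res):
    """Illustrative parameterised filter: given source and target
    spatial resolutions, return (taps, width) for a triangle kernel.

    When downsampling, the kernel support is widened by the scale
    ratio so it also acts as an anti-alias pre-filter; when
    upsampling, a 1-pixel half-width triangle (plain linear
    interpolation) suffices.
    """
    scale = src_res / dst_res          # > 1 means downsampling
    width = max(1.0, scale)            # half-width in source pixels
    n = int(2 * width) | 1             # odd tap count covering support
    center = n // 2
    taps = [max(0.0, 1.0 - abs(i - center) / width) for i in range(n)]
    total = sum(taps)
    return [t / total for t in taps], width  # normalize to unity gain
```

For a 2:1 downsample (e.g., 1920 to 960), this yields a 5-tap kernel proportional to [0, 1, 2, 1, 0], i.e., the wider support that keeps aliasing down; a real design would likely use a windowed-sinc family and the whole/fractional modes the text describes.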
The focal plane image is often limited in quality due to quantum effects and/or optical deficiencies, either inherent or due to imperfections of components. It is also helpful to know the linear and/or nonlinear processing to which the optical image was subjected before the video signal was created. These effects depend, among other things, on the physics of the devices and the signal processing that was used. If presented with a video signal whose gestational characteristics are not well known, optimal transcoding may not be possible. To put it another way, there may well not be a single best way to transcode between two different formats if the video signals were derived in widely differing manners. Further study may produce recommendations for standard image-capture techniques to avoid these problems.

K.   Relationships between flying spot analog systems and digital fixed pixel raster systems.

Flying spot digital systems do not have the same sharpness as fixed pixel raster systems. Flying spot analog systems may degrade upon being digitized, due to the spot motion and coverage being sampled on discrete pixels. Examples of fixed pixel raster devices are CCD camera sensors and active matrix flat panel displays. Some CRT film scanning systems also use "point plot rasters" where the spot samples each pixel without motion. And in CRT computer displays, a flying spot CRT displays a frame-buffer digital image. All of these examples may produce complex relationships that need to be studied to determine optimum interoperability parameters.

L.   Range of acceptable number of A/D and D/A transformations for various applications.

Each time an analog value is digitized, and each time a digital value is converted to analog, signal error is introduced. These errors are generally given the term "quantization error", but the nature of these errors can be very complex. In general, the error may be reduced when a greater number of bits are used for the digital representation, but issues such as logarithmic or linear representation, color, and other factors may be significant as well. To minimize errors introduced at each analog to digital (A/D) and digital to analog (D/A) conversion, or for some specified level of signal preservation, recommendations may be required as to the number of conversions acceptable for different applications.
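The per-conversion penalty that section L worries about can be made concrete with the textbook idealization for a uniform quantizer driven by a full-scale sine wave: SNR = 6.02*N + 1.76 dB for N bits. This formula is my own illustrative addition, not something quantified in the submission, and real converters fall short of it:

```python
import math

def quantization_snr_db(bits):
    """Ideal SNR (dB) of an N-bit uniform quantizer with a
    full-scale sine input: the standard 6.02*N + 1.76 dB result,
    from quantization noise power q^2/12 vs sine signal power.
    """
    return 20 * math.log10(2) * bits + 10 * math.log10(1.5)

# Each extra bit buys roughly 6 dB of headroom; each A/D-D/A round
# trip in the processing chain spends some of that budget.
for b in (8, 10, 12):
    print(f"{b:2d} bits: {quantization_snr_db(b):5.1f} dB")
```

Under this idealization an 8-bit path starts near 50 dB, which suggests why limiting the number of conversions (or carrying more bits internally) matters for a multi-stage chain.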
--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

