List of Archived Posts

2001 Newsgroup Postings (05/24 - 06/27)

Anybody remember the wonderful PC/IX operating system?
Anybody remember the wonderful PC/IX operating system?
Mysterious Prefixes
Oldest program you've written, and still in use?
some VLIW (IA-64) projections from January, 1999...
Emulation (was Re: Object code (was: Source code - couldn't resist compiling it :-))
Oldest program you've written, and still in use?
Oldest program you've written, and still in use?
Theo Alkema
Theo Alkema
5-player Spacewar?
Climate, US, Japan & supers query
5-player Spacewar?
5-player Spacewar?
5-player Spacewar?
Medical data confidentiality on network comms
Wanted other CPU's
Accounting systems ... still in use? (Do we still share?)
Accounting systems ... still in use? (Do we still share?)
offtopic: texas tea (was: re: vliw)
VM-CMS emulator
Theo Alkema
Early AIX including AIX/370
MERT Operating System & Microkernels
Question about credit card number
Question about credit card number
Price of core memory
Design (Was Re: Server found behind drywall)
IBM's "VM for the PC" c.1984??
Question about credit card number
IBM's "VM for the PC" c.1984??
Remove the name from credit cards!
"SOAP" is back
IBM's "VM for the PC" c.1984??
Security Concerns in the Financial Services Industry
Security Concerns in the Financial Services Industry
Ancient computer humor - The Condemned
Ancient computer humor - Memory
Ancient computer humor - Gen A Sys
Ancient computer humor - DEC WARS
Remove the name from credit cards!
Test and Set (TS) vs Compare and Swap (CS)
Golden Era of Compilers
Golden Era of Compilers
Golden Era of Compilers
Golden Era of Compilers
Ancient computer humor - The Condemned
any 70's era supercomputers that ran as slow as today's supercomputers?
any 70's era supercomputers that ran as slow as today's supercomputers?
any 70's era supercomputers that ran as slow as today's supercompu
Price of core memory
Logo (was Re: 5-player Spacewar?)
any 70's era supercomputers that ran as slow as today's supercomputers?
any 70's era supercomputers that ran as slow as today's supercomputers?
any 70's era supercomputers that ran as slow as today's supercomputers?
any 70's era supercomputers that ran as slow as today's supercomputers?
any 70's era supercomputers that ran as slow as today's supercomputers?
any 70's era supercomputers that ran as slow as today's supercomputers?
JFSes: are they really needed?
JFSes: are they really needed?
JFSes: are they really needed?
Test and Set (TS) vs Compare and Swap (CS)
any 70's era supercomputers that ran as slow as today's supercomputers?
First Workstation
Converting Bitmap images
mail with lrecl >80
commodity storage servers
IBM mainframe reference online?
Q: Merced a flop or not?
Test and Set (TS) vs Compare and Swap (CS)
Test and Set (TS) vs Compare and Swap (CS)
commodity storage servers
Simulation Question
Test and Set (TS) vs Compare and Swap (CS)
Test and Set (TS) vs Compare and Swap (CS)
Test and Set (TS) vs Compare and Swap (CS)
Test and Set (TS) vs Compare and Swap (CS)
FREE X.509 Certificates
HMC . . . does anyone out there like it ?
FREE X.509 Certificates

Anybody remember the wonderful PC/IX operating system?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Anybody remember the wonderful PC/IX operating system?
Newsgroups: alt.folklore.computers
Date: Thu, 24 May 2001 13:35:52 GMT
Paul Nunnink writes:
Hi All,

going through my computer software collection I've found this wonderful operating system PC/IX. I remember getting this from someone at IBM as a present. It has a big stamp on it: "EVALUATION COPY". That must have been, well, around 1985 I guess. I owned a PC/XT then. My, I was proud, PROUD of it man! Real Unix! A system 'just like the university'. Also I had (and still have) a Lear Siegler ADM5 terminal and a VT100. These could be plugged into the COM ports of the XT and, voila!, a multi-user system. Damn, I still remember the thrill of sitting in one little room of my flat, while hearing the printer go in the other little room. Nice, huh, being young and naive; after all, that is 16 years ago. Boy, time flies.....


mine is long gone ... but wasn't it a gray box that was something like IBM PC/IX ... by Interactive. It was an AT&T System III port. Interactive basically did the same/similar port to the PC/RT (for ibm) ... but to the VRM layer rather than directly to hardware (which ibm subsequently heavily modified and called AIX).

lots of extraneous pc/rt references
https://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party
https://www.garlic.com/~lynn/97.html#25 Early RJE Terminals (was Re: First Network?)
https://www.garlic.com/~lynn/98.html#25 Merced & compilers (was Re: Effect of speed ... )
https://www.garlic.com/~lynn/98.html#26 Merced & compilers (was Re: Effect of speed ... )
https://www.garlic.com/~lynn/98.html#27 Merced & compilers (was Re: Effect of speed ... )
https://www.garlic.com/~lynn/98.html#28 Drive letters
https://www.garlic.com/~lynn/99.html#2 IBM S/360
https://www.garlic.com/~lynn/99.html#23 Roads as Runways Was: Re: BA Solves Y2K (Was: Re: Chinese Solve Y2K)
https://www.garlic.com/~lynn/99.html#36 why is there an "@" key?
https://www.garlic.com/~lynn/99.html#64 Old naked woman ASCII art
https://www.garlic.com/~lynn/99.html#65 Old naked woman ASCII art
https://www.garlic.com/~lynn/99.html#66 System/1 ?
https://www.garlic.com/~lynn/99.html#129 High Performance PowerPC
https://www.garlic.com/~lynn/99.html#146 Dispute about Internet's origins
https://www.garlic.com/~lynn/2000.html#49 IBM RT PC (was Re: What does AT stand for ?)
https://www.garlic.com/~lynn/2000.html#59 Multithreading underlies new development paradigm
https://www.garlic.com/~lynn/2000b.html#54 Multics dual-page-size scheme
https://www.garlic.com/~lynn/2000c.html#4 TF-1
https://www.garlic.com/~lynn/2000d.html#60 "all-out" vs less aggressive designs (was: Re: 36 to 32 bit transition)
https://www.garlic.com/~lynn/2000d.html#65 "all-out" vs less aggressive designs (was: Re: 36 to 32 bit transition)
https://www.garlic.com/~lynn/2000e.html#27 OCF, PC/SC and GOP
https://www.garlic.com/~lynn/2000f.html#13 Airspeed Semantics, was: not quite an sr-71, was: Re: jet in IBM ad?
https://www.garlic.com/~lynn/2000f.html#74 Metric System (was: case sensitivity in file names)
https://www.garlic.com/~lynn/2001.html#4 Sv: First video terminal?
https://www.garlic.com/~lynn/2001c.html#84 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001d.html#12 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001e.html#55 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001e.html#76 Stoopidest Hardware Repair Call?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Anybody remember the wonderful PC/IX operating system?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Anybody remember the wonderful PC/IX operating system?
Newsgroups: alt.folklore.computers
Date: Fri, 25 May 2001 00:39:08 GMT
Paul Nunnink writes:
Yes, a kind of blueish dark colored cardboard box with a three ring binder holding the diskettes in plastic jackets, three apiece. On the box there's a white colored vase with a rose in it. One of these days I'll see if I can get it scanned in. B.T.W. The official name is:

IBM Personal Computer Interactive Executive:

in a subtitle it says:

by INTERACTIVE systems corporation

<snip>

Wasn't the OS for the RT called OASIS, or something?


The IBM ACIS (academic unit) port of BSD was called AOS ... that was a "native" port to the bare metal. The Interactive port of AT&T was to the VRM "layer" and was called AIX.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Mysterious Prefixes

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mysterious Prefixes
Newsgroups: bit.listserv.ibm-main
Date: Fri, 25 May 2001 14:43:33 GMT
Kees.Vernooy@KLM.COM (Vernooy, C.P. - SPLXM) writes:
Except for HASP: it comes from JES's former name: HASP (Houston Automatic Spooling and Priority), given by the original designers in Houston (a company doing something with traveling to the moon and so).

IBM SEs at the account ... Simpson, Crabtree, et al

& From ... the yellow rose of texas

T'was a system down in Houston
In trouble, plain to see
Its hardware was not running
Could no one set it free?
And then they vowed to save it
Some men of Houston fame
The goal was versatility
And HASP the program name.

only slightly related:
https://www.garlic.com/~lynn/2001e.html#51

totally random refs:
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
https://www.garlic.com/~lynn/95.html#7 Who built the Internet? (was: Linux/AXP.. Reliable?)
https://www.garlic.com/~lynn/96.html#9 cics
https://www.garlic.com/~lynn/96.html#12 IBM song
https://www.garlic.com/~lynn/97.html#22 Pre S/360 IBM Operating Systems?
https://www.garlic.com/~lynn/97.html#28 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/98.html#9 Old Vintage Operating Systems
https://www.garlic.com/~lynn/98.html#21 Reviving the OS/360 thread (Questions about OS/360)
https://www.garlic.com/~lynn/98.html#29 Drive letters
https://www.garlic.com/~lynn/99.html#58 When did IBM go object only
https://www.garlic.com/~lynn/99.html#76 Mainframes at Universities
https://www.garlic.com/~lynn/99.html#77 Are mainframes relevant ??
https://www.garlic.com/~lynn/99.html#92 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/99.html#94 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/99.html#109 OS/360 names and error codes (was: Humorous and/or Interesting Opcodes)
https://www.garlic.com/~lynn/99.html#113 OS/360 names and error codes (was: Humorous and/or Interesting Opcodes)
https://www.garlic.com/~lynn/99.html#117 OS390 bundling and version numbers -Reply
https://www.garlic.com/~lynn/99.html#209 Core (word usage) was anti-equipment etc.
https://www.garlic.com/~lynn/2000.html#55 OS/360 JCL: The DD statement and DCBs
https://www.garlic.com/~lynn/2000.html#76 Mainframe operating systems
https://www.garlic.com/~lynn/2000c.html#10 IBM 1460
https://www.garlic.com/~lynn/2000c.html#18 IBM 1460
https://www.garlic.com/~lynn/2000c.html#20 IBM 1460
https://www.garlic.com/~lynn/2000d.html#11 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#36 Assembly language formatting on IBM systems
https://www.garlic.com/~lynn/2000d.html#44 Charging for time-share CPU time
https://www.garlic.com/~lynn/2000d.html#45 Charging for time-share CPU time
https://www.garlic.com/~lynn/2000e.html#14 internet preceeds Gore in office.
https://www.garlic.com/~lynn/2000f.html#58 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000f.html#68 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#71 HASP vs. "Straight OS," not vs. ASP
https://www.garlic.com/~lynn/2001b.html#50 IBM 705 computer manual
https://www.garlic.com/~lynn/2001b.html#73 7090 vs. 7094 etc.
https://www.garlic.com/~lynn/2001e.html#6 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001e.html#7 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001e.html#12 Blame it all on Microsoft

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Oldest program you've written, and still in use?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Oldest program you've written, and still in use?
Newsgroups: alt.folklore.computers
Date: Fri, 25 May 2001 19:44:56 GMT
"Andy van Tol" writes:
I wrote a real-time pulmonary function testing and reporting program in early 1979 (CP/M on Multibus), ported to PC-DOS in 1981, it's still doing all of the above. I really wouldn't mind if it "went away"....many times I thought it had, but...

I know others have similar experiences, so let's hear 'em... Who's the record-holder for oldest daily-running app?

Andy van Tol


Two days ago somebody forwarded me a performance problem that has been observed when a large number of (different) linux images run concurrently on the same mainframe.

It just so happened that last weekend I had stumbled across a detailed description of a fix to the problem that I had written 20 years ago ... which was forwarded to the interested parties (somebody in the chain of this particular peculiar set of circumstances made some observation about the loss of institutional memory). The particular documentation included a solution that I had developed over ten years prior to that (something like 32-33 years ago).

Different event in the early 70s ... I had contributed a significant amount of custom kernel modifications. Mine & other modifications were put together in a packaged kernel for internal use. However, a copy of the package was leaked to one or two outside corporations. One was AT&T longlines (someplace in NJ that started with a P that I never learned how to spell, and Kansas City). Ten years later, somebody from the marketing office responsible for longlines tracked me down (after a couple of position changes and a move to the opposite coast).

Turns out when nobody was looking the thing had proliferated and had been ported to various new generations of mainframes over the years ... but it wasn't going to be practical to port to the next latest & greatest mainframe generation ... and the salesmen really wanted to continue selling the latest hardware. In any case, they wanted to know if i could come out and spend some time with longlines ... effectively helping eradicate this weed that I was in large part responsible for letting take root 10 years earlier.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

some VLIW (IA-64) projections from January, 1999...

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: some VLIW (IA-64) projections from January, 1999...
Newsgroups: comp.arch
Date: Sat, 26 May 2001 00:37:03 GMT
Paul DeMone writes:
And Californians short of water should ask why the U.S. Army Corp of Engineers digs irrigation canals to subsidize millionaire farmers growing rice and cotton in this "freaking desert". (and then protectionist congressman have the nerve to make rude noises about trade reprisals against Canadian softwood lumber because they think our government's stumpage fees are too low and thus a subsidy)

i thot i read something that the growers north of the delta could get water at something like ten cents on the dollar. with a deal like that, maximizing the opportunity would be to grow some water intensive crop like rice. I believe that article said that the rice, water intensive business didn't exist 20 years ago ... but with the deal being offered it was too good to pass up.

This is somewhat like the line ... that if the value of a gal of gas is on the order of $20 ... but the price is only in the range of a dollar or two ... that a lot of people might be able to take advantage of the difference between the price and the value to carve out a comfortable life style (however, if they really fine-tune the optimization of difference between the price and the value ... and the gap narrows ... then they are likely to experience some discomfort).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Emulation (was Re: Object code (was: Source code - couldn't resist compiling it :-))

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Emulation (was Re: Object code (was: Source code - couldn't resist compiling it :-))
Newsgroups: alt.folklore.computers
Date: Thu, 24 May 2001 02:53:35 GMT
Jim Thomas writes:
When did this use of "simulation" start? Before computers there was "modeling". Was stuffing a model into a wind tunnel and watching called "simulation"? Was setting up a model on an analog computer called "simulation"?

I did use a program ("software emulator"?) from IBM to simulate a 650 on a 1410 in 1967, and I think it was at least 2 years old then.

[ObM$jab: Does this mean you cringe every time you hear M$ Windoze called an "operating system"? :]

Jim


university had an accounting job that originally ran on a 407. It then went thru some sort of 1401 autocoder(?) phase that had an emulation of the 407 plug-board ... and they were able to run the program there. They then got a 360 and some program that translated from autocoder(?) to 360 cobol. The interesting thing was the program still spit out on the printer the 407 sense switch settings. This was an administrative application that ran production every day.

One day ... the print-out had some ending 407 values that nobody had seen before. After some amount of consultation, and not finding anybody that had the faintest idea what it all meant, the decision was to run the job again and see if it did the same. The 2nd time it ran (it was nearly an hr each time) the same results came out and they decided ... oh well ... we'll just forward the output and see if anybody complains.

somewhat unrelated ... the 407 was still around in student keypunch room with the plugboard set up for simple 80x80 print-out (i.e. students could stick their cards into the 407 and get a printed listing). As far as i knew, nobody was still around that was ever involved in any of the original 407 applications &/or knew how to program the plug-board.

random ref:
https://www.garlic.com/~lynn/99.html#137

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Oldest program you've written, and still in use?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Oldest program you've written, and still in use?
Newsgroups: alt.folklore.computers
Date: Sat, 26 May 2001 14:29:59 GMT
jmfbahciv writes:
Only gods can find the fix before the problem gets reported ;-).

possibly more a sign of somewhat limited intelligence to still be involved in fixing bugs for 35 years. if you are around long enuf you get to see the same bugs again and again (and maybe again and again and again ...).

Similar to the issue of institutional memory ... the joke about computer science having a complete mind wipe every five years ... so they get to re-invent everything over and over ... including the same bugs.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Oldest program you've written, and still in use?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Oldest program you've written, and still in use?
Newsgroups: alt.folklore.computers
Date: Sat, 26 May 2001 18:55:03 GMT
jcmorris@jmorris-pc.MITRE.ORG (Joe Morris) writes:
That's twice in the past week that a posting has mentioned automated translation of Autocoder to COBOL...and although I used an IBM-provided conversion program to do that back in the late 1960s I'm coming up with a blank when I try to recall its name.

Part of the problem is that my mind keeps popping up the name "SIFT", but that's not it. SIFT (SHARE Internal (?) FORTRAN Translator) was used to translate between FORTRAN II and FORTRAN IV.

Can anyone supply the name of the Autocoder-to-COBOL translator?

Joe Morris


I never used it ... but a little searching w/altavista turned up:
32. IBM. 1400 Autocoder to COBOL Conversion Aid Program. (360 A-SE-19x), Version 2 Application Description Manual, (GH29-1352-2), White Plains, N.Y. IBM 1967.

Autocoder to Cobol Conversion Aid Program, 1967

Housel reported on a set of commercial decompilers developed by IBM to translate Autocoder programs, which were business data processing oriented, to Cobol. The translation was a one-to-one mapping and therefore manual optimization was required. The size of the final programs occupied 2.1 times the core storage of the original program [Hous73].

This decompiler is really a translation tool from one language to another. No attempt is made to analyze the program and reduce the number of instructions generated. Inefficient code was produced in general.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Theo Alkema

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Theo Alkema
Newsgroups: alt.folklore.computers
Date: Sun, 27 May 2001 16:20:26 GMT
"Jim Mehl" writes:
I recently heard that Theo Alkema died last fall some time. For those of you with an IBM VNET background, he would probably be familiar.

Jim Mehl


9/17/2000 RIP.

My dealings with Theo Alkema and Bert Wijnen date back to (at least) when they were supporting the Uithoorn HONE system in Europe. Bert is still going strong at Lucent (people active in IETF meetings will be familiar with him, co-AD for OPS/Network Management).

Theo was also the author of IOS3270, FULIST, and BROWSE.

More people may be aware of the PC port of the above (done at IBM SJR and made available thru one of the IBM software productivity offerings).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Theo Alkema

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Theo Alkema
Newsgroups: alt.folklore.computers
Date: Sun, 27 May 2001 17:03:07 GMT
"Jim Mehl" writes:
I recently heard that Theo Alkema died last fall some time. For those of you with an IBM VNET background, he would probably be familiar.

Jim Mehl


some random stuff from someplace ...

In CP/67, there was special support for "named systems" as part of the simulation of the IPL command. Basically, memory images of virtual storage could be saved to a special CP location, and a virtual memory could be refreshed to the saved image by issuing the IPL command with the name of the saved system as a parameter.

Besides specifying which virtual pages were saved, it was also possible to specify that specific virtual pages were to be shared (R/O) across all virtual address spaces loading the same named system. For CMS this was a half dozen or so virtual memory pages (a performance benefit, since virtual address spaces didn't require private copies of these pages).

The transition from CP/67 to VM/370 changed the implementation so that sharing was done on a (64kbyte) shared segment basis. CMS for VM/370 re-organized its internal structure to increase the number of shared (4kbyte) pages to 16. Since these were R/O, code located in shared (R/O) memory was precluded from being the target of store instructions. Traditionally such code had been referred to as a "reentrant" program.

In the late CP/67 time-frame, for internal corporate use, I had done a "paging access method" for the CMS filesystem as well as introduced an additional method of loading virtual storage images (besides the simulated IPL command, which had several unwanted side-effects, like resetting all of virtual memory, precluding multiple, different, concurrent "named" areas in the same virtual memory). The target of this new method could be either a set of pages in a CMS PAM filesystem or an existing CP "named system" area. In addition, I reworked some additional CMS system functions so that they could reside in additional CMS system "virtual memory" (initially a second 64kbyte shared segment, in addition to the standard, single CMS system 64kbyte shared segment).

This was widely deployed internally inside the corporation on a Release 2 VM/370 base. It was used extensively by all the world-wide deployed HONE system for (at least) the APL interpreter running under CMS, i.e. CMS w/shared segments could be "IPLed" and then the user could invoke the APL interpreter which would be loaded with 4-5 shared segments. This allowed a HONE application to (transparently) switch back and forth between compute intensive Fortran applications and the APL interpreter environment.

A subset of the CP function (only the new method for "named system" loading, but not any of the paged-mapped filesystem) and some amount of the CMS function (i.e. only the system function rewrite making it re-entrant and residing in an additional shared segment) was picked up by the product group and released with VM/370 Release 3.

In a typical CMS environment, IOS3270, FULIST, and BROWSE would be loaded dynamically into standard virtual storage. As the following email indicates, I worked with Theo to modify IOS3270, FULIST, and BROWSE to be re-entrant so that they could be included in a CMS shared segment (i.e. instead of a private copy of the code appearing in every CMS virtual address space, a single, common copy was shared across all CMS virtual address spaces).
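The re-entrancy constraint can be sketched in miniature. The following is a hypothetical Python analogy (none of these names come from CMS or the actual programs): code destined for a shared R/O segment must keep all mutable state in caller-supplied storage, since every address space executes the same single copy.

```python
# Hypothetical sketch (Python analogy, not actual CMS code): why code must be
# re-entrant before it can live in a shared read-only segment.  A routine that
# stores into its own static data cannot safely serve many "users" at once,
# because all of them share the same single copy of that data.

_static_buffer = []                 # writable "static" state: non-reentrant

def format_nonreentrant(items):
    _static_buffer.clear()          # every caller stomps the same storage
    for i in items:
        _static_buffer.append(str(i).upper())
    return _static_buffer           # all callers end up aliasing one buffer

def format_reentrant(items, out=None):
    out = [] if out is None else out    # caller-supplied (per-user) storage
    for i in items:
        out.append(str(i).upper())
    return out

a = format_nonreentrant(["cms"])
b = format_nonreentrant(["vm"])     # second "user" clobbers the first: a is b
x = format_reentrant(["cms"])
y = format_reentrant(["vm"])        # independent per-caller results
```

A routine like `format_reentrant`, which never stores into its own code or static data, is the analogue of a program that can safely live in a shared (R/O) segment.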

Date: 10/10/78 18:51:49
To: wheeler

Lynn,

Once upon a time FULIST was reentrant, but as i needed space and didn't think it would ever run in shared memory i took it out again. I am currently rewriting the thing to make it understandable for others (and for myself i must admit), to fix, or at least circumvent, the decimal data exception you get if you don't stick to the rules, to change the sort algorithm to speed things up, and last but not least to include NDS support.

The state it is in now will leave you no room to work on as it is just FULL. Nevertheless i will ship it to you so you can see what a mess it is. Let me hear what you're doing to it. It is in the process of becoming an FDP/IUP, as well as IOS3270 and possibly BROWSE. Won't release it though until it is reworked.

Regards-Theo Alkema-HONE System Support-Uithoorn-Netherlands


... snip ...

Date: 10/11/78 17:30:11
To: wheeler

NDS support for IOS3270 is already done (not tested as i don't have them) Will run on REL5.LTR7 and up (i hope).

FULIST2 will take some time as i am very busy working on a security system. If you are running the required release i can ship you a copy to play around with (IOS3270)

Regards-Theo Alkema-HONE System Support-Uithoorn-Netherlands


... snip ...

HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
CMS "PAM" (paged-mapped) filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

5-player Spacewar?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 5-player Spacewar?
Newsgroups: alt.folklore.computers,rec.games.video.classic
Date: Sun, 27 May 2001 21:36:11 GMT
Kirk Is writes:
classicgaming.com selected Spacewar! as their game of the week, so it revived my interest in it. (You can see my blog entry at
http://kisrael.com/viewblog.cgi?date=2001.05.26 )

Anyway, one of the most interesting new things my tiny bit of research discovered was this article that the wheels.org site posted, from Rolling Stone: http://www.wheels.org/spacewar/stone/rolling_stone.html

It mentions a 5 player variation on Spacewar, presumably with 5 distinct ships-- I assume it's those five shipforms that map to the names "Pointy Fins", "Roundback", "Birdie", "Funny Fins", and "Flatback"


summer of 1980, the author of REXX wrote/released (for the internal network) a multi-player, distributed (network) space war game played on 327x terminals (players could be logged into the same machine or different machines around the network).

One of the first "bug-fixes" to the game was an energy penalty inversely proportional to the time interval between commands, after somebody wrote an automated program to play the game.
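A minimal sketch of such a fix (the post doesn't give the actual formula, so the k/dt form and all names here are assumptions): charging a penalty inversely proportional to the interval since the previous command makes robot-speed command streams ruinously expensive while barely affecting human-speed play.

```python
# Assumed form of the anti-robot fix (not the original game's code): energy
# cost grows as the inter-command interval shrinks, so a program hammering
# out commands pays far more energy per command than a human player.

def command_energy_cost(base_cost, dt_seconds, k=1.0):
    """Energy charged for a command issued dt_seconds after the last one."""
    return base_cost + k / max(dt_seconds, 1e-6)   # clamp avoids div-by-zero

human = command_energy_cost(1.0, 0.5)    # ~2 commands/sec  -> cost 3.0
robot = command_energy_cost(1.0, 0.01)   # 100 commands/sec -> cost 101.0
```

At human command rates the penalty term is negligible; at automated rates it dominates, draining the robot player's energy.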

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Climate, US, Japan & supers query

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Climate, US, Japan & supers query
Newsgroups: comp.sys.super,comp.arch,alt.folklore.computers
Date: 27 May 2001 16:07:28 -0600
mccalpin@gmp246.austin.ibm.com (McCalpin) writes:
There are other specific cases. Convex had a very solid business- oriented philosophy, and they probably could have stayed independent if they had not botched the transition to RISC machines. The Convex SPP series of boxes suffered from excessive latency, and so were unable to deliver on the bandwidth and overall performance that the infrastructure appeared to be set up to deliver. I am certainly not claiming that these machines were a total failure, since HP shipped several hundred million dollars worth of SPP follow-ons last year, but since the acquisition of Convex by HP, they are clearly not a "supercomputing" company any more.

Note in the following posting "HARRIER" is the internal code name for 9333, which eventually turned into the SSA standard (random ref:
https://www.garlic.com/~lynn/95.html#13)

Date: Tue, 30 Jun 92 15:39:48 -0700
From: wheeler
Newsgroup: SCI

The sci meeting at slac today included people from slac, two from hp, one from apple, ibm branch rep, somebody from IBM Houston, my wife Anne, and I.

The ibm branch rep turns out to really be an ibm "business" partner that ibm has turned the slac account over to. this guy also mentioned that he has the nasa/ames account.

Gustavson (SLAC & IEEE SCI committee chairman) gave introduction to SCI and talked about possible application. He had reprints of the IEEE Micro article "The Scalable Coherent Interface and Related Standards Projects".

As an aside, I was recently reviewing cache coherency papers in 19th proceedings of sigarch ... & had ran across the sci ring paper ... and brought it along. Nobody at the meeting had been aware that it was published.

Also it turns out that the ring architecture model (Figure C in the feb. 92 IEEE Micro article) is almost identical to the ring insertion patent that Anne received in '78. Also the dual simplex architecture is the same as the HSDT work we were doing in the early '80s (also see the "well-worn" HA/6000 technology).

The person that was supposed to be there from Convex didn't make it. It was re-iterated that Convex has signed an agreement with HP to use the PA-RISC chips for its new "supercomputer" ... and it will be implemented using SCI for distributed shared memory. The architecture assumes some sort of relaxed consistency cache protocol (for recent references also see sigarch proceedings #19, there are three papers in session 1, also see the DASH prototype paper from session 3).

Gustavson outlined two possible design points for SCI, one using "low-cost" rings for workstation type environments and the other with a switch for highly-parallel supercomputers.

There was some discussion with regard to how RAM/SCI implementation compares to technology like RAMBUS. He mentioned talking to somebody (that I believe is doing a RAMBUS implementation) that suggested RAM/SCI access is still a good technology to pursue. RAMBUS is pretty well optimized to the limit supporting just 500mbytes/sec (say with 4-way interleaving: 2gbytes/sec). RAM/SCI starts out at 1gbyte/sec and has room to grow.

There was also mention that SCI was recently presented to the SCSI standards committee and a SCSI protocol using 200mbit (maybe 100mbit) SCI cable looks very promising. This appears to be along the same lines as the HARRIER-II serial implementation running 80mbits (pushing to 160?). One of the comments was that as the SCSI drives get smaller, the current SCSI connector is larger than the drive. SCI connector is significantly smaller and provides for higher bandwidth.

There was a presentation on SLAC's computational and data-storage requirements over the next 3-5 years ... and how SCI could begin to efficiently address some of the opportunities. They are planning on experiments that are monitored by some front-end real-time data-reduction machines. These machines will produce an average of approximately 100 "events"/sec at about 25kbytes/event (2.5mbytes/sec). These events then require subsequent processing to the tune of approximately 2500 MIPS. This additional processing will eventually result in adding approximately 10kbytes/event (35kbytes/event total). Effectively 2.5mbytes/sec input, 2500 MIPS of processing, 3.5mbytes/sec output. Aggregate yearly storage requirements are on the order of 15TB/year.
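The arithmetic behind those figures can be checked with a quick sketch (the "live" fraction at the end is my inference from the stated numbers, not something from the presentation; decimal kbytes/mbytes for simplicity):

```python
# Back-of-the-envelope check of the SLAC figures quoted above.
EVENTS_PER_SEC = 100
RAW_KB_PER_EVENT = 25        # produced by the front-end machines
ADDED_KB_PER_EVENT = 10      # added by the subsequent processing

raw_rate_mb = EVENTS_PER_SEC * RAW_KB_PER_EVENT / 1000
out_rate_mb = EVENTS_PER_SEC * (RAW_KB_PER_EVENT + ADDED_KB_PER_EVENT) / 1000

# storing 15TB/year at 3.5mbytes/sec implies the experiment isn't
# taking data continuously; the implied "live" fraction of the year:
SECONDS_PER_YEAR = 365 * 24 * 3600
live_fraction = (15e12 / (out_rate_mb * 1e6)) / SECONDS_PER_YEAR

print(raw_rate_mb, out_rate_mb)     # 2.5 mbytes/sec in, 3.5 out
print(round(live_fraction, 2))      # roughly 0.14
```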

Looks like SLAC is looking for government funding along with a partner from industry ... possibly something along the lines of the Kung/CMU/NSC project ... but directed at exploring high, sustained effective data rates for a distributed environment using the distributed shared memory paradigm.


... snip ... top of post, old email index, HSDT email

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

5-player Spacewar?

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 5-player Spacewar?
Newsgroups: alt.folklore.computers,rec.games.video.classic
Date: Mon, 28 May 2001 04:15:53 GMT
Kirk Is writes:
time interval between commands? You mean the "autopilot" would tend to 'micromanage'? Or react too quickly somehow? Anyway, I don't get what this patch prevented, or why a clever counter-patch couldn't be issued to re-enable some level of cheating without that penalty.

an automated program would issue commands significantly faster than a human could/would and therefore defeat everybody.

individual players didn't have direct control of the game code ... they only interfaced to it thru commands (just the user interface). some enterprising person wrote a program that simulated the user interface ... but issued commands and reacted significantly faster than a human would.

basically, the patch didn't prevent "robot" players operating at super-human speed ... they were just penalized as to the amount of energy used per operation (somewhat of an attempt to place "robot" players on a level playing field with "human" players).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

5-player Spacewar?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 5-player Spacewar?
Newsgroups: alt.folklore.computers,rec.games.video.classic
Date: Mon, 28 May 2001 04:32:45 GMT
Kirk Is writes:
Ahh, neat. X-tank and the like followed I'm sure. Way too late for what the Rolling Stone article is talking about, but still neat to hear. I fear a real chronology of Spacewar! may not be possible now.

slightly earlier ... the PDP1(?) version was ported to 1130/2250-4 (2 player) sometime '68/'69 at cambridge science center (545 tech. sq).

After I joined CSC early in 1970, I remember bringing my kids in on weekends and letting them play it

random refs:
https://www.garlic.com/~lynn/97.html#2 IBM 1130 (was Re: IBM 7090--used for business or science?)
https://www.garlic.com/~lynn/2000b.html#67 oddly portable machines
https://www.garlic.com/~lynn/2000g.html#24 A question for you old guys -- IBM 1130 information
https://www.garlic.com/~lynn/2001b.html#71 Z/90, S/390, 370/ESA (slightly off topic)

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

5-player Spacewar?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 5-player Spacewar?
Newsgroups: alt.folklore.computers,rec.games.video.classic
Date: Mon, 28 May 2001 16:46:10 GMT
Kirk Is writes:
Still, it sounded like you were saying that a robot was flushed out by having a short 'time interval between commands'-- i.e. the smaller the time interval, the bigger the energy penalty.

Does that mean time interval between subsequent commands issued by the player (i.e. the 'bot' tended to micromanage the direction and thrust of the ship) or the time interval between some stimulus and the player's response?

In both cases it seems like, once you have the basic "how do I make a good spacewar playing program" problem solved, you could tweak its algorithm to not be penalized by the anti-bot code.


since the game didn't really know whether it was a human or 'bot (in the game) ... energy required to execute a command (movement, attack, firing, etc) was a set value unless the interval between two successive commands was less than a threshold (lower than most real humans could reasonably be expected to accomplish) ... then the energy consumed for the subsequent command started to increase inversely proportional to the interval between the commands.

yes, 'bots were modified to take into account the threshold when playing the game (i'm not sure how sophisticated they got ... whether they just stayed right at the threshold ... or had some strategy to execute under the threshold under particular conditions).
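the penalty scheme described above might be sketched like this (a guess at the shape only, not the actual game code; the constants are made-up):

```python
# Sketch of the anti-'bot energy penalty described above: commands
# cost a fixed amount unless they arrive faster than a human
# plausibly could, in which case the cost grows inversely with the
# inter-command interval.  Constants are illustrative, not the
# game's actual values.
BASE_ENERGY = 10.0
THRESHOLD = 0.25    # seconds; faster than most real humans manage

def command_energy(interval):
    """Energy charged for a command issued `interval` seconds after
    the previous one."""
    if interval >= THRESHOLD:
        return BASE_ENERGY
    # below the threshold, cost rises inversely with the interval
    return BASE_ENERGY * (THRESHOLD / interval)

print(command_energy(1.0))     # human-speed: base cost
print(command_energy(0.05))    # 'bot-speed: several times the base
```

a 'bot that "stayed right at the threshold" would simply never call for an interval below 0.25 in this sketch, paying only the base cost while still out-reacting any human.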

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Medical data confidentiality on network comms

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Medical data confidentiality on network comms
Newsgroups: comp.security.misc,sci.crypt
Date: Mon, 28 May 2001 17:01:19 GMT
Kilgallen@eisner.decus.org.nospam (Larry Kilgallen) writes:
But some of them are susceptible to cryptographic controls. Consider the issue of delegation. My doctor can see my medical records. My doctor should be able to delegate the ability to see those records to a specialist for a limited amount of time, but without delegating unlimited rights to further delegation. Some number of emergency room doctors should be able to unseal my records in the absence of my doctor if they all agree and the access is strongly audited (alarmed) with guaranteed notification to my doctor and me. These are all issues where there might be some cryptographic assistance as part of the total solution.

cryptographic controls tend to be all or nothing ... you either see it or you don't see it.

fine-grain access control systems with audit procedures can have real-time rules and an audit trail as to which entities can see what, when. however, for the most part, cryptography is almost orthogonal to fine-grain access control ... except possibly in the area of authentication (used in conjunction with access control ... aka authentication and permissions being different issues ... authentication can be addressed as a "data" paradigm and real-time permissions addressed as a procedure/rule paradigm).

effectively the fine-grain access control system would be "online" with all the real-time rules, exceptions, escalation, permissions, etc.

bulk-encrypting all of the data and only providing the key(s) to the access control system could be a means to address various kinds of system exploits (like off-site disaster/recovery copies).
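the delegation and emergency-unseal rules from the quoted post would live in that online access control system; a minimal sketch (all names, rule shapes, and the quorum size are illustrative assumptions, not anything from an actual system):

```python
# Sketch of fine-grain access control as described above: the
# access-control system is online with the real-time rules and is
# the only holder of the bulk-encryption key(s) -- data is released
# only after a rule check, and every decision is audited.
import time

audit_log = []       # "strongly audited": every decision recorded
delegations = {}     # (doctor, specialist) -> expiry timestamp

def delegate(doctor, specialist, duration_secs):
    """Time-limited delegation, with no right to further delegate."""
    delegations[(doctor, specialist)] = time.time() + duration_secs

def may_read(requester, patient, primary_doctor, er_quorum=()):
    now = time.time()
    ok = (requester == primary_doctor
          or delegations.get((primary_doctor, requester), 0) > now)
    if not ok and requester in er_quorum and len(er_quorum) >= 2:
        ok = True    # emergency unseal by agreement; doctor notified
        audit_log.append(("NOTIFY", primary_doctor, patient, now))
    audit_log.append(("ACCESS", requester, patient, ok, now))
    return ok        # only then would the decryption key be released

delegate("dr_jones", "dr_smith", 3600)
print(may_read("dr_smith", "alice", "dr_jones"))    # True (delegated)
print(may_read("dr_brown", "alice", "dr_jones"))    # False
```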

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Wanted other CPU's

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Wanted other CPU's
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 29 May 2001 09:50:10 GMT
torbenm@diku.dk (Torben AEgidius Mogensen) writes:
- Several LISP processors were designed. However, most of these didn't run LISP directly but had mostly traditional ISA's with extra instructions for supporting LISP.

Additionally, many research prototypes or designs have been made for various virtual machines, including graph-reduction machines.

Torben Mogensen (torbenm@diku.dk)


misc. other ... many of the ibm 360 "microcoded" engines had special microcode that emulated the previous generation 7090/140x machines.

the 360 model 50 had optional special microcode supporting PLI for CPS ... an online, PLI-based interactive system that ran on the 360/50.

there was special microcode developed for 370 145/148 that supported APL.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Accounting systems ... still in use? (Do we still share?)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Accounting systems ... still in use? (Do we still share?)
Newsgroups: alt.folklore.computers
Date: Tue, 29 May 2001 12:31:14 GMT
jmfbahciv writes:
No. That's not the reason CPU time isn't charged. The reason is reproducibility of charges. CPU time is a very difficult thingie to keep consistent for a job run (I think since VM addressing began to be used). An audit required cross-charges to be exactly the same for a run at different times. On a timesharing system, runtime was very hard to keep track of on behalf of each user.

/BAH


vm/370 did a fairly good job with the 370 high resolution timer keeping track of cpu used (both user-mode and kernel-mode). the problem for reproducibility on large-cache multi-tasking machines (timesharing or batch) was the effect of concurrent interrupts (or other task-switching events) on cache misses. A user's job could see a 30-40 percent difference in cpu time between running while the machine had little or no concurrent i/o and a high rate of concurrent i/o.

Today's generation of mainframes gets even more interesting with potentially two levels of VM ... one in the microcode providing "hardware" LPARs (logical partitions) and then possibly VM running in the LPARs providing software virtual machines.

My observation is that most unix and similar hardware platforms lacked high resolution timers ... as a result the traditional accounting method was to sample, ten to a hundred times per second, what was running and charge it for the elapsed time.
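the sampling approach amounts to something like the following sketch: a periodic tick attributes the whole interval to whoever happens to be running, so short-lived work between ticks is mis-charged or never charged (the tick rate and task names are illustrative):

```python
# Sketch of sample-based cpu accounting: a clock tick fires (say)
# 100 times/sec and the whole tick interval is charged to whatever
# happens to be running at that instant -- cheap, but work that
# starts and ends between ticks is never charged at all.
TICK = 0.01          # 100 samples/sec
charges = {}

def on_tick(current_task):
    charges[current_task] = charges.get(current_task, 0.0) + TICK

# simulated run: task A holds the cpu for 3 ticks, B for 1
for running in ["A", "A", "A", "B"]:
    on_tick(running)

print(charges)       # A charged ~0.03 sec, B ~0.01 sec
```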

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Accounting systems ... still in use? (Do we still share?)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Accounting systems ... still in use? (Do we still share?)
Newsgroups: alt.folklore.computers
Date: Thu, 31 May 2001 04:47:13 GMT
jmfbahciv writes:
Sure. We considered this. You can do all kinds of kinky things. However, do you want to expend 100% of your CPU keeping track of what your users are doing or do you want to furnish CPU time to your users? Our philosophy was to furnish as much time as possible to the user. But we did provide hooks if the customer really, really thought that every little itty bitty thing had to be tracked.

because of the design of the timers for this purpose on 370, it only took two instructions per switch (user->kernel, kernel->user); well under 1% of the nominal pathlength associated with whatever function/feature caused the switch to occur.
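the scheme amounts to capturing the running high-resolution clock at each mode switch and accumulating the delta; a sketch of the idea (not 370 code, obviously, and `perf_counter` stands in for the hardware timer):

```python
# Sketch of interval-timer accounting as described above: at each
# user<->kernel switch, read the running high-resolution clock and
# add the elapsed delta to whichever side was executing -- roughly
# two timer operations per switch, regardless of workload.
import time

class CpuAccounting:
    def __init__(self):
        self.totals = {"user": 0.0, "kernel": 0.0}
        self.mode = "user"
        self.last = time.perf_counter()

    def switch(self, new_mode):
        now = time.perf_counter()                  # read the timer ...
        self.totals[self.mode] += now - self.last  # ... accumulate delta
        self.last = now
        self.mode = new_mode

acct = CpuAccounting()
acct.switch("kernel")    # user -> kernel: charge the user interval
acct.switch("user")      # kernel -> user: charge the kernel interval
print(acct.totals)
```

unlike the sampling approach, the cost here is per switch rather than per tick, and every interval is attributed exactly once.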

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

offtopic: texas tea (was: re: vliw)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: offtopic: texas tea (was: re: vliw)
Newsgroups: alt.folklore.computers
Date: Wed, 30 May 2001 01:22:15 GMT
hawk@fac13.ds.psu.edu (Prof. Richard E. Hawkins) writes:
Last year's problem in California related to two of the three refineries making a certain blend having fires (against a backdrop of sheer stupidity by the government in setting the formulation, but that's another story--there's lots of those about the california government :). If your short term supply is reduced from 3M barrels/day to 1M, while the same number of drivers remain on the road, you have two choices: 1) price goes through the roof 2) shortage.

and the choice was not all that different from what a lot of internet IPOs selected.

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

VM-CMS emulator

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: VM-CMS emulator
Newsgroups: alt.folklore.computers
Date: Thu, 31 May 2001 13:41:08 GMT
"Andrew McLaren" writes:
The company I was working for at that time wanted Unix and big iron, not an easy combination in 1990. Hence our brief excursion into AIX/370. Imagine a cross between TSO and the Unix shell ... terminal output didn't scroll past, you had to hit <enter> every 24 lines to see the rest of your output. Block mode terminals. Sort of a compulsory 'more' on every command ;-) TCP/IP was unusable; and great swathes of standard Unix APIs were missing. Everything seemed to be in EBCDIC. I was very surprised to find, some years later, that AIX on RS/6000 had actually become a very good Unix implementation.

the first excursion they did for unix on a mainframe was the adoption of AT&T unix as a subsystem on TSS/370, which saw large deployment inside AT&T.

The next was going to be BSD ported to the 370 ... but the group got diverted, before the product was delivered, to doing a BSD port to the PC/RT (which became AOS ... an alternative system to the Interactive port of AT&T unix to the PC/RT that was called AIX).

In some sense ... the (UCLA) Locus port to mainframe (along with the port to PS/2) ... resulting in AIX/370 and AIX/PS2 was to show integration of the mainframe/PC world (client/server?) since Locus provided quite a bit of support for location transparency (file caching as well as process migration, multiple networked machine process operation, etc).

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Theo Alkema

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Theo Alkema
Newsgroups: alt.folklore.computers
Date: Thu, 31 May 2001 04:37:29 GMT
"John Lynn" writes:
I remember Theo from the good ol' VM Internal Technical Exchange (VMITE) in California every year for internal IBM VM folks. I knew who he was and what he had done, having even spent many hours poring over the BROWSE source, trying to learn things as a young pup. I remember standing near where Theo was talking, trying to act casual and catch a bit of the conversations of the masters...

Didn't Bert also have some sort of amazing DASD-related tool he had written? I can't quite remember what it did... darn!


as part of single system image in a large (cluster/loosely-coupled) processor complex, he defined a CKD CCW sequence for effectively doing a compare&swap operation as part of serializing disk operations w/o having to do reserve/release.

This was used initially for the major, large clustered HONE operations around the world (at least initially Uithoorn and Palo Alto) supporting "single system image".

random refs:
https://www.garlic.com/~lynn/subtopic.html#hone
https://www.garlic.com/~lynn/2001e.html#73

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Early AIX including AIX/370

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Early AIX including AIX/370
Newsgroups: alt.folklore.computers
Date: Fri, 01 Jun 2001 13:01:28 GMT
Lars Poulsen writes:
I forwarded this piece to one of my coworkers who worked at Locus during that time, to ask for his comments. He did not want his name brought forward, but this is what he said:

There's a lot of stuff they don't know about. What they don't know about is the pissing contest between the two divisions of IBM; the Palo Alto (BSD) and Austin (SVID) groups. Then, ultimately, the decision by Chm. Akers to only have one Unix product; Austin won, but Palo Alto wasn't done throwing wrenches into the machinery. Then Yorktown Heights and Boeblingen (sp?) got into the fracas.


note that ykt was involved early because of 801, cpr, PL.8, etc. The Austin project originally started out as a joint ykt/austin closed romp/801 effort as a displaywriter follow-on in the office products division using ykt cpr (for 801/romp & written in pl.8). when that project got canceled, the resources were retargeted to "unix" ... still using romp/801, with the ykt/aus resources going into building the "vrm" (written in pl.8) ... basically managing the metal ... and interactive doing the svid port to a vrm abstraction layer. I was in some of the early VRM meetings.

part of the tss/370 group supporting the AT&T unix activity were in germany and working on making it a generalized product.

starting the pa/370/bsd effort, they tapped a guy out of the stl/apl group to go to palo alto to manage the project. I got called in the first week he showed up to participate in the effort. at that time, the palo alto group already had an ongoing project with UCLA and had locus running on the S/1, some 68k machines, and PCs.

one might be tempted to characterize the ykt/aus effort as putting a proprietary stamp on some product offering (which at that moment happened to have some unix content) ... while the other efforts were much more oriented towards offering ("some" standard) unix on a company hardware platform.

It really got interesting when you took all the above (aus, locus, ucla, pa, bsd, ykt, etc) and then included various CMU (mach, afs) in the same room working on a "converged" distributed/network file system.

random other refs:
https://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party
https://www.garlic.com/~lynn/99.html#2 IBM S/360
https://www.garlic.com/~lynn/99.html#36 why is there an "@" key?
https://www.garlic.com/~lynn/99.html#63 System/1 ?
https://www.garlic.com/~lynn/99.html#64 Old naked woman ASCII art
https://www.garlic.com/~lynn/99.html#65 Old naked woman ASCII art
https://www.garlic.com/~lynn/99.html#66 System/1 ?
https://www.garlic.com/~lynn/99.html#129 High Performance PowerPC
https://www.garlic.com/~lynn/2000.html#49 IBM RT PC (was Re: What does AT stand for ?)
https://www.garlic.com/~lynn/2000.html#51 APPC vs TCP/IP
https://www.garlic.com/~lynn/2000.html#64 distributed locking patents
https://www.garlic.com/~lynn/2000b.html#5 "Mainframe" Usage
https://www.garlic.com/~lynn/2000c.html#8 IBM Linux
https://www.garlic.com/~lynn/2000d.html#65 "all-out" vs less aggressive designs (was: Re: 36 to 32 bit transition)
https://www.garlic.com/~lynn/2000e.html#19 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#27 OCF, PC/SC and GOP
https://www.garlic.com/~lynn/2001.html#44 Options for Delivering Mainframe Reports to Outside Organizat ions
https://www.garlic.com/~lynn/2001.html#49 Options for Delivering Mainframe Reports to Outside Organizat ions
https://www.garlic.com/~lynn/2001b.html#14 IBM's announcement on RVAs
https://www.garlic.com/~lynn/2001f.html#0 Anybody remember the wonderful PC/IX operating system?
https://www.garlic.com/~lynn/2001f.html#1 Anybody remember the wonderful PC/IX operating system?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

MERT Operating System & Microkernels

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: MERT Operating System & Microkernels
Newsgroups: alt.folklore.computers
Date: Fri, 01 Jun 2001 21:22:20 GMT
crosby@nagina.cs.colorado.edu (Matthew Crosby) writes:
Anyway, there is a paper on the MERT Operating System in the 78 one which was interesting reading. I'd never heard of MERT before. It looks like a micro-kernelish RT OS that can run Unix as a server on top, which is interesting--foreshadowing Mach and the like.

Was this just a Bell Labs internal research thing? Am I wrong in characterising it as having micro-kernel characteristics? (And what would be the earliest micro kernel anyway?) Is there somewhere I can see more information on this?


one would be tempted to claim that cp/67 was one of the original micro-kernels that allowed other stuff to be run "on top". the current incarnation as vm???? being able to run 40,000+ some odd copies of Linux is hardly a micro-kernel anymore. however, there is the flavor that morphed into the microcode of the current machines providing the LPAR support ... aka a large number of current mainframes run their operating systems in LPARs ... one additional level removed from the "real" hardware.

note also ... unix running on a tss/370 kernel saw large deployment inside at&t and there have been numerous instances of unix (from a number of different vendors) deployed on various VM-based platforms over the years.

other microkernel candidates would be pieces of RSCS/VNET, which managed networking for cp/67 & VM/370 (and the internal network). I remember hearing somebody claim (sometime within the past 10 years or so) that in one of the current popular real-time systems, at least one of the core components (written in C) reads line-for-line the same as one of the core components from RSCS (written in 360 assembler), except for the differences in the language ... the logic is the same and the comments track statement for statement, down to the same misspellings.

https://www.garlic.com/~lynn/98.html#45 Why can't more CPUs virtualize themselves?
https://www.garlic.com/~lynn/98.html#57 Reliability and SMPs
https://www.garlic.com/~lynn/99.html#191 Merced Processor Support at it again . . .
https://www.garlic.com/~lynn/2000.html#8 Computer of the century
https://www.garlic.com/~lynn/2000.html#63 Mainframe operating systems
https://www.garlic.com/~lynn/2000.html#86 Ux's good points.
https://www.garlic.com/~lynn/2000b.html#43 Migrating pages from a paging device (was Re: removal of paging device)
https://www.garlic.com/~lynn/2000b.html#50 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#51 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#52 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#62 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000c.html#8 IBM Linux
https://www.garlic.com/~lynn/2000c.html#50 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#68 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#76 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000d.html#72 When the Internet went private
https://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#3 virtualizable 360, was TSS ancient history
https://www.garlic.com/~lynn/2001b.html#41 First OS?
https://www.garlic.com/~lynn/2001b.html#72 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001e.html#5 SIMTICS
https://www.garlic.com/~lynn/2001e.html#12 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001e.html#61 Estimate JCL overhead
https://www.garlic.com/~lynn/2001f.html#17 Accounting systems ... still in use? (Do we still share?)

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Question about credit card number

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Question about credit card number
Newsgroups: sci.crypt
Date: Sat, 02 Jun 2001 14:28:43 GMT
Chenghuai Lu writes:
Even if the CC numbers are stored in an encrypted form in the back ends, they are easy to break since all of CC numbers are encrypted using the same master key. Isn't it right?

note that a master file of CC transactions containing CC numbers is likely to be in constant use ... adding new transactions, various administration operations against transactions in progress, other types of transaction reference operations. bulk encrypting/decrypting such a file on every operation would quickly become cumbersome. Even a two level file, where the transaction-level detail has an obfuscated CC number with the CC mapping in a 2nd bulk-encrypted file, also becomes cumbersome (basically the file exists because a lot of business processes are using it).
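the "two level file" idea is essentially tokenization: the busy detail file carries only an opaque token per account, and the token-to-CC# mapping lives in the separate protected file. A minimal sketch (the HMAC keying, token length, and record layout are illustrative assumptions, not any industry scheme):

```python
# Sketch of the "two level file" scheme mentioned above: the
# constantly-used transaction file holds only an opaque token per
# account number; the token->CC# mapping lives in a separate,
# bulk-encrypted, tightly controlled file.
import hashlib
import hmac

MAPPING_KEY = b"held-only-by-the-mapping-service"   # hypothetical

def tokenize(cc_number):
    # deterministic, so the same CC# always yields the same token
    # (business processes can match/aggregate without the mapping)
    return hmac.new(MAPPING_KEY, cc_number.encode(),
                    hashlib.sha256).hexdigest()[:16]

mapping = {}         # token -> CC#; kept only in the protected file
transactions = []    # the busy detail file; never sees a raw CC#

def record_transaction(cc_number, amount):
    token = tokenize(cc_number)
    mapping[token] = cc_number
    transactions.append({"acct": token, "amount": amount})

record_transaction("4111111111111111", 19.95)
print(transactions[0]["acct"] != "4111111111111111")   # True
```

the trade-off the post points at remains: every process that needs the real number still has to go through the mapping file, which is why it "becomes cumbersome" when a lot of business processes are involved.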

part of the issue goes to authentication and access control ... similar to the medical data thread in this n.g.

an alternative is the financial industry's electronic payment object standard for all account-based transactions ... X9.59 (various refs at https://www.garlic.com/~lynn/) where account numbers used in (authenticated) x9.59 transactions are defined to not be usable in non-authenticated transactions (i.e. harvesting of "x9.59-related" account numbers doesn't provide a lot of fraud benefit since they can't be used in non-authenticated, non-x9.59 transactions).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Question about credit card number

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Question about credit card number
Newsgroups: sci.crypt
Date: Sat, 02 Jun 2001 16:39:37 GMT
Anne & Lynn Wheeler writes:
an alternative is the financial industry's electronic payment object for all account-based transactions standard ... X9.59 (various refs: at https://www.garlic.com/~lynn/) where account numbers used in

there is some class of current unauthenticated transactions that depend on account numbers and/or other customer related information .... effectively turning those items into shared-secrets ... aka just knowing such a shared-secret allows fraudulent transactions to be performed ... the attraction in harvesting of CC#s is just one example.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Price of core memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Price of core memory
Newsgroups: alt.folklore.computers
Date: Sat, 02 Jun 2001 21:02:38 GMT
jcmorris@jmorris-pc.MITRE.ORG (Joe Morris) writes:
The MFT nucleus was only 28K (up from 26K when we upgraded to version 14), meaning that in 10K we got extra features like multiple partitions [1], asynchronous console support and error recovery, and the like.

this was part of my share '68 presentation on MFT 14 and CP/67 performance optimization for a 768k 360/67:

https://www.garlic.com/~lynn/94.html#18

The kernel was expanded to 82k bytes with various resident options. The job scheduler was defined as 100k bytes (aka it needed 100k region in order to start a process/job step).

HASP was gen'ed at 118k bytes using 1/3rd 2314 track buffering.

Effectively the 82kbyte kernel plus 118kbytes of resident HASP resulted in a 200kbyte fixed storage requirement.

That left 768kbytes-200kbytes ... 568kbytes for a few regions.

in something similar to the above work ... a recent thread on job step processing overhead ("estimate jcl overhead")
https://www.garlic.com/~lynn/2001e.html#60
https://www.garlic.com/~lynn/2001e.html#61
https://www.garlic.com/~lynn/2001e.html#68

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Design (Was Re: Server found behind drywall)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Design (Was Re: Server found behind drywall)
Newsgroups: alt.folklore.computers
Date: Sat, 02 Jun 2001 21:08:04 GMT
jcmorris@jmorris-pc.MITRE.ORG (Joe Morris) writes:
There was only one problem:

The generator was in the basement. The fuel tank was in the sub-basement. The pump used to bring fuel to the generator was an electric motor.

Oops.


then there is the one about the site that spent something like $5m on diesel generator configuration ... but never bothered to do the monthly tests ... so when it finally came around to needing it ... it wouldn't start ... there was a lot of corrosion and other problems (and the whole thing had to be scrapped and replaced).

I believe normal emergency testing requirements start with something like switching to it for one hr a month.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM's "VM for the PC" c.1984??

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's "VM for the PC" c.1984??
Newsgroups: alt.folklore.computers
Date: Sun, 03 Jun 2001 12:27:39 GMT
"Andrew McLaren" writes:
I have never heard or seen any other references to this "CPx86" operating system from IBM; or anything regarding the machinations of its rise and fall (although, I did use a PC/3270 during the 1980s). So - can anyone confirm the story, or provide extra details?

I believe work on cp88 started sometime in '82 or very early '83. It was used as the basis of the xt/370 ... on the pc side, getting loaded there when the xt/370 function/feature was activated ... aka the CP kernel was running on the 370 card ... and cp88 was running on the PC side. Anybody with an xt/at/370 would have had a copy of cp88 ... but just possibly thot it was part of the xt/at/370 package that ran on the pc side.

there was some work between PM (presentation manager) and cp88 in early 84.

random refs:
https://www.garlic.com/~lynn/96.html#23 Old IBM's
https://www.garlic.com/~lynn/2000.html#5 IBM XT/370 and AT/370 (was Re: Computer of the century)
https://www.garlic.com/~lynn/2000.html#29 Operating systems, guest and actual
https://www.garlic.com/~lynn/2000e.html#52 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2000e.html#55 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2001c.html#89 database (or b-tree) page sizes

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Question about credit card number

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Question about credit card number
Newsgroups: sci.crypt
Date: Sun, 03 Jun 2001 12:31:18 GMT
roger@liamsat.com (Roger Fleming) writes:
To be fair, most people indeed cannot remember 8 digit PINs. But they could use passphrases instead, or issue X.509 certs, or at least put in a long delay (and report to security) every 3 errors. All of these, however, require a little work to transfer onto the web from a PIN based system on stateful machines, and work means eroding the bottom line.

... or support x9.59

https://www.garlic.com/~lynn/

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM's "VM for the PC" c.1984??

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's "VM for the PC" c.1984??
Newsgroups: alt.folklore.computers
Date: Sun, 03 Jun 2001 15:41:58 GMT
"Andrew McLaren" writes:
Fergus and Morris say that "in 1984 a three-day CP/x86/Mermaid strategy conference was held at Boca Raton, attended by more than 50 technologists and managers. CP/x86 was also very much discussed at the very top of the company in the shorthand 'VM for the PC'". Furthermore "...CP/x86 was so superior [to DOS] that it would almost certainly have become the primary design target". However IBM - for reasons unknown - decided to bypass CP/x86 and start the OS/2 project instead, and "there was a near revolt among the technologists". Possibly rightly so, given that OS/2 later proved to be the "FS Project" of PC operating systems ;-)

there is also the joke that some number of MFT developers moved to Boca and "re-invented" MFT as RPS on the S/1 ... and then "re-invented" it again as OS/2 on the PC. FS was much more of a paper project that was documenting every blue-sky idea that anybody had ever thot of in the history of computing (OS2 might just be considered some number of people that just liked MFT). That is different from the stuff that started out as an 88-side multi-tasker and services in support of vm/370 running on a (limited) 370 pc-board (it needed to be able to, at least, map between cp/cms 370 IO/device operations and pc-side devices/features).

things like that happen ... see related aix thread in this n.g.
https://www.garlic.com/~lynn/2001f.html#22

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Remove the name from credit cards!

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Remove the name from credit cards!
Newsgroups: alt.security
Date: Sun, 03 Jun 2001 18:43:40 GMT
andrew writes:
No, I don't mean that you should file the name off your cards. I mean that maybe the issuing bank should.

The original idea, before online authentication, was that forging cards was too difficult, and that the merchant could maybe ask for corroborating ID, also too difficult to forge.

I don't think these assumptions are valid any longer. Besides, you might just have stolen the guy's wallet.

So, if the card had no name, when you try to buy something the merchant can use online authentication, ask what your name is and see if it matches. Maybe check an online photo, too.


note that the EU has some sort of (pending?) regulation that says that all point-of-sale/retail transactions need to be as anonymous as cash.

This basically pushes things in the direction of removing name and identification at POS/retail transactions ... the problem is then how would a transaction be authenticated (i.e. in some sense the name is there so that a merchant can verify against other forms of identification; note that this applies not only to credit cards, but to all payment cards, as well as checks).

a solution is something like x9.59 in conjunction with a chip-card (aka x9.59 was designed to be used for all retail account-based transactions ... not limited to credit or debit transactions, and not limited to the internet) for online transaction authentication w/o requiring identity information (note that the various x.509 identity certificate solutions have the similar identity/privacy short-comings as names embossed on payment cards and recorded on the magstripe).

random refs:
https://www.garlic.com/~lynn/aadsm2.htm#anon anonymity in current infrastructure
https://www.garlic.com/~lynn/aadsm2.htm#privacy Identification and Privacy are not Antinomies
https://www.garlic.com/~lynn/aadsm2.htm#mauthauth Human Nature
https://www.garlic.com/~lynn/aadsm2.htm#stall EU digital signature initiative stalled
https://www.garlic.com/~lynn/aadsm2.htm#straw AADS Strawman
https://www.garlic.com/~lynn/aadsm4.htm#9 Thin PKI won - You lost
https://www.garlic.com/~lynn/aadsm5.htm#x959 X9.59 Electronic Payment Standard
https://www.garlic.com/~lynn/aadsm5.htm#xmlvch implementations of "XML Voucher: Generic Voucher Language" ?
https://www.garlic.com/~lynn/aadsm5.htm#shock revised Shocking Truth about Digital Signatures
https://www.garlic.com/~lynn/aadsm5.htm#shock2 revised Shocking Truth about Digital Signatures
https://www.garlic.com/~lynn/aadsmail.htm#mfraud AADS, X9.59, security, flaws, privacy
https://www.garlic.com/~lynn/aepay2.htm#aadspriv Account Authority Digital Signatures ... in support of x9.59
https://www.garlic.com/~lynn/aepay2.htm#morepriv [E-CARM] AADS, x9.59, & privacy
https://www.garlic.com/~lynn/aepay2.htm#privrules U.S. firms gird for privacy rules
https://www.garlic.com/~lynn/aepay2.htm#privrule2 U.S. firms gird for privacy rules
https://www.garlic.com/~lynn/aepay2.htm#privrule3 U.S. firms gird for privacy rules
https://www.garlic.com/~lynn/aepay3.htm#riskm The Thread Between Risk Management and Information Security
https://www.garlic.com/~lynn/aepay3.htm#votec (my) long winded observations regarding X9.59 & XML, encryption and certificates
https://www.garlic.com/~lynn/aepay3.htm#gap2 [ISN] Card numbers, other details easily available at online stores
https://www.garlic.com/~lynn/aepay3.htm#privacy misc. privacy
https://www.garlic.com/~lynn/aepay3.htm#x959risk2 Risk Management in AA / draft X9.59
https://www.garlic.com/~lynn/aepay3.htm#smrtcrd Smart Cards with Chips encouraged ... fyi
https://www.garlic.com/~lynn/aepay4.htm#privis privacy issues
https://www.garlic.com/~lynn/aepay5.htm#pkiillfit Some PKI references from yesterday's SlashDot
https://www.garlic.com/~lynn/aepay6.htm#harvest2 shared-secrets, CC#, & harvesting CC#
https://www.garlic.com/~lynn/aepay6.htm#dsdebate Digital Signatures Spark Debate
https://www.garlic.com/~lynn/ansiepay.htm#privacy more on privacy
https://www.garlic.com/~lynn/ansiepay.htm#x959bai X9.59/AADS announcement at BAI
https://www.garlic.com/~lynn/ansiepay.htm#theory Security breach raises questions about Internet shopping
https://www.garlic.com/~lynn/ansiepay.htm#scaads X9.59 related press release at smartcard forum
https://www.garlic.com/~lynn/98.html#0 Account Authority Digital Signature model
https://www.garlic.com/~lynn/98.html#41 AADS, X9.59, & privacy
https://www.garlic.com/~lynn/98.html#48 X9.59 & AADS
https://www.garlic.com/~lynn/99.html#165 checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#171 checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#189 Internet Credit Card Security
https://www.garlic.com/~lynn/99.html#216 Ask about Certification-less Public Key
https://www.garlic.com/~lynn/99.html#217 AADS/X9.59 demo & standards at BAI (world-wide retail banking) show
https://www.garlic.com/~lynn/99.html#224 X9.59/AADS announcement at BAI this week
https://www.garlic.com/~lynn/99.html#228 Attacks on a PKI
https://www.garlic.com/~lynn/99.html#229 Digital Signature on SmartCards
https://www.garlic.com/~lynn/2000.html#36 "Trusted" CA - Oxymoron?
https://www.garlic.com/~lynn/2000.html#60 RealNames hacked. Firewall issues.
https://www.garlic.com/~lynn/2000b.html#40 general questions on SSL certificates
https://www.garlic.com/~lynn/2000b.html#53 Digital Certificates-Healthcare Setting
https://www.garlic.com/~lynn/2000b.html#90 Question regarding authentication implementation
https://www.garlic.com/~lynn/2000c.html#26 The first "internet" companies?
https://www.garlic.com/~lynn/2000f.html#72 SET; was Re: Why trust root CAs ?
https://www.garlic.com/~lynn/2000g.html#5 e-commerce: Storing Credit Card numbers safely
https://www.garlic.com/~lynn/2000g.html#33 does CA need the proof of acceptance of key binding ?
https://www.garlic.com/~lynn/2000g.html#34 does CA need the proof of acceptance of key binding ?
https://www.garlic.com/~lynn/2001.html#67 future trends in asymmetric cryptography
https://www.garlic.com/~lynn/2001.html#73 how old are you guys
https://www.garlic.com/~lynn/2001b.html#14 IBM's announcement on RVAs
https://www.garlic.com/~lynn/2001b.html#85 what makes a cpu fast
https://www.garlic.com/~lynn/2001c.html#42 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#56 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#57 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#58 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#72 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001d.html#7 Invalid certificate on 'security' site.
https://www.garlic.com/~lynn/2001d.html#8 Invalid certificate on 'security' site.
https://www.garlic.com/~lynn/2001d.html#19 [Newbie] Authentication vs. Authorisation?
https://www.garlic.com/~lynn/2001d.html#41 solicit advice on purchase of digital certificate
https://www.garlic.com/~lynn/2001e.html#26 Can I create my own SSL key?
https://www.garlic.com/~lynn/2001e.html#33 Can I create my own SSL key?
https://www.garlic.com/~lynn/2001e.html#35 Can I create my own SSL key?
https://www.garlic.com/~lynn/2001e.html#36 Can I create my own SSL key?
https://www.garlic.com/~lynn/2001f.html#15 Medical data confidentiality on network comms
https://www.garlic.com/~lynn/2001f.html#24 Question about credit card number

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

"SOAP" is back

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "SOAP" is back
Newsgroups: alt.folklore.computers
Date: Mon, 04 Jun 2001 15:24:48 GMT
never+mail@panics.com.invalid (Michael Roach) writes:
In Seattle I saw electric diesel hybrids that would pop up the pickup when they were on streets strung for trolleybuses. In other areas they would switch over to diesel, probably running a generator to drive the wheels.

when i was a kid ... my dad used to drive one of those for a short while. the overheads ran down 5th ave(?) ... the other problem was that the pickup for the overheads would slip off ... and there was this long wooden pole that the driver used to replace the pickup on the overhead.

the other thing i remember was us kids riding downtown with my mother on the bus for shopping. i once got lost in the public market and got taken to a police station. i vaguely remember my mother visiting various small women's shoe stores on 5th ave ... one of them was called nordstrom.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM's "VM for the PC" c.1984??

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's "VM for the PC" c.1984??
Newsgroups: alt.folklore.computers
Date: Tue, 05 Jun 2001 15:08:25 GMT
"Andrew McLaren" writes:
In their book (op cit) Fergus and Morris also advance the claim that IBM got burnt so badly by FS that it took them a generation to recover. They discuss this over some 20-30 pages, so I won't repro their full argument here ;-) Basically they say that so much energy went into FS that s370 was neglected, hence Japanese plug-compatibles got a good foothold in the market; after FS's collapse a tribe of technical folks left IBM or went into corporate seclusion; and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with sycophancy and make no waves under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat (by the FS failure), hence, while still aggressive in business practices, IBM faltered at being aggressive in technology. Hence the languishing of 801, RISC, failure to exploit S/1 ... nothing would be allowed to rock the 370 boat again. Until of course the major changes of the early 90s.

misc. FS refs
https://www.garlic.com/~lynn/submain.html#futuresys

i was rather caustic at the time ... claiming that it was a case of the inmates in charge of the institution ... few if any of the people working on it seemed to ever have supported a real-live production system (i.e. like being on call 24hrs/day ... there was a strong lack of reality finger-feel to it).

there was a cult film playing at the time down in central sq ... having played continuously for 10+ years ... "queen of hearts"(?) (actually "king of hearts") ... american soldiers entering a french town where all the people had fled except the inmates from the local asylum who were wandering around the town.

and of course ... one of the projects I did as an undergraduate is credited with originating the plug-compatible market.

as to OS/2 ... i remember getting calls from boca about all the new things/rewrite that they wanted to do between release 1 and release 2 ... including looking for advice specifically about dispatching and scheduling. i don't remember any specific MFT names that had gone south to boca ... just a number of people joking about the MFT->RPS->OS2 genealogy.

pcm/oem refs:
https://www.garlic.com/~lynn/submain.html#360pcm

random FS refs:
https://www.garlic.com/~lynn/96.html#24 old manuals
https://www.garlic.com/~lynn/99.html#100 Why won't the AS/400 die? Or, It's 1999 why do I have to learn how to use
https://www.garlic.com/~lynn/99.html#237 I can't believe this newsgroup still exists.
https://www.garlic.com/~lynn/2000.html#3 Computer of the century
https://www.garlic.com/~lynn/2000f.html#16 [OT] FS - IBM Future System
https://www.garlic.com/~lynn/2000f.html#17 [OT] FS - IBM Future System
https://www.garlic.com/~lynn/2000f.html#18 OT?
https://www.garlic.com/~lynn/2000f.html#21 OT?
https://www.garlic.com/~lynn/2000f.html#27 OT?
https://www.garlic.com/~lynn/2000f.html#28 OT?
https://www.garlic.com/~lynn/2000f.html#30 OT?
https://www.garlic.com/~lynn/2000f.html#37 OT?
https://www.garlic.com/~lynn/2000f.html#40 Famous Machines and Software that didn't
https://www.garlic.com/~lynn/2000f.html#56 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#18 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001d.html#44 IBM was/is: Imitation...
https://www.garlic.com/~lynn/2001e.html#4 Block oriented I/O over IP

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Security Concerns in the Financial Services Industry

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Security Concerns in the Financial Services Industry
Newsgroups: comp.security.misc
Date: Sat, 09 Jun 2001 15:13:53 GMT
ctl8505@aol.com (CTL8505) writes:
I am currently in the process of writing a paper on security concerns in the financial services industry. If anybody has inputs plz contact me at Katarkia@aol.com

X9 is the financial industry standards body in the US; TC68 is the equivalent body at the international ISO level. Within X9, X9F specializes in cryptographic and security standards.

X9A specializes in retail payments. The X9A10 working group was responsible for the X9.59 payment object standard ... the requirement given the X9A10 group was to preserve the integrity of the financial infrastructure for all electronic retail payments.

other references to X9.59 can be found at

https://www.garlic.com/~lynn/

the above also has glossary & taxonomy for payment, financial, and security areas.

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Security Concerns in the Financial Services Industry

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Security Concerns in the Financial Services Industry
Newsgroups: comp.security.misc
Date: Mon, 11 Jun 2001 14:41:28 GMT
ctl8505@aol.com (CTL8505) writes:
I am currently in the process of writing a paper on security concerns in the financial services industry. If anybody has inputs plz contact me at Katarkia@aol.com

also look at various gao reports at www.gao.gov in the subject of financial institutions.

a report from last year (I think the Fed got the private sector to cough up something like $3b US ... $300m US apiece from ten different institutions).

Report Number: GGD-00-3

Title: Long-Term Capital Management: Regulators Need to Focus Greater Attention on Systemic Risk

Abstract: In 1998, Long-Term Capital Management (LTCM)--one of the largest U.S. hedge funds--lost more than 90 percent of its capital. The Federal Reserve concluded that rapid liquidation of LTCM's trading positions and related positions of other market participants might pose a significant threat to already unsettled global financial markets. As a result, the Fed arranged a private sector recapitalization to prevent LTCM's collapse. The circumstances surrounding LTCM's near collapse and recapitalization raised questions that go beyond the activities of LTCM and hedge funds to how federal financial regulators fulfill their supervisory responsibilities and whether all regulators have the necessary tools to identify and address potential threats to the financial system. This report discusses (1) how LTCM's positions became large and leveraged enough to be deemed a potential systemic threat, (2) what federal regulators know about LTCM and when they found out about its problems, (3) what the extent of coordination among regulators was, and (4) whether regulatory authority limits regulators' ability to identify and mitigate potential systemic risk.


.... and from 1997
Payments, Clearance, and Settlement: A Guide to the Systems, Risks, and Issues (Chapter Report, 06/17/97, GAO/GGD-97-73).

Pursuant to a congressional request, GAO provided information about the nation's systems to effect financial transactions between purchasers and sellers of goods, services, and financial assets.


... you might also find interesting reading the postings on Thread between Risk Management and Information Security

https://www.garlic.com/~lynn/aepay3.htm#riskm
https://www.garlic.com/~lynn/aepay3.htm#riskaads

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Ancient computer humor - The Condemned

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ancient computer humor - The Condemned
Newsgroups: alt.folklore.computers
Date: Mon, 11 Jun 2001 15:57:06 GMT
"Jim Mehl" writes:
Joe, glad you enjoyed it. That same box in my garage has some other gems, which I will try to scan and clean up as I find the time.Just looking at the titles we have "The Rime of the Ancient Programmer", "The Ballad of the 1401", and "The Moment 'Fore Abend".

                   THE CONDEMNED

     WHEN THE EARTH WAS CREATED, THE POWERS ABOVE
GAVE EACH MAN A JOB TO WORK AT AND LOVE.
HE MADE DOCTORS AND LAWYERS AND PLUMBERS AND THEN -
HE MADE CARPENTERS, SINGERS, AND CONFIDENCE MEN.
     AND WHEN EACH HAD A JOB TO WORK AS HE SHOULD,
HE LOOKED THEM ALL OVER AND SAW IT WAS GOOD.

HE THEN SAT DOWN TO REST FOR A DAY,
WHEN A HORRIBLE GROAN CHANCED TO COME IN HIS WAY.
THE LORD THEN LOOKED DOWN, AND HIS EYES OPENED WIDE -
     FOR A MOTLEY COLLECTION OF BUMS STOOD OUTSIDE.
"OH! WHAT CAN THEY WANT?" THE CREATOR ASKED THEN
     "HELP US," THEY CRIED OUT, "A JOB FOR US MEN."
"WE HAVE NO PROFESSION," THEY CRIED IN DISMAY,
"AND EVEN THE JAILS HAVE TURNED US AWAY."
SAID THE LORD, "I'VE SEEN MANY THINGS WITHOUT WORTH -
     BUT HERE I FIND GATHERED THE SCUM OF THE EARTH!"

     THE LORD WAS PERPLEXED - THEN HE WAS MAD.
FOR ALL THE JOBS, THERE WAS NONE TO BE HAD!
THEN HE SPAKE ALOUD IN A DEEP, ANGRY TONE ---
"FOR EVER AND EVER YE MONGRELS SHALL ROAM.
     YE SHALL FREEZE IN THE SUMMER AND SWEAT WHEN ITS COLD -
YE SHALL WORK ON EQUIPMENT THATS DIRTY AND OLD.
     YE SHALL CRAWL UNDER RAISED FLOORS, AND THERE CABLES LAY -
YE SHALL BE CALLED OUT AT MIDNIGHT AND WORK THROUGH THE DAY.
YE SHALL WORK ON ALL HOLIDAYS, AND NOT MAKE YOUR WORTH -
YE SHALL BE BLAMED FOR ALL DOWNTIME THAT OCCURS ON THE EARTH.
     YE SHALL WATCH ALL THE GLORY GO TO SOFTWARE AND SALES -
YE SHALL BE BLAMED BY THEM BOTH IF THE SYSTEM THEN FAILS.
     YE SHALL BE PAID NOTHING OUT OF SORROW AND TEARS -
YE SHALL BE FOREVER CURSED, AND CALLED FIELD ENGINEERS!"

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Ancient computer humor - Memory

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ancient computer humor - Memory
Newsgroups: alt.folklore.computers
Date: Mon, 11 Jun 2001 16:01:09 GMT
"Jim Mehl" writes:
Joe, glad you enjoyed it. That same box in my garage has some other gems, which I will try to scan and clean up as I find the time.Just looking at the titles we have "The Rime of the Ancient Programmer", "The Ballad of the 1401", and "The Moment 'Fore Abend".

...
Re: Extended vs. expanded memory just to "refresh your memory"...

"Extended memory" refers to RAM at addresses 100000-FFFFFF. Although the PCAT only permits 100000-EFFFFF.

"Expanded memory" refers to the special Intel/Lotus memory paging scheme that maps up to 8 megabytes of RAM into a single 64K window beginning at absolute address 0D0000.

"Expended memory" refers to RAM that you can't use anymore. It is the opposite of Expanded Memory.

"Intended memory" refers to RAM that you were meant to use. It is the opposite of Extended Memory.

"Appended memory" refers to RAM you've got to add to make your application run.

"Upended memory" refers to RAM chips improperly inserted.

"Depended memory" refers to ROM that you cannot live without.

"Deep-ended memory" refers to RAM that you wish you had, but don't.

"Well-tended memory" is a line from the movie "Body Heat" and is beyond the scope of this glossary.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Ancient computer humor - Gen A Sys

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ancient computer humor - Gen A Sys
Newsgroups: alt.folklore.computers
Date: Mon, 11 Jun 2001 16:06:21 GMT
"Jim Mehl" writes:
Joe, glad you enjoyed it. That same box in my garage has some other gems, which I will try to scan and clean up as I find the time.Just looking at the titles we have "The Rime of the Ancient Programmer", "The Ballad of the 1401", and "The Moment 'Fore Abend".

...

GEN A SYS
                   Jay G. Elkes
(CLOAD magazine, July 1979)

In the beginning, there was chaos and the Universe
was without form and void. The Lord looked upon His
domain and decided to declare His presence. "I be" he
said, then to correct his grammar added "am."
If the Lord had decided to work on irregular verb con-
jugation first this wouldn't have happened. God would
later curse the English language for its part, but in that
moment I. B. M. came into being.
The Lord looked out upon the I. B. M He had created
and said "This is good." That's what He said, but he
shook his head, wondered what the boys at the User
Group would say, split the light from the dark and went
to bed. Thus passed the Beginning and the end of the
first day.
On the second day, the Lord summoned I. B. M. unto
His presence. "There is chaos out there, and the Uni-
verse is without form and void. I must correct this and I
can use your help, is there anything you can do for me?"
"I can take care of form." I. B. M. replied. "Put me in
charge of computers and I will take care of form for you."
The Lord thought that this was good and said "Let
there be computers. Let I. B. M. have my powers of crea-
tion that pertain to computers and form." Thus saying,
the Lord went off to His second day's  work while I. B. M.
created the 1401.
    On the third day, while the Lord was out, I. B. M.
decided to subdivide the assigned task. "Let there be
systems that make the computer work and let them be
called Operating Systems. Let there also be systems
that make use of the computer and let them be called
Application Systems." Thus, there came into being both
Operating Systems and Application Systems, but there
were no programmers.
    The next morning I. B. M. had to give the Lord a status
report.
"What did you do yesterday?" the Lord asked.
"I invented the operating system" I. B. M. replied.
    "You did?" the Lord shuddered. "Oh dear."
"Yes I did," I. B. M. confirmed, "but I find I need
something only you can provide."
"And what is that?"
"I need programmers to use my computers, to
operate my operating system and to apply my applica-
tions."
"That can't be done now," said the Lord. "This is
only the fourth day and there won't be people until the
sixth day."
"I need programmers and I need them now. If they
can't be people they can't be people, but we have to work
this out today."
"Give me some specifications and I'll see what I can
do." I. B. M. hastily worked up specs for programmers
(are specs ever anything other than hasty) and the Lord
reviewed them.
The Lord knew the specs weren't sufficient but
followed them anyway. He also made some pro-
grammers that did just what programmers were
supposed to do, just to spite I. B. M. The programmers
and I. B. M. spent the rest of the day creating the
Assembler and FORTRAN. On the morning of the fifth
day, I. B. M. reported to the Lord once again.
    "The programmers you created for me have a
problem. They want a programming language that is
easy to use and similar to English. I told them you had
cursed English, though I still don't know why. They
wanted me to ask your indulgence on this."
The Lord had cursed English for good reason, but
didn't want to explain this to I. B. M. He said "let there be
COBOL" and that was that.
    On the status report of the next day I. B. M.
announced that computers had gone forth and multi-
plied. Unfortunately, the computers still weren't big
enough or fast enough to do what the programmers
wanted. The Lord liked the idea of going forth and
multiplying, and used the line Himself later on that day.
This sixth day being particularly busy, He declared "Let
there be MVS" and there was MVS.
On the seventh day God had finished creation and
computers had COBOL and MVS. The Lord and I. B. M.
took the day off to go fishing. I. B. M. hung a sign on the
door to help programmers in his absence.
    IF AT FIRST YOU DON'T SUCCEED, TRY TRY
AGAIN - AND HAVE THE FOLLOWING READY
BEFORE CALLING I. B. M. This was the start, and by
some accounts the end, of I. B. M. documentation.
    On the start of the second week the programmers
went over I. B. M.'s cathode ray tube directly to God.
    "We have a horrible problem," they complained.
"Our users want systems that perform according to their
expectations."
"Users!" the Lord bellowed. "Who said that you
should have users! Users are the difference between
good and bad applications, a function I have reserved
unto myself! Who authorized you to have users?"
"Well, I. B. M..."
"I. B. M.! You! You did this to my programmers! You
gave them knowledge of good and evil. For that you
shall suffer through eternity!"
"Let there be competition. Let it be called Anacom,
and Burroughs, and C.D.C."
The Lord went through the alphabet several times.
"With all this competition you shall still suffer the pain
of antitrust legislation all the days of your existence."
    This was the start of the second week, and it seems
an appropriate place to conclude our report. In case you
missed something, a summary of key points follows.
Users and their needs are and always have been a
subject of dispute. Nobody can learn English because it
is cursed by God. I. B. M. manuals are doubly cursed and
therefore twice as hard to understand. Of the program-
ming languages, only COBOL can claim divine origin.
People are people, but programmers are something
else.
Computers may be a gift from heaven, but there's no
divine help in getting them to work. Because of I. B. M.'s
initial assignment, there are more forms than anyone
knows what to do with. Finally, chaos was part of the
original state of the Universe and not a product of the
data processing industry.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Ancient computer humor - DEC WARS

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ancient computer humor - DEC WARS
Newsgroups: alt.folklore.computers
Date: Mon, 11 Jun 2001 16:09:51 GMT
"Jim Mehl" writes:
Joe, glad you enjoyed it. That same box in my garage has some other gems, which I will try to scan and clean up as I find the time.Just looking at the titles we have "The Rime of the Ancient Programmer", "The Ballad of the 1401", and "The Moment 'Fore Abend".

I have dec wars 1 & 2 ... but they are somewhat larger (20673 & 27521 bytes)

small xtract


From: ucsfcgl!ucbvax!mhtsa!ihnss!harpo!npois!jak
Date: Fri May 21 13:55:19 1982
Subject: all 7 old decwars articles
Newsgroups: net.sources

Subject: DEC WARS
Have you ever wondered what happened to all those characters eaten by
arpavax?  Well, we found most of them loitering around on our system,
taking up disk space.  So we're putting them back out on the net where
they belong.  Any resemblence to events real or imagined is purely
intentional.

        A long time ago, on a node far, far away (from ucbvax).....

XXXXX   XXXXXX   XXXX           X    X    XX    XXXXX    XXXX     X
X    X  X       X    X          X    X   X  X   X    X  X         X
X    X  XXXXX   X               X    X  X    X  X    X   XXXX     X
X    X  X       X               X XX X  XXXXXX  XXXXX        X    X
X    X  X       X    X          XX  XX  X    X  X   X   X    X
XXXXX   XXXXXX   XXXX           X    X  X    X  X    X   XXXX     X

Luke had grown up on an out of the way terminal cluster whose natives spoke
only BASIC, but even he could recognize an old ASR-33.

"It needs an EIA conversion at least," sniffed 3CPU, who was (as usual)
trying to do several things at once.  Lights flashed in Con Solo's eyes
as he whirled to face the parallel processor.

"I've added a few jumpers.  The Milliamp Falcon can run current loops around
any Imperial TTY fighter.  She's fast enough for you."

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Remove the name from credit cards!

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Remove the name from credit cards!
Newsgroups: alt.security
Date: Mon, 11 Jun 2001 17:43:26 GMT
andrew writes:
The original idea, before online authentication, was that forging cards was too difficult, and that the merchant could maybe ask for corroborating ID, also too difficult to forge.

slightly related


http://www.hypercom.com/web/news/display.asp?releaseID=346
Media Releases
Phoenix, AZ
6/7/01

Hypercom Launches Attack on Credit Card Skimming

Hypercom Chairman and Chief Strategist Calls for Industry To Combat New Dangerous Form of Skimming

(Hypercom Corporation: NYSE: HYC) -- Credit card "skimming" is an alarmingly escalating form of fraud that is victimizing consumers, causing havoc with merchants, and costing the industry hundreds of millions of dollars every year. Skimming fraud takes many forms, but most often involves a cardholder turning over physical possession of his or her card to a retail or restaurant employee, who then swipes the card through a small, illegal card reader, called a "skimmer." The skimmer copies the data encoded on the card's magnetic stripe. This information is then used to manufacture counterfeit cards that are used to rack up illegal charges. Industry sources estimate that the average skimmed credit card will generate some $2,000 in fraudulent charges before being detected.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Test and Set (TS) vs Compare and Swap (CS)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Test and Set (TS) vs Compare and Swap (CS)
Newsgroups: comp.lang.asm370
Date: Mon, 11 Jun 2001 18:49:54 GMT
"Carl Sommer" writes:
Disclaimer: I'm an absolute novice at assembler, thus I'm asking this question, regarding some inherited code.

Consider the following:

LOOP TS FLAG
BNZ LOOP
...
...
...
FLAG DC '0'

My understanding of the TS instruction is that it tests the high-order bit, and then sets the entire byte to 1's. Doesn't that mean that on the second iteration of the loop it's always going to pass? If so, shouldn't the code really be written like this?

LOOP TS FLAG
BZ WORK
NI FLAG,'80'
B LOOP
MORE blah blah blah

But I'm not convinced this works. What if another task actually has the flag? Is this a situation where I should be using the Compare and Swap (CS) instruction?

Thanks

Carl


TS was defined for multiprocessor work ... the thread/processor that actually sets the flag is the only one that clears the flag (when it is done). The byte can be used as a "lock" for thread/processor serialization.

other threads/processors that don't set the flag .... can either do TS "spin-loop" on the flag ... attempting to catch it when it has been "cleared" (by the thread/processor that actually holds/set the flag/lock) ... or go off into some fancier serialization code (possibly some form of wait ... or some combination; spin-loop a maximum number of times before going off into more complex serialization).

TS was defined in the 60s and used on the 360 model 65 and 360 model 67 multiprocessors.

Charlie Salisbury's work on fine-grain locking resulted in the Compare & Swap instruction (aka his initials are CAS ... which was the original mnemonic for compare&swap).

random ref:
https://www.garlic.com/~lynn/2001e.html#73

the task given by the "owners" of POP was to come up with a programming paradigm for CAS where it was useful in a single processor environment (not just multiprocessor) ... which gave rise to the programming notes about multi-threaded serialization that works on both single and multi processor configurations.

CAS can be used in a manner similar to TS (i.e. for setting/obtaining a lock) ... however CAS can also be used for various operations involving atomic storage update avoiding having to perform serialization via a separate locking operation.
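the atomic-storage-update pattern from those programming notes can be sketched with C11's compare-exchange, the analog of CS: fetch the old value, compute the new one, and install it only if nothing changed in between ... no separate lock needed. a sketch, not the original code:

```c
#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint32_t counter;  /* shared word updated without a lock */

void atomic_add(uint32_t n)
{
    uint32_t old = atomic_load(&counter);
    /* retry until the swap succeeds; on failure 'old' is refreshed
     * with the value some other thread installed in the meantime */
    while (!atomic_compare_exchange_weak(&counter, &old, old + n))
        ;
}
```

the same retry loop works for any single-word update (push to a list head, set a bit, etc.) where the new value is computed from the old.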

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Golden Era of Compilers

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Golden Era of Compilers
Newsgroups: comp.arch
Date: Tue, 12 Jun 2001 02:52:12 GMT
name99@mac.com (Maynard Handley) writes:
A far more useful analysis than these pseudo-technical discussions would be to look at the dynamics behind why IBM, after the success of S/360, felt it was useful to create two different architectures (AS/400 and RS/6000 --- I omit PCs as being a different issue). The question then is the extent to which those pressures are relevant to Intel, and the extent to which Intel wins by selling both x86 and Itanium for the foreseeable future vs the win (any?) in killing x86 ASAP. Realistically, is Intel in any position to kill IA64? If they do so, and AMD introduces sledgehammer, is that not the end, not necessarily of Intel, but of Intel as #1?

ot ... as/400 & rs/6000 were rather late in the game ...

Series/1, System/3, System/32, System/36, System/38, original CISC AS/400 (not to mention s/7, 1800, etc).

much of that market is currently held by PCs ... while the AS/400 has remapped to RISC and moved up market.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Golden Era of Compilers

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Golden Era of Compilers
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 12 Jun 2001 14:11:42 GMT
"aaron spink" writes:
You make an incorrect assumption that IBM planned to make 2 different architectures. The impression I've always had is that historically there wasn't a whole lot of planning between the kingdoms that used to make up IBM.

also note that the majority of the 360s & 370s were microcoded engines ... aka the processors weren't 360/370 ... and the "microcode" (aka software) running on the processor was used to emulate 360/370 ... as well as lots of other stuff.

the 370/115 & 370/125 were actually multiprocessor systems with up to 9 micro-engines on a shared memory bus. One of the microprocessors was programmed to emulate a 370 processor ... and the other (up to eight) processors implemented other functions. In the 370/115 all the processors were the same ... in the 370/125 a "faster" micro-processor was used for the 370 engine.

random ref:
https://www.garlic.com/~lynn/submain.html#360mcode

The 801 RISC currently seen in RS/6000, AS/400, apple, etc ... started out as (at least) two projects:

Fort Knox
Displaywriter

The IBM office products division had an 801/ROMP project for a displaywriter follow-on that got canceled, and the project morphed into the PC/RT ... with unix.

Fort Knox was a very large project (possibly not as large as FS) that had a lot of people on it ... that was going to standardize on 801 processors for all the (at least low-end) 360/370 microprocessor engines. Part of the motivation was that a number of the low-end microprocessor engines were delivering 370 at 10:1 (i.e. ten microprocessor instructions for every 370 instruction; to get 100kips on the 370/125 something like a 1mip engine was used) and work on 801 engines was supposedly going to make that much more efficient. Fort Knox also got killed before it came to fruition (although subsequent 360/370 processors saw a lot of 801 chips deployed as various embedded functions). I provided some amount of the analysis that got Fort Knox killed ... although the actual report was written by --- -----.

random ref:
https://www.garlic.com/~lynn/2001f.html#0

then of course there is the whole FS thing ... some of it has been running in a thread in the a.f.c ng (aka "claim that IBM got burnt so badly by FS that it took them a generation to recover")

https://www.garlic.com/~lynn/2001f.html#33

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Golden Era of Compilers

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Golden Era of Compilers
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 12 Jun 2001 14:44:37 GMT
Anne & Lynn Wheeler writes:
ot ... as/400 & rs/6000 were rather late in the game ...

Series/1, System/3, System/32, System/36, System/38, original CISC AS/400 (not to mention s/7, 1800, etc).

much of that market is currently held by PCs ... while the AS/400 has remapped to RISC and moved up market.


and then there were all the microprocessors used in various support roles in a 360/370 complex ... like control units and devices. A football-field-sized room might hold several "360/370" processors ... but the rest of the room would be filled with hundreds of control units and devices ... all with their own processors ... things like the uc.5 (which was also used in the 8100 system) and jib' (jib-prime).

one of my undergraduate projects was a 360 controller clone replacement that we (initially) built using an interdata/3 ... which originated the whole ibm plug-compatible control unit business.

https://www.garlic.com/~lynn/submain.html#360pcm

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Golden Era of Compilers

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Golden Era of Compilers
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 13 Jun 2001 06:20:54 GMT
"Jim Mehl" writes:
Lynn, are you sure about that? My recollection is that 801 was a John Cocke project that went on at Yorktown for about 7 years.

Jim Mehl


sorry ... should have said product projects.

801/risc had been around a lot in the 70s .... cpr, pl.8, etc

random ref:
https://www.garlic.com/~lynn/95.html#5

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Ancient computer humor - The Condemned

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ancient computer humor - The Condemned
Newsgroups: alt.folklore.computers
Date: Wed, 13 Jun 2001 14:23:02 GMT
cbh@ieya.co.REMOVE_THIS.uk (Chris Hedley) writes:
Ever since I've been in the industry it's always been split three ways between the least deserving parties: sales, marketing and senior management.

i just finished boyd's biography ... and there is this part where he is giving advice to somebody about what they want to do with their life ... you can either be a "doer" or a "take credit for doing" (not entirely original ... shows up many other places).

random refs:
https://www.garlic.com/~lynn/subboyd.html#boyd

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

any 70's era supercomputers that ran as slow as today's supercomputers?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: any 70's era supercomputers that ran as slow as today's supercomputers?
Newsgroups: alt.folklore.computers
Date: Thu, 14 Jun 2001 14:30:56 GMT
The.Central.Scrutinizer.wakawaka@invalid.pobox.com () writes:
TSS/370 was also notoriously slow, enough to be called "Time Spending System", back when 8M cost $1million or more. Programmers do have a habit of using all the cycles and storage that engineers throw at them and then some.

Try alt.folklore.computers for lots and lots and lots of discussion of this sort of thing; it's not really on topic for afu.

Whoops; that was the newsgroup I intended to post in....


on cp/67 running on a 360/67 (something around .5mips, the same machine that tss/360 ran on) with 768k of memory (104 4k pages after fixed kernel requirements) ... would run something like 80 users doing mixed-mode workload (interactive, compile/debug, batch, etc) ... with the CPU clocking near 100% utilization and less than 1sec response for something like the 90th percentile of interactive requests.

random refs:
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock

tss/370 typically ran on a 370/168 (2.5mips to 3mips depending on cache size, etc). The standard follow-on to cp/67 for 370s was vm/370. There were a number of vm/370 installations on 370/168s, 8mbyte to 16mbyte of memory, 300 users doing mixed-mode workload, 100% cpu utilization, .11sec response for something like the 90th percentile of interactive requests.

random ref;
https://www.garlic.com/~lynn/94.html#43

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

any 70's era supercomputers that ran as slow as today's supercomputers?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: any 70's era supercomputers that ran as slow as today's supercomputers?
Newsgroups: alt.folklore.computers
Date: Thu, 14 Jun 2001 22:27:42 GMT
cbh@ieya.co.REMOVE_THIS.uk (Chris Hedley) writes:
I often wondered how IBM systems managed to get that sort of response time on comparatively modest hardware with so many users; both VMS and UNIX struggled on similar hardware with perhaps a quarter of that number. I assume it may be because these two systems were largely interactive and ran most things in the foreground at normal timesharing priority whereas the IBM systems submitted their big processing jobs to a batch system; otherwise, I can't really figure it out.

the ibm "control units" handled keystrokes outboard of the main processor ... interrupts came in for whole lines (or sometimes whole screens) on input ... and whole screens on output.

however, that didn't prevent tremendous amount of "interactive" workload on CP/67 & VM/370 (or even TSS/360/370). The "batch" systems for those processors did frequently work as you described (i.e. workload submitted for serialized batch processing rather than possibly hundreds of concurrent virtual address spaces & associated processes contending).

I completely redid the CP/67 algorithms and pathlengths, which significantly improved things and shipped in stages as part of the standard IBM product. At the time I started on cp/67, it was bumping its head against performance degradation at 30 users, with poor interactive response and difficulty handling concurrent interactive, mixed-mode and batch workload (i.e. on a 768kbyte 360/67). By comparison, TSS/360 at the same time on the same hardware would experience 4-5 second trivial response with only 4 interactive users. As noted in a previous posting, I was able to significantly improve CP/67 performance and thruput.

The algorithm and some of the pathlength work was dropped in the initial conversion to VM/370 ... in part because of customer group lobbying (SHARE and other organizations), I was given the opportunity to ship the VM/370 "resource manager" (much of the original CP/67 algorithm and pathlength work ... a significant portion done while an undergraduate).

Part of the undergraduate work in the '60s was the "clock" replacement algorithm ... which showed up in some unixes 10 to 15 years later.

random refs:
https://www.garlic.com/~lynn/subtopic.html#wsclock
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#hone
https://www.garlic.com/~lynn/submain.html#mmap

misc. other stuff was a project as an undergraduate that adapted an Interdata/3 as a front-end terminal controller for the ibm backend (and is credited with originating the 360 PCM market)

https://www.garlic.com/~lynn/submain.html#360pcm

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

any 70's era supercomputers that ran as slow as today's supercompu

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: any 70's era supercomputers that ran as slow as today's supercompu
Newsgroups: alt.folklore.computers
Date: Thu, 14 Jun 2001 23:56:51 GMT
kaih=82pCudbHw-B@khms.westfalen.de (Kai Henningsen) writes:
Whereas I remember working on a 4381 (16 MB, login limit at 100 users), under CMS, and having a typical response time much more like 10 seconds when fully loaded.

Of course, on the one 3277GA I could use that actually hung off a real channel, typical response time was "faster than you can hit page down". All the other terminals hung off VTAM running under OS in one virtual machine. Even though that one machine ran V=R ... ouch!

Though I also remember the test phase before CMS replaced VSPC, when CMS sessions went via some even more involved hack I don't remember, which involved keyboard unlocking and relocking before the answer came. Double ouch.


I take no responsibility for VTAM. VTAM/SNA terminal response was significantly slower than local, non-VTAM managed terminals. Also, there were "field" upgrades to handle the 327x keyboard lock/unlock annoyance.

https://www.garlic.com/~lynn/99.html#69

And I got one complex to run even faster when "local" channel-attached (non-VTAM) 327x controllers were remoted at the end of HYPERchannel emulated channels, along with an improvement in interactive response (system performance improved 10-15%).

https://www.garlic.com/~lynn/subnetwork.html#hsdt

furthermore the VTAM crowd didn't look kindly on me since the controller I worked on as an undergraduate that gave rise to the 360/370 PCM business was a "terminal/line" controller (aka in competition with the VTAM/SNA family of products).

https://www.garlic.com/~lynn/submain.html#360pcm

A later effort doing something similar

https://www.garlic.com/~lynn/99.html#65
https://www.garlic.com/~lynn/99.html#66
https://www.garlic.com/~lynn/99.html#67

CMS replacing "your" VSPC ... or in IBM terms?????

CMS predated VSPC ... which was initially called PCO (personal computing option) but a survey of foreign acronyms resulted in it being changed to VSPC (some reference to the use of PCO in France).

In the early '70s there was a concerted corporate effort to "do in" CMS ... using both VSPC (aka PCO) and TSO. Part of the effort involved a PCO performance modeling team that was generating "interactive response" numbers for various scenario modeling benchmarks, and the corporation forcing the CMS group to perform similar (but real) performance benchmarks (at one point nearly the whole development team was sidetracked into running real CMS benchmarks that were compared against the VSPC/PCO modeled benchmarks). The modeling benchmarks showed PCO approximately the same response as, or slightly better than, CMS. When they actually got real live VSPC/PCO benchmarks ... it turned out that VSPC/PCO was ten times slower than the modeling numbers had been indicating (and ten times slower than CMS).

Another interesting case was a VM/CMS & MVS/TSO benchmark "bakeoff" done by CERN at approximately the same time (CERN was a large VM/CMS installation ... and one could claim that the VM/CMS GML base at CERN was at least partially responsible for giving rise to HTML).

CERN generated a detailed report of the comparison. Various IBM parties immediately classified the document as IBM CONFIDENTIAL RESTRICTED (available on a need-to-know-only basis, the 2nd highest classification, below numbered and personally signed-out copies). The report wasn't classified to non-IBMers ... but internally inside IBM, only IBMers with an authorized need-to-know were allowed to have a copy (aka the CERN report did nothing to support any effort to replace VM/CMS with either VSPC/PCO or MVS/TSO).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Price of core memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Price of core memory
Newsgroups: alt.folklore.computers
Date: Fri, 15 Jun 2001 00:50:28 GMT
glass2 writes:
Well, no, and I was hoping that no one was going to ask, since by doing that, I give away the secret[1]:

CP Q CPUID
CPUID = FF64072074700000

[1] Ok, for those not in the know, the 7470 CPU model corresponds to a P/370 card, which is a S/370 on a card that plugs into a PS/2 and gives you a full and complete S/370 (with a very minor exception or two, such as 4K storage keys). Of course, it's been obsolete for quite some time, having been replaced by the P/390 cards (full ESA/390 on a card), although I've heard a rumor that those may have been withdrawn from marketing.

Dave


minor related information
https://www.garlic.com/~lynn/2000e.html#55
https://www.garlic.com/~lynn/2000e.html#56

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Logo (was Re: 5-player Spacewar?)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Logo (was Re: 5-player Spacewar?)
Newsgroups: alt.folklore.computers,rec.games.video.classic,comp.lang.logo,alt.sys.pdp10
Date: Fri, 15 Jun 2001 03:31:03 GMT
John Sauter writes:
One item in the chart surprised me: that you could put a 2361 on a model 50. I didn't think the IBM 360/50 had an external memory bus. John Sauter (J_Sauter@Empire.Net)

i believe some number of 360/m50 machines had 8mbyte "ampex" memory.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

any 70's era supercomputers that ran as slow as today's supercomputers?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: any 70's era supercomputers that ran as slow as today's  supercomputers?
Newsgroups: alt.folklore.computers
Date: Fri, 15 Jun 2001 17:26:33 GMT
Lars Poulsen writes:
It might be clearer to those who don't know your oeuvre, if you phrased this as "the sweeping clock hand page replacement algorithm for virtual memory paging systems". (When I first read this, I was wondering why the time-of-day clock would need replacement!)

aka, the algorithm acted like the hands of a clock "sweeping" around storage ... as opposed to being time-based. In fact, earlier time-based page harvesting algorithms tended to do poorly, partly because the consumption of virtual pages tended to vary with load, configuration, and demand ... and not with a fixed time interval (i.e. the sweeping of the clock hands and the harvesting of pages was specifically demand driven and not time driven as in earlier implementations).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

any 70's era supercomputers that ran as slow as today's supercomputers?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: any 70's era supercomputers that ran as slow as today's   supercomputers?
Newsgroups: alt.folklore.computers
Date: Fri, 15 Jun 2001 17:41:31 GMT
Joe Pfeiffer writes:
And I was surprised to see that anybody had devoted much thought to crystals and generating pulse and such (as opposed to controlling clock skew, of course).

i was actually involved for 3-4 months with several other people worrying about the 370 TOD clock ... 64 bits ... where bit 12 (or 51, depending on which end you counted from) represented microseconds.

The original spec. called for time zero to be the first day of the 20th century ... which we spent some time researching, as well as what to do about leap seconds. I think that so many people got the first day of the 20th century wrong that they finally changed the spec. to be january 1st, 1900.

https://www.garlic.com/~lynn/2000.html#2
https://www.garlic.com/~lynn/2000.html#4
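the epoch arithmetic above can be sketched in C: with bit 51 (counting bit 0 as the leftmost of 64) carrying microseconds, shifting the TOD value right 12 yields microseconds since january 1st, 1900, and a fixed 1900-to-1970 offset re-expresses it as a unix timestamp. a sketch that, like the original spec discussion, ignores leap seconds:

```c
#include <stdint.h>

/* seconds from 1900-01-01 to 1970-01-01: 25,567 days * 86,400 */
#define SECS_1900_TO_1970 2208988800ULL

uint64_t tod_to_unix_seconds(uint64_t tod)
{
    uint64_t usecs = tod >> 12;          /* bit 51 = 1 microsecond */
    return usecs / 1000000ULL - SECS_1900_TO_1970;
}
```

feeding it the TOD value corresponding to the unix epoch (the 1900-to-1970 offset in microseconds, shifted left 12) should give zero.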

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

any 70's era supercomputers that ran as slow as today's supercomputers?

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: any 70's era supercomputers that ran as slow as today's  supercomputers?
Newsgroups: alt.folklore.computers
Date: Fri, 15 Jun 2001 19:57:25 GMT
ab528@FreeNet.Carleton.CA (Heinz W. Wiggeshoff) writes:
I'm not clear on the concept here. I was under the impression that the Least Recently Used paging method was about as optimal for general usage as it got. Sure, one can increase the level of optimization for specific cases but LRU seemed to be King of the Hill. Show me the error in my thinking.

true LRU requires that all accesses to 4k memory "chunks" are maintained in true order (i.e. every hardware operation that involves a storage access would update the ordering) ... which is very expensive. Various mechanisms are used to approximate true LRU without ordering every page on every single storage access; clock is one of the implementations that approximates LRU reasonably well.

OPT is the algorithm that optimally selects pages for harvesting ... but that requires perfect foreknowledge.

In the early '70s we did some detailed storage reference traces across a wide range of applications and systems and then ran simulated true LRU against various forms of clock and other page harvesting algorithms, as well as comparing them to OPT (optimal page harvesting). Straight clock could typically come within 10-15% of true LRU. However, a very particular variation of clock was shown to be anywhere from slightly better to 10-15% better than true LRU.

Basically, generic LRU (whether true LRU or LRU-approximations like most clock-based harvesting) is based on the assumption that storage locations that have been recently referenced are the most likely to be referenced in the future. An easy violation of that assumption is page-mapped files that are being processed sequentially ... in fact, the most recently accessed storage is the least likely (not most likely) to be referenced in the future (aka most-recently-used page harvesting would be better than least-recently-used page harvesting in this particular situation).

Basically the clock variation that would beat LRU ... would use LRU-approximation when LRU was working well but automagically switch to random page harvesting in situations where LRU was not performing well.
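the basic clock sweep (without the "automagic" variation, which isn't reproduced here) can be sketched in a few lines of C: each frame has a hardware-set reference bit; the hand sweeps around storage, giving referenced frames a second chance by clearing the bit, and harvests the first unreferenced frame it finds. a minimal illustrative sketch, not the CP/67 code:

```c
#include <stdbool.h>
#include <stddef.h>

#define NFRAMES 8

struct frame {
    bool referenced;   /* set by "hardware" on any access to the page */
    int  page;         /* resident virtual page number, -1 if free */
};

static struct frame frames[NFRAMES];
static size_t hand;    /* current position of the clock hand */

/* sweep until an unreferenced frame is found; returns its index */
size_t clock_select(void)
{
    for (;;) {
        struct frame *f = &frames[hand];
        size_t victim = hand;
        hand = (hand + 1) % NFRAMES;     /* advance the hand */
        if (f->referenced)
            f->referenced = false;       /* second chance */
        else
            return victim;               /* harvest this frame */
    }
}
```

because the hand only moves when a page is needed, the sweep rate is demand driven: under light paging load a frame has a long interval to get re-referenced, under heavy load a short one.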

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

any 70's era supercomputers that ran as slow as today's supercomputers?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: any 70's era supercomputers that ran as slow as today's  supercomputers?
Newsgroups: alt.folklore.computers
Date: Fri, 15 Jun 2001 21:26:35 GMT
Anne & Lynn Wheeler writes:
Basically the clock variation that would beat LRU ... would use LRU-approximation when LRU was working well but automagically switch to random page harvesting in situations where LRU was not performing well.

the automagically was the fun part ....

basically, clock is good because it is a very close approximation to true LRU and at the same time can be implemented very, very efficiently (small compact code and short pathlength). the "automagic" code looked, tasted, and smelled almost exactly the same as any normal clock .... there was no code that tested for doing LRU selection or random selection ... it was just the normal clock code ... and poor LRU-approximation operation was frequent enough that being able to dynamically switch back & forth between LRU and random resulted in better performance than "true" LRU.

while automagic may give extraordinarily good performance ... the downside can be long term maintenance (i.e. incorporated in a standard product and still being widely distributed 10, 20 years later) .. few if any others actually understand the magic.

slightly related is that true & approximate LRU page harvesting frequently behave badly when LRU and FIFO give the same results, i.e. ordering based on true LRU shows no difference from a FIFO ordering ... this tends to be exhibited in "chase the tail" cyclic scenarios.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

any 70's era supercomputers that ran as slow as today's supercomputers?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: any 70's era supercomputers that ran as slow as today's supercomputers?
Newsgroups: alt.folklore.computers
Date: Sat, 16 Jun 2001 16:30:13 GMT
Brian Inglis writes:
VM ran everything with IIRC 3 levels of timesharing priority -- <1 quantum => long terminal I/O wait, ~1 quantum => short disk I/O wait, 8 quanta => compute bound sessions, roughly. No batches processed here, and no JCL required. You could run most batch OS programs in an interactive session or run a batch OS as a session of its own.

the dispatching priority was fairshare ... something i originated while an undergraduate in the '60s ... got into cp/67 ... then initially dropped from vm/370 but then re-instituted as part of the resource manager for vm/370. the following is the "blue letter" announcement for the resource manager (posted here last month 25 years after the May 11th, 1976 announcement):

https://www.garlic.com/~lynn/2001e.html#45

in effect all three levels had a form of deadline dispatching priority based on fairshare resources ... the algorithm attempted to bias the resource-consumption accounting toward whichever resource represented the system bottleneck. With no real memory or I/O bottlenecks, the fairshare algorithm was effectively based on straight cpu consumption ... dynamically, as other resources became bottlenecks, the algorithm would fold fairshare consumption of those other resources into calculating the dispatching deadline.

while there were three quantum sizes (effectively trivial interactive, mixed-mode and background) ... the fairshare calculations were the same but proportional to the size of the quantum. prior to the resource manager, VM/370 just had two quantum levels ... the resource manager introduced the third.

trivial interactive tended to get better dispatching deadlines both because the quantum was smaller and because such users tended to have been using fewer resources.
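one way to read the deadline scheme above is sketched below; this is a hypothetical illustration (the actual resource-manager arithmetic is not reproduced here): a user's dispatch deadline is "now" pushed out in proportion to the quantum size and to how far the user's recent consumption runs ahead of its fair share of the bottleneck resource, so small-quantum, light-consumption interactive work sorts to the front.

```c
/* hypothetical fairshare deadline sketch: 'consumed' and
 * 'fair_share' are in the same units (consumption of whatever
 * resource is the current bottleneck over the recent interval) */
double dispatch_deadline(double now, double quantum,
                         double consumed, double fair_share)
{
    double ratio = consumed / fair_share;  /* > 1.0 = over fair share */
    if (ratio < 1.0)
        ratio = 1.0;        /* under-share users aren't penalized further */
    return now + quantum * ratio;          /* earlier deadline runs first */
}
```

with this shape, a trivial interactive user (small quantum, ratio near 1) always lands ahead of a compute-bound user running at several times its fair share.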

The resources manager also contained significant reworking of internal vm/370 structure for multiprocessor support (not mentioned in the announcement letter) and restructuring of the kernel serialization process which eliminated all known serialization failures as well as all known situations giving rise to "zombie" processes.

As part of getting ready to release the resource manager, a new set of procedures was developed for automated benchmarking, and over 2000 benchmarks were executed, taking three months elapsed time ... which validated the resource manager across a wide range of loads, configurations and scheduling policy settings. The benchmarks also included extreme outliers ... like workload that was ten times more extreme than any seen in normal operation (in one case the paging queue was so long that it would take 1 second elapsed time to service a page fault).

Prior to the rework of the basic system serialization function, the extreme outliers were guaranteed to crash the kernel.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

any 70's era supercomputers that ran as slow as today's supercomputers?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: any 70's era supercomputers that ran as slow as today's supercomputers?
Newsgroups: alt.folklore.computers
Date: Sat, 16 Jun 2001 17:26:34 GMT
Brian Inglis writes:
IBM VM and utilities were designed to keep code path lengths short on common operations -- I've heard numbers like 1000 instructions executed per simple screen interaction. The screen editor was designed to let you enter or modify 24 lines locally or request a number of operations with each screen sent, and a full screen could be blasted out each time if required. Terminal I/O command streams were also craftily coded to send the minimal amount of screen data, then hang on a read of zero bytes; on receiving the interrupt, another terminal command read only the modified sections of the screen into memory, and I think the app updated a memory image of the screen, processed and updated it, and sent as above.

misc. other stuff from long ago and far away ... at one point I had gotten the CP/67 pathlength to 1) take a page fault, 2) select a page for replacement, 3) schedule the page read, 4) task-switch, 5) take the i/o interrupt from the page read, and 6) task-switch back, to under 350 instructions. That also included a pro-rated portion of page writes (i.e. the percentage of pages selected for replacement that had to be written as part of harvesting). This included the ability to "beat" true-LRU across a wide range of loads and configurations.

as to the 3270 i/o ... there were some peculiar scheduling issues. attached is an extract from a performance analysis of 3270 and non-3270 terminal i/o (and fixes) done 20 years ago. A similar problem (but with different causes) showed up recently running a large number of linuxes under vm ... so i dug out the original analysis and fix description and forwarded it to the people working on the problem (the following has been abbreviated ... in some cases to protect the innocent).

... email from long ago and far away

Date: 04/20/83 09:28:20
From: wheeler

there are several fixes that I know of. I did two: one was a modification to CMS (which I believe is being included in SP2 or SP3) and the other is a modification to CP. The CMS modification causes all output lines in the CMS terminal output buffer to be chained together and written with one SIO. This cuts down on the q-drops/q-adds. It is similar to the block write for 3270 screens, except that that is a modification to CP, works only for 3270 terminals, and CMS still goes thru drop/add ... but very, very quickly (i.e. immediately). My change works for all types of terminals.

My other change was to CP. I added code to cp to 1) remember how long the previous idle duration was, 2) not drop from queue if the previous idle was less than a threshold (about 100 mills), and 3) provide a timer-driven pre-emption scheme for the pseudo-idle in-q virtual machines whenever there was an eligible list ... the code didn't bother when there wasn't an eligible list.

There were some other changes done to CP by other people. About a year ago there was a severe performance situation at the <some gov. TLA> (three letter agency). Over an extended period of time, almost everybody from POK VM made a call at the account in an attempt to resolve the issue. A lot of essentially random performance "improvements" were made to the system under direction of both POK & YKT personnel. In almost all cases the customer backed the changes off after running for some period of time. Because the <some gov. TLA> runs almost exclusively ascii, 1200-baud terminals, the 3270 block write changes (that were shipped as part of VM/SP1) had no effect on their operation. At some point XXXXXX visited and created a change that would cause a 300 millisecond q-drop delay for all idle drops. Under heavy load that actually caused performance to decrease rather than increase. That CP change may have some official IBM standing.

Eventually, the SE on the account arranged to have my CMS modifications applied to the <some gov. TLA> system with the result that there was a reduction in q-add/drop activity from about 65/sec to 43/sec.

I'm not aware that my CP modifications have any official IBM standing. It involved more code than the 300-millisecond q-drop delay .... but was also more uniform in creating a performance improvement rather than performance degradation.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

I almost forgot, I did another CP modification which might have some official IBM standing, which applies to 3270 full-screen operation. In the past a full screen operation consisted of:


attn interrupt  ->     add to queue
full screen read
wait
                            q-drop
read completes  ->     add to queue
process data
update screen with write
wait
                            q-drop
write completes ->     add to queue
mark write as complete
wait
                            q-drop

CP has code that attempts to recognize when there is a "high-speed" device I/O active. In such cases there is a prediction that the virtual machine is not really idle and should be left in queue. The problem occurs in determining, of the I/O currently active for the virtual machine, which of it is "high-speed" and which is "low-speed". It turns out that CP correctly identifies that local 3270 operations are in fact high-speed i/o (comparable in duration to disk i/o) and will not drop the virtual machine from queue. In the case of CMS, tho, the virtual machine is getting dropped from queue and it is taking three queue transitions instead of one to process a full screen operation. The problem is in the implementation. First, the implementation is rather complex (and therefore prone to errors) and second, the implementation uses the virtual device type to make the high-speed/low-speed determination. It turns out that the virtual console for CMS is virtual device type 3215, which is a slow speed device (it is 3215 even if the real device type is 327x).

To solve the problem I resurrected some CP/67 code which was the original way I designed and implemented the high-speed/low-speed determination logic. A MEMO describing this in some additional detail is on its way. This high-speed device busy code may in fact have some official IBM standing.


... snip ... top of post, old email index

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

JFSes: are they really needed?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: JFSes: are they really needed?
Newsgroups: comp.arch.storage,alt.folklore.computers
Date: Sat, 16 Jun 2001 19:20:06 GMT
David Brower writes:
Non journaling filesystems are not "faster" for many things, if they are faster for anything. When you use a non-journaling system, you are betting that your recovery time isn't going to be frequent or bad enough that the extra money you would have had to spend for a JFS was justified. If the JFS is/was free, and performs about the same, you'd be silly not to use the JFS.

There are distinctions to be made for systems that journal metadata, obviating the need for fsck/chkdsk recovery, and those that journal data and metadata. Those that journal data too tend to be slower, but need not be depending on the access pattern.

-dB


note that the original (unix) journaling file system for AIX (predating other implementations) used 801/risc "database" memory for the metadata aka all the at&t svid filesystem metadata was mapped into a "database" memory segment ... basically an option that could track "lines" of storage that were modified.

at commit points ... it would scan the metadata segment looking for lines that were "dirty" and journal them. The "database" memory concept was that the application no longer needed to worry about a "transaction" API ... just about commit points ... and the hardware ability to track dirty storage lines could be used "behind" the scenes to relatively painlessly add transaction journaling to existing applications.

the palo alto group ... looking to port journaling to a non-801/risc platform, redid the AIX JFS with explicit transaction calls (i.e. data to be updated was tracked explicitly with transaction API calls inserted into the filesystem implementation). It turned out that this implementation was noticeably faster than the "database" memory implementation (even on the same hardware platform) because it eliminated the commit post-scanning for dirty storage lines (in part because the total metadata space was significantly larger than the amount of data involved in the typical commit).
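a minimal sketch of the difference described above, simulated in python (the class and method names are illustrative, not the AIX JFS code): one model sets dirty bits "behind the scenes" and must scan the whole metadata segment at commit, the other records exactly what was touched via explicit transaction calls.

```python
class ScanningJournal:
    """Hardware-style model: every metadata "line" carries a dirty bit;
    commit must scan the entire segment to find the dirty ones."""
    def __init__(self, nlines):
        self.lines = [0] * nlines
        self.dirty = [False] * nlines
        self.scanned = 0          # lines examined at commit (the overhead)

    def write(self, i, value):
        self.lines[i] = value
        self.dirty[i] = True      # tracked "behind the scenes", no API call

    def commit(self):
        log = []
        for i in range(len(self.lines)):   # scan the whole segment
            self.scanned += 1
            if self.dirty[i]:
                log.append((i, self.lines[i]))
                self.dirty[i] = False
        return log

class ExplicitJournal:
    """Explicit-API model: the filesystem tells the journal what changed,
    so commit only touches the updated entries."""
    def __init__(self, nlines):
        self.lines = [0] * nlines
        self.touched = []

    def write(self, i, value):
        self.lines[i] = value
        self.touched.append(i)    # explicit transaction call

    def commit(self):
        log = [(i, self.lines[i]) for i in self.touched]
        self.touched.clear()
        return log
```

both produce the same journal records, but with a large metadata space and a small update the scanning model examines every line while the explicit model touches only the changed ones ... which is the effect that made the explicit port faster.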

random refs:
https://www.garlic.com/~lynn/2001f.html#0

at the next level ... the logging penalty tends to show up most when there is lots of additional arm contention ... i.e. the need to write the logged data moves the arm out of position and then it has to be immediately moved back. however, it could be possible to improve filesystem performance with logging if there were relatively small amount of "recoverable" storage for log data (eliminating contention with disk arm) and using the knowledge that metadata was being logged in order to make metadata "lazy" writes even lazier still (possibly net reduction in total writes).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

JFSes: are they really needed?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: JFSes: are they really needed?
Newsgroups: comp.arch.storage
Date: Sat, 16 Jun 2001 19:52:36 GMT
"Bill Todd" writes:
The 1991 date above applies to the first (that I know of) LFS, not JFS. The first JFS I'm aware of (Cedar) dates from the mid-to-later-'80s (1986?) - sorry if my 'almost' above was misleading in that regard.

As far as commercial JFSs go, I think that the VxFS release date predates NTFS, and AIX's JFS may as well (though not by much). SGI's XFS came in around then too, but likely first shipped a bit later (and AdvFS was a bit later as well). The Episode local (JFS) file system for the Andrew File System (from CMU, later AFS and OSF/DCE/DFS commercial products from Transarc) predates NTFS too.


log structured file system ... from the 91 time frame ... by some of the same people at berkeley that did the BSD fast file system. basically all writes were to a "new" location; not just metadata ... but file data also. periodically there was "cleaning" that had to be done to recover (garbage collect) disk records that contained stale data.

random refs:
https://www.garlic.com/~lynn/93.html#28
https://www.garlic.com/~lynn/93.html#29

from Margo Seltzer's thesis
File System Performance and Transaction Support
by Margo Ilene Seltzer
Doctor of Philosophy in Computer Science
University of California at Berkeley
Professor Michael Stonebraker, Chair

This thesis considers two related issues: the impact of disk layout on file system throughput and the integration of transaction support in file systems. Historic file system designs have optimized for reading, as read throughput was the I/O performance bottleneck. Since increasing main-memory cache sizes effectively reduce disk read traffic [BAKER91], disk write performance has become the I/O performance bottleneck [OUST89]. This thesis presents both simulation and implementation analysis of the performance of read-optimized and write-optimized file systems. An example of a file system with a disk layout optimized for writing is a log-structured file system, where writes are bundled and written sequentially. Empirical evidence in [ROSE90], [ROSE91], and [ROSE92] indicates that a log-structured file system provides superior write performance and equivalent read performance to traditional file systems. This thesis analyzes and evaluates the log-structured file system presented in [ROSE91], isolating some of the critical issues in its design. Additionally, a modified design addressing these issues is presented and evaluated. Log-structured file systems also offer the potential for superior integration of transaction processing into the system. Because log-structured file systems use logging techniques to store files, incorporating transaction mechanisms into the file system is a natural extension. This thesis presents the design, implementation, and analysis of both user-level transaction management on read and write optimized file systems and embedded transaction management in a write optimized file system. This thesis shows that both log-structured file systems and simple, read-optimized file systems can attain nearly 100% of the disk bandwidth when I/Os are large or sequential. 
The improved write performance of LFS discussed in [ROSE92] is only attainable when garbage collection overhead is small, and in nearly all of the workloads examined, performance of LFS is comparable to that of a read-optimized file system. On transaction processing workloads where a steady stream of small, random I/Os are issued, garbage collection reduces LFS throughput by 35% to 40%.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

JFSes: are they really needed?

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: JFSes: are they really needed?
Newsgroups: comp.arch.storage
Date: Sat, 16 Jun 2001 20:23:41 GMT
& usenet '93 paper ....
Seltzer et al. An Implementation of a Log-Structured File System for UNIX

An Implementation of a Log-Structured File System for UNIX

Margo Seltzer -- Harvard University
Keith Bostic -- University of California, Berkeley
Marshall Kirk McKusick -- University of California, Berkeley
Carl Staelin -- Hewlett-Packard Laboratories

ABSTRACT

Research results [ROSE91] suggest that a log-structured file system (LFS) offers the potential for dramatically improved write performance, faster recovery time, and faster file creation and deletion than traditional UNIX file systems. This paper presents a redesign and implementation of the Sprite [ROSE91] log-structured file system that is more robust and integrated into the vnode interface [KLEI86]. Measurements show its performance to be superior to the 4BSD Fast File System (FFS) in a variety of benchmarks and not significantly less than FFS in any test. Unfortunately, an enhanced version of FFS (with read and write clustering) [MCVO91] provides comparable and sometimes superior performance to our LFS. However, LFS can be extended to provide additional functionality such as embedded transactions and versioning, not easily implemented in traditional file systems.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Test and Set (TS) vs Compare and Swap (CS)

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Test and Set (TS) vs Compare and Swap (CS)
Newsgroups: comp.lang.asm370
Date: Sat, 16 Jun 2001 23:00:58 GMT
Neil W Rickert <rickert+nn@cs.niu.edu> writes:
The idea is that the first process to hit the code flows through. Second and latter processes spin on the loop. When the first process has finished the protected computation, the MVI resets the flag, and that allows one of the other processes to enter the critical region.

Yes CS is more flexible, and can often be used more efficiently. But to be maximally effective, it requires a different coding approach from the above.


a typical CS operation loads the current value of the storage location, computes the new value of the storage location (which can be something as simple as incrementing the current value) and then does an atomic replace, iff the current value hasn't changed in the meantime. The original idea for CS was that storage locations requiring simple atomic update could be done w/o implementing a blocking lock bracketing the operation (aka incrementing/decrementing counters, management of control blocks on LIFO, push/pop chains, etc).

to implement a CS locking scheme, adopt the convention that the unlocked "state" is zero .. so the code first zeros a register ... and to be really fancy, load the "thread id" of the current process (assuming never equal to zero; tcb address or whatever) into the replacement register and then execute compare&swap (doing an atomic replace, iff the storage location value is currently zero). This is logically equivalent to TS (although in TS the storage location "unlocked state" is implicitly zero, while it has to be explicitly specified in a CS convention).

This CS approach not only "acquires" the lock if nobody currently "owns" the lock ... but acquires ownership using the current thread/process identifier. The CS will fail if the storage location isn't already zero and the code can spin (as possible in the TS case) ... or do whatever else it deems necessary. When the thread/process that "owns" the lock is complete ... it zeros the storage location. There are various kinds of debugging that can be done ... for instance it can double check that it actually still "owns" the lock in question by checking its current thread/process id with the value stored in the lock.
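the convention above can be sketched in a few lines, with compare-and-swap simulated in python (the names are illustrative; real S/370 CS is a single atomic instruction, here a lock stands in for the hardware atomicity):

```python
import threading

class Word:
    """A storage word with a simulated atomic compare-and-swap."""
    def __init__(self):
        self.value = 0                    # 0 == unlocked, by convention
        self._guard = threading.Lock()    # stands in for hardware atomicity

    def cs(self, old, new):
        """Atomically replace value with new, iff it currently equals old."""
        with self._guard:
            if self.value == old:
                self.value = new
                return True
            return False

def acquire(lock, thread_id):
    # spin until the word is zero, then take ownership with our own id
    while not lock.cs(0, thread_id):
        pass

def release(lock, thread_id):
    # debugging aid mentioned above: verify we still own the lock
    assert lock.value == thread_id, "releasing a lock we don't own"
    lock.cs(thread_id, 0)
```

a second acquirer's cs fails while the word is non-zero (the analog of TS spinning), and the stored thread id makes the ownership check on release possible.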

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

any 70's era supercomputers that ran as slow as today's supercomputers?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: any 70's era supercomputers that ran as slow as today's supercomputers?
Newsgroups: alt.folklore.computers
Date: Mon, 18 Jun 2001 12:51:00 GMT
cbh@ieya.co.REMOVE_THIS.uk (Chris Hedley) writes:
From what I gather, VM/CMS (and, for that matter, some DEC systems like TOPS) managed to get the "right" balance between interactive and batch in many people's opinions. I've often seen claims that batch-heavy systems like MVS are a bit of a pain to work on and at the other extreme, even now, UNIX is poorly served with regard to batch processing, leading to the above situation where there's excessive contention for virtual memory and bigger run-queues.

note that the E/B profile of the aggregate workload significantly changed over the years. keeping a 360/67 100% busy in the 60s was totally different than keeping a 3081 100% busy in the early '80s ... as per table i've posted numerous times

https://www.garlic.com/~lynn/93.html#31


system          3.1L            HPO     change
machine         360/67          3081K

mips            .3              14      47*
pageable pages  105             7000    66*
users           80              320     4*
channels        6               24      4*
drums           12meg           72meg   6*
page I/O        150             600     4*
user I/O        100             300     3*
disk arms       45              32      4*?perform.
bytes/arm       29meg           630meg  23*
avg. arm access 60mill          16mill  3.7*
transfer rate   .3meg           3meg    10*
total data      1.2gig         20.1gig  18*

effectively the number of (interactive) users increased comparably to the capacity of the i/o subsystem over a 15 year period, not proportionally to the processor increase. various references to E/B ratios had also started out in terms of bytes and mips ... but have since shifted to bits and mips (i.e. again mip rate increases significantly larger than i/o rate increases).

starting in the late '70s ... i noticed that the dynamic adaptive scheduling I was doing was starting to have additional difficulty "scheduling to the bottleneck" (i.e. biasing the fair share scheduling towards the most bottlenecked or constrained resource) because while the I/O capacity of the system was increasing ... processing power and other resources were increasing much faster.

when I first started highlighting the situation ... the disk division assigned their performance group to refute the contention (i.e. that the relative system performance of disks had declined by a factor of ten times ... aka disks had increased by four times, the system had increased by 40 times, so the relative system performance of disk had declined by ten times).

after six months or so of work, the performance group concluded that I had actually somewhat understated the relative disk system performance decline and that it was actually worse than 10*.
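the arithmetic behind the "ten times" figure above is just a ratio of the two growth factors (the numbers are the ones stated in the post, not additional data):

```python
# disk throughput grew roughly 4x over the period while overall system
# throughput grew roughly 40x; performance of disk *relative* to the
# rest of the system therefore declined by the ratio of the two.
disk_growth = 4.0
system_growth = 40.0
relative_decline = system_growth / disk_growth   # 10x decline
```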

This gave rise in the late '70s and early '80s to investigating system strategies to better take advantage of what disk characteristics there were (like data transfer rate had increased faster than avg. access, so could the system do a better job of aggregating transfers).

you see it in others' later work in the late '80s and early '90s, like raid and various other strategies.

slightly related postings
https://www.garlic.com/~lynn/2001f.html#58 JFSes: are they really needed?
https://www.garlic.com/~lynn/2001f.html#59 JFSes: are they really needed?
https://www.garlic.com/~lynn/2001f.html#60 JFSes: are they really needed?

random refs:
https://www.garlic.com/~lynn/subtopic.html#fairshare

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

First Workstation

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: First Workstation
Newsgroups: alt.folklore.computers
Date: Tue, 19 Jun 2001 13:51:13 GMT
dkanecki writes:
My first workstation was the HP-9000 originally with HP-Pascal 2.1 and two 60 MB Bernoulli drives. The drives were numbered 14-19 and 20-25 for each 60 MB disk.

my first workstation was a 360m30 the summer after my sophomore year ... they let me have the machine room from 8am sat. until 8am monday morning. This continued when school started in the fall. During the summer I could get some sleep after being up 48 hrs ... but once school started in the fall, it was take a shower and head off to classes.

Machine room started with a 360m30 and a 709 ... which were then both replaced with a 360m67 (my 2nd workstation ... at least for 48hrs straight on the weekend).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Converting Bitmap images

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Converting Bitmap images
Newsgroups: alt.folklore.computers
Date: Thu, 21 Jun 2001 00:45:15 GMT
Brian Inglis writes:
You just need an IBM 3279C with APA. You chop up or resample the bitmap into (16x16?) character cell chunks (80x32? of those), with each pixel encoded as 3 bit RGB (1 bit/plane), assign each chunk to a character code, replace the bitmap cells with the character code, download the font to the display, then send the character codes to the display. I think SIO to the terminal address (009/015?) with CCW 9? will do that part easily.

you are letting VM/CMS show thru ... normally device 009/015 was the "machine" operator's console ... i.e. 1052-7, 3215, etc.

VM/CMS would take any terminal (2741, tty, 327x, etc) and map it into the CMS virtual machine at address 009/015. It would translate "3215" commands and simulate them on whatever the real device/terminal was. It was also possible to directly drive the real-terminal hardware features (say in the case of a 3279C) ... using the real terminal address.

One of the vm/cms demos for 3279c was the red-crested(?) monkey picture (from scientific american?). 3279 developed a reputation for the "lightning" flash that occurred on the screen when a new font was being downloaded.

only slightly related
https://www.garlic.com/~lynn/2001f.html#57

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

mail with lrecl >80

From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: Thu, 21 Jun 2001 07:13:44 -0600
Subject: re: mail with lrecl >80
note that 821 & 822 have been recently replaced with 2821 & 2822 (simple mail transfer protocol & internet message format)

ref (recent RFCs)
https://www.garlic.com/~lynn/rfcietff.htm

from rfc2822 ....
2.1.1. Line Length Limits

There are two limits that this standard places on the number of characters in a line. Each line of characters MUST be no more than 998 characters, and SHOULD be no more than 78 characters, excluding the CRLF.

The 998 character limit is due to limitations in many implementations which send, receive, or store Internet Message Format messages that simply cannot handle more than 998 characters on a line. Receiving implementations would do well to handle an arbitrarily large number of characters in a line for robustness sake. However, there are so many implementations which (in compliance with the transport requirements of [RFC2821]) do not accept messages containing more than 1000 character including the CR and LF per line, it is important for implementations not to create such messages.

The more conservative 78 character recommendation is to accommodate the many implementations of user interfaces that display these messages which may truncate, or disastrously wrap, the display of more than 78 characters per line, in spite of the fact that such implementations are non-conformant to the intent of this specification (and that of [RFC2821] if they actually cause information to be lost). Again, even though this limitation is put on messages, it is incumbent upon implementations which display messages to handle an arbitrarily large number of characters in a line (certainly at least up to the 998 character limit) for the sake of robustness.
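the two limits quoted above boil down to a simple per-line check (a sketch; the function name is illustrative, and per the RFC text the lengths exclude the trailing CRLF):

```python
def check_line(line):
    """Classify one message line against RFC 2822 section 2.1.1:
    'ok', 'soft' (over the 78 recommendation), or 'hard' (over 998)."""
    body = line.rstrip("\r\n")          # both limits exclude the CRLF
    if len(body) > 998:
        return "hard"                   # MUST NOT: breaks many transports
    if len(body) > 78:
        return "soft"                   # SHOULD NOT: may wrap badly on display
    return "ok"
```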


"Russell, Don" writes:
FWIW... A common theme I come across in RFCs is "producers should not..." and "clients should tolerate..."

So, when it comes to this sort of thing, mail producers SHOULD NOT create records longer than 80 characters (or 1024 I forget the current limit) but mail readers SHOULD TOLERATE longer records.

I guess it comes down to whether you want to be CERTAIN your mail can be read by all clients.. but maybe all the clients you're concerned with are new enough that they support all this stuff properly (that is the NEWER RFCs)

I'd say, if it works.. leave well enough alone. :-)

Don Russell


commodity storage servers

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: commodity storage servers
Newsgroups: comp.arch.storage,comp.arch
Date: Thu, 21 Jun 2001 19:04:02 GMT
James Cownie writes:
Surely something like the IBM 3850 MSS (Mass Storage System) which was announced in 1974 was a virtual subsystem storage offering.

It presented a number of virtual 3330 disk drives which were staged from pairs of tape data cartridges in a robotic tape system as required, and then the data was put back again afterwards.

From the IBM Glossary :- 3850: The "honeycomb" cell structure of the IBM 3850 Mass Storage System, introduced in 1974, stored small cartridges containing spools of magnetic tape. Each spool could store 50 million characters of information, and up to 472 billion characters could be economically filed in one 3850 system for online computer use.

For some pictures, see
http://www.columbia.edu/cu/computinghistory/mss.html


some software systems from the 70s/80s were (at least)

IBM's HSM on MVS platform

LANL on MVS platform that was productised by general atomics as DataTree

LLNL that was initially productised by general atomics on unix platform as UniTree (i.e. uni ... for unix)

NCAR on MVS platform, which for awhile had a spin-off, Mesa Archival, productising it.

All of the above to some degree or another contributed to various aspects of the IEEE MSS standard.

LANL also drove the HiPPI standards as sort-of a commodity "cray channel" (while LLNL drove FCS standards as a fiber version of a copper non-blocking switch they had from Ancor).

To some extent LANL, LLNL, & NCAR all drove 3rd party transfer aspects of HiPPI standard (i.e. software would manage control as to what was on disk where .... and provide permissions for "client" supercomputers & others to directly transfer data from large disk arrays via HiPPI switch).

random refs:
https://www.garlic.com/~lynn/94.html#16 Dual-ported disks?
https://www.garlic.com/~lynn/94.html#17 Dual-ported disks?
https://www.garlic.com/~lynn/94.html#54 How Do the Old Mainframes
https://www.garlic.com/~lynn/95.html#13 SSA
https://www.garlic.com/~lynn/98.html#58 Reliability and SMPs
https://www.garlic.com/~lynn/2001.html#21 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001.html#22 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001b.html#85 what makes a cpu fast
https://www.garlic.com/~lynn/2001d.html#63 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001d.html#65 Pentium 4 Prefetch engine?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM mainframe reference online?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM mainframe reference online?
Newsgroups: comp.arch
Date: Sat, 23 Jun 2001 06:07:56 GMT
forkazoo2@aol.com (Will R) writes:
Anyhow, thank you for your time and bandwidth, feel free to follow up here, or email me privately, as you prefer. It doesn't make much difference to me. Again, thank you.

some random refs:
https://www.garlic.com/~lynn/subpubkey.html#mainframe
https://www.garlic.com/~lynn/subindx2.html#mainframe
https://www.garlic.com/~lynn/subtopic.html#wsclock
https://www.garlic.com/~lynn/submain.html#360pcm
https://www.garlic.com/~lynn/subtopic.html#smp
https://www.garlic.com/~lynn/subtopic.html#disk
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/2001f.html#0
https://www.garlic.com/~lynn/2001f.html#2
https://www.garlic.com/~lynn/2001f.html#22
https://www.garlic.com/~lynn/2001f.html#23
https://www.garlic.com/~lynn/2001f.html#33
https://www.garlic.com/~lynn/2001f.html#41

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Q: Merced a flop or not?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Q: Merced a flop or not?
Newsgroups: comp.arch
Date: Sat, 23 Jun 2001 14:35:12 GMT
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Oh, yes, indeed. But my point is that there were known, better and earlier approaches. Terje Mathisen describes exactly how our systems people made it possible to support 100 active users on an IBM 370/165 using MVT and TSO. And there were lots of designs of how that approach could be extended to GUI interfaces.

slightly related thead involving 360/67, 168s, 3081s, etc
https://www.garlic.com/~lynn/2001f.html#47
https://www.garlic.com/~lynn/2001f.html#48
https://www.garlic.com/~lynn/2001f.html#49

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Test and Set (TS) vs Compare and Swap (CS)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Test and Set (TS) vs Compare and Swap (CS)
Newsgroups: comp.lang.asm370
Date: Sat, 23 Jun 2001 16:40:22 GMT
Randy Hudson writes:
In theory, if setting the value back to "cleared" isn't atomic, the waiting process could detect the semaphore's availability and proceed to acquire it, and have the original process re-clear it as part of the non-atomic clearing process.

For example, suppose the semaphore word were the last word of a page, and the clearing process was clearing it with an MVC that was 5 bytes long, also clearing the first byte of the next page. A page fault would interrupt the MVC, but since the MVC is not specified as atomic, the zeroed value might be available long enough for the waiting process to believe it had acquired the semaphore. When the second page becomes available, the MVC would re-run, and re-clear that semaphore, leaving the critical code unprotected.

I'm not sure if the S/390 architecture would protect against this particular scenario, but in theory, the clearing needs to be done in an atomic fashion. On S/390, relatively few memory-update instructions are guaranteed to be atomic (though many are, in practice), so using CS is safest.


two possible conventions for CS use

1) updating counters, pointers, etc. ... which can only be done by atomic operations

2) locking conventions that are typically zero/non-zero

The issue here is can a non-atomic instruction be used to clear a "lock" to zero?

The issue would be a spinning atomic instruction see the results of a storage update by a non-atomic instruction, believe the lock to be "free" (because it has been set to zero), change it to non-zero, and then have the non-atomic instruction wipe out the recently set non-zero value.

Note that MVC is not a partially executable instruction (unlike MVCL, at least in all the 360 & 370 hardware manuals that I dealt with) ... it supposedly has to either completely execute or not execute (i.e. if the ending address would cause a page fault then that has to be pretested before starting the instruction).

Now, a normal MVC shouldn't be able to cause the problem referenced (i.e. it couldn't set the word to zero, have an atomic instruction spinning on "zero" change the value to non-zero, and then the MVC rerun and wipe out the value set by the atomic CS instruction) ... however, the general instruction retry facility didn't preclude such events (at least in the 370 hardware manuals that I dealt with) ... aka because of various fault conditions leading to instruction retry, any of the non-atomic instructions could result in a storage location being changed multiple times (and other processor operations wouldn't necessarily be locked out).

So, at least at the time of the introduction of CS on 370 ... non-atomic instructions were defined to not result in the effect described ... but (at least) instruction retry of non-atomic instructions could result in the condition.

Now, the 370/125 when initially shipped had a bug that was just the opposite (of the mvc crossing a page boundary). The MVCL microcode did the ending address pretest (like done for a MVC) and didn't initiate the instruction at all if the ending-address check failed (i.e. the long instructions were supposed to incrementally execute a byte at a time as well as be interruptable). The 370/125 MVCL microcode had to be retrofitted in the field to correctly correspond to the definition of MVCL.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Test and Set (TS) vs Compare and Swap (CS)

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Test and Set (TS) vs Compare and Swap (CS)
Newsgroups: comp.lang.asm370
Date: Sat, 23 Jun 2001 16:52:50 GMT
Anne & Lynn Wheeler writes:
So, at least at the time of the introduction of CS on 370 ... non-atomic instructions were defined to not result in the effect described ... but (at least) instruction retry of non-atomic instructions could result in the condition.

modulo the scenario like MVCL where most of the non-zero value was already zero except for some leading bytes/bits ... the leading bits became zero (via something like MVCL), which satisfied the CS requirements, the value was replaced, and then the instruction proceeded to zero the remaining bits/bytes (wiping out a portion of the value set by CS).

To the extent the CS convention had at least 1) a non-zero bit in the low-order bit position and 2) checked for zero, it shouldn't be a problem (except under the instruction retry scenario). It could be a problem in my original scenario of using an address (rather than a specific non-zero bit pattern) ... which could randomly have any number of zero bits in the low-order byte positions.
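the hazard can be illustrated with a small python model (a sketch under stated assumptions: a big-endian 4-byte word cleared left to right, one byte at a time, MVCL-style; the function name is illustrative). an address with zero low-order bytes can transiently read as all-zero before the clear completes, while a value with the low-order bit forced on stays non-zero until the final byte clears:

```python
def partial_clears(value):
    """All intermediate word values observed as a big-endian 4-byte word
    is cleared one byte at a time, left to right (leading bytes first)."""
    b = list(value.to_bytes(4, "big"))
    states = []
    for i in range(4):
        b[i] = 0
        states.append(int.from_bytes(bytes(b), "big"))
    return states

# a "lock" value that is an address with a zero low-order byte can look
# free (all-zero) partway through the clear ...
looks_free = 0 in partial_clears(0x00012000)[:-1]
# ... while the same address with the low-order bit forced on (tcb | 1)
# never reads zero until the clear has fully completed.
safe = 0 not in partial_clears(0x00012001)[:-1]
```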

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

commodity storage servers

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: commodity storage servers
Newsgroups: comp.arch.storage,comp.arch
Date: Sun, 24 Jun 2001 19:44:42 GMT
"Stephen Fuld" writes:
I've been trying to respond to your original request as to why "something like this" has not been done before, but I am having trouble doing that because your definition of "this" keeps changing. At first I thought you were talking about a multi-disk "brick" file server where you would have lots of them and that they would cooperate across a network for redundancy, directory issues, etc. Then, in another post you talked about the NASD effort. Now you are talking about a volume/block server. It is hard to respond as to why "this" hasn't been successful as the reasons are different for each of the different "this"es. (I'm trying to be responsive and I'm not upset or anything, but you need to have a more definite idea of what you want to do before you can expect to get such market oriented information.)

I think we may have some different ideas of where the "market" for such a system is, and that also affects why it hasn't been successful. For example, a very different kind and level of effort is required if the market is people like the national labs versus the Fortune 2000 versus mid-sized businesses, etc.

I am sure you can tell by the quantity (if not quality) of my responses that this is an area I am interested in, but it is hard to provide specific responses with such vague proposals.


in the late '80s there was a project involving cambridge science center and LA science center (along with some involvement of JPL/CIT) called datacube (sort of a take-off on ncube ... but for data).

The project was canceled ... but some CIT people got the rights and attempted to take it to market as something called ReDI. In the early '90s they had done some amount of work in the video-on-demand market ... which never appeared to come to fruition.

quick overview (from '91):
ReDI can be configured with RAID-n and/or mirroring

- same machine with multiple disks (mirror or RAID)
- two machines with multiple non-shared disks (mirror or RAID), mirror optionally at remote site
- multiple machines with multiple shared disks (mirror or RAID)
- multiple machines with multiple non-shared disks (RAID), RAID distributed across multiple locations


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Simulation Question

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Simulation Question
Newsgroups: comp.arch
Date: Sun, 24 Jun 2001 20:07:21 GMT
"Peter Gray" writes:
Hello. I was wondering if someone could tell me if such a thing as a holistic computer system simulator exists. By 'holistic' I mean software which simulates all aspects of a computer system - all major hardware components; all major software components (including Operating System and Instruction Set functionality), as well as a simulated working environment (i.e. user processes etc). Ideally, the simulator should be configurable to allow the simulation of many different types of architecture and sub-components, as well as various scheduling algorithms and even experimental Instruction Sets etc. My investigation has led me to such things as SimOS and Simics, but nothing designed using a configurable holistic approach.

In all fairness, the holistic computer system simulator would be a major project and I'm beginning to doubt that previous research along these lines has ever been performed. However, I will try to find a group or organization who may have - at some stage - considered it. Failing that, I will simply embark upon the project alone.

Any help or assistance anyone could offer with my search/research would be greatly appreciated. Regards, Peter Gray


during the early '70s, cambridge science center did a lot of work on computer performance, performance analysis, and performance modeling (some of it evolving into the early work in capacity planning).

one of the efforts was an analytical simulation written in APL that used a huge amount of calibration data (in some cases years of performance monitoring across large numbers of different kinds of systems and workloads).

This got deployed on the HONE system as a performance configurator that allowed salesmen and customers to ask what-if questions regarding changing hardware & workload configurations ... i.e. memory, processor, disks, I/O, etc ... and it saw widespread deployment around the world.

Recently I ran into a small company that acquired rights to a descendant of this application around '90; they did an APL->C conversion and have a relatively successful consulting business in Europe and the US analysing all sorts of characteristics across a wide range of different vendor hardware platforms.

... however, while it goes down to the level of things like cache design, memory bus operation, physical I/O operation, disk characteristics, etc ... it doesn't actually deal with individual instruction characteristics.

random refs
https://www.garlic.com/~lynn/subtopic.html#hone

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Test and Set (TS) vs Compare and Swap (CS)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Test and Set (TS) vs Compare and Swap (CS)
Newsgroups: comp.lang.asm370
Date: Mon, 25 Jun 2001 15:53:49 GMT
bmanry@us.oracle.com (Bill Manry) writes:
In the proposed design only the holder of a held lock can update it, so the only benefit of using CS to clear it is that CS includes the so-called checkpoint serialization function, the need for which is doubtful in this situation. (If desired, it can be done separately as a 'BCR 15,0' instruction.) S/390 operand consistency rules mean either a ST or a 4-byte MVC of zeroes is sufficiently atomic for this purpose. Note that the lockword must be aligned because it is the target of a CS when the lock is being acquired.

Besides, if you use CS to clear it, there is the nagging little problem of whether (and what) to code for the following conditional branch. ;-)

I'm not suggesting that the proposed spin lockword is a good design, by the way. Just commenting on the suggestion that CS was required for clearing the lock...it isn't.


In the 370 time-frame there was an issue of instruction retry (for instructions other than CS) regarding atomic operation (aka including cross-cache invalidation signals and serialization protocols to the memory location). I don't know how s/390 instruction retry handles atomic consistency (i.e. if it attempts to zero something, gets an error, but before the error is handled, a CS operation believes the location is zero and updates it, and then instruction retry zeros it again).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Test and Set (TS) vs Compare and Swap (CS)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Test and Set (TS) vs Compare and Swap (CS)
Newsgroups: comp.lang.asm370
Date: Mon, 25 Jun 2001 18:57:41 GMT
Wild Bill writes:
Yeah! that's like a "smart bomb" issuing a DET (detonate) instruction and falling through to the next...

the original suggestion is that it would be a possible mechanism for recognizing failures/corruption. there is always the problem that if you are checking for some failure mode ... what to do about it if you stumble into it (as opposed to going innocently along and letting it bite at some random later time in some random manner).

the trivial case is to have an all-ones lock-word and check that it is still all ones before releasing. the more complex case ... as per the original message ... is to make the lock-word something like the TCB-address and check that the task/thread releasing the lock is the task/thread that actually owns the lock ... aka checking for zeros or ones just means catching the situation where the lock was released incorrectly but not yet re-obtained; it won't recognize the case where it was released incorrectly and then obtained by some other process.
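The TCB-address-in-the-lockword idea can be sketched as follows (illustrative only; small task ids stand in for TCB addresses, and `cs` models the hardware instruction):

```python
lock_word = 0   # 0 = free; else the owner's task id (stand-in for a TCB address)

def cs(old, new):
    """Models CS: atomic compare-and-swap on lock_word."""
    global lock_word
    if lock_word != old:
        return 1    # cc=1: compare failed
    lock_word = new
    return 0        # cc=0: swap done

def acquire(task_id):
    while cs(0, task_id) != 0:
        pass        # spin

def release(task_id):
    # Swap from our own task id back to zero. A failure here means
    # the releasing task doesn't actually own the lock -- either it
    # was already released (and possibly re-obtained by some other
    # task) or the lockword was corrupted. An all-zeros/all-ones
    # convention would miss the "re-obtained by another task" case.
    if cs(task_id, 0) != 0:
        raise RuntimeError("lock not owned by releasing task")

acquire(0x1234)
release(0x1234)     # correct pairing: fine
```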

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Test and Set (TS) vs Compare and Swap (CS)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Test and Set (TS) vs Compare and Swap (CS)
Newsgroups: comp.lang.asm370
Date: Tue, 26 Jun 2001 15:39:26 GMT
Anne & Lynn Wheeler writes:
the original suggestion is that it would be a possible mechanism for recognizing failures/corruption. there is always the problem that if you are checking for some failure mode ... what to do about it if you stumble into it (as opposed to going innocently along and letting it bite at some random later time in some random manner).

the trivial case is to have an all-ones lock-word and check that it is still all ones before releasing. the more complex case ... as per the original message ... is to make the lock-word something like the TCB-address and check that the task/thread releasing the lock is the task/thread that actually owns the lock ... aka checking for zeros or ones just means catching the situation where the lock was released incorrectly but not yet re-obtained; it won't recognize the case where it was released incorrectly and then obtained by some other process.


in part this is one of my themes about the difference between applications and services ... to turn an application into a service might require writing 4-10 times as much code as in the base application.

talk i gave at dependable computing conference
https://web.archive.org/web/20011004023230/http://www.hdcc.cs.cmu.edu/may01/schedule.html
https://www.garlic.com/~lynn/hdcctalk.zip

random refs:
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
https://www.garlic.com/~lynn/2001d.html#70
https://www.garlic.com/~lynn/2001d.html#73
https://www.garlic.com/~lynn/subtopic.html#disk

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Test and Set (TS) vs Compare and Swap (CS)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Test and Set (TS) vs Compare and Swap (CS)
Newsgroups: comp.lang.asm370
Date: Tue, 26 Jun 2001 19:07:10 GMT
gah@ugcs.caltech.edu (glen herrmannsfeldt) writes:
As I remember it, in the late 360/early 370 timeframe there was work on instruction retry. If an instruction failed, maybe due to a hardware problem, it would get retried, and possibly succeed. The wrong data may have been written to storage by the first try. This could result in failure of the WAIT/POST interlock, however that interlock worked.

As instruction retry would be a hardware problem, and WAIT/POST an OS problem, it may have taken a while to connect the two.


remember the problem is whether or not the intermediate results are visible in a multiprocessor environment ... there had to be at least one other processor looping/testing the value in WAIT/POST at the point that an intermediate value appeared, which then got corrected by instruction retry.

worse yet, the 2nd processor, seeing an incorrect intermediate value, updated the ECB, which might get overlaid/wiped out by the instruction retry.

The "or" and "and" instructions are especially bad since they have to fetch the storage location, update it, and store it back w/o atomic interlocks (any storage updates that happened to sneak in between the fetch and store would evaporate).
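The fetch/modify/store window can be walked through deterministically (an illustrative simulation of the interleaving, not the actual OI/NI microcode):

```python
# A shared flag byte; two "CPUs" each want to set one bit with a
# non-atomic read-modify-write (fetch, OR, store) -- the way an OR
# to storage behaves without an atomic interlock.
shared = 0x00

# CPU 1 fetches the byte...
cpu1_fetched = shared            # sees 0x00

# ...meanwhile CPU 2 sneaks in a complete update:
shared |= 0x01                   # CPU 2 sets bit 0

# CPU 1 now ORs its bit into its *stale* copy and stores it back;
# CPU 2's update evaporates.
shared = cpu1_fetched | 0x80

print(hex(shared))               # 0x80, not the 0x81 both updates should give
```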

Before fine-grain locking ... most code had TS spin-locks on whole applications (or even the whole kernel) ... so the probability of the problem appearing was extremely small.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

FREE X.509 Certificates

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FREE X.509 Certificates
Newsgroups: comp.security.firewalls,comp.security.ssh,comp.lang.java.security,alt.security.announce
Date: Wed, 27 Jun 2001 00:05:34 GMT
russfink@hotmail.com (Russ Fink) writes:
From the website:

'No other hassles ! "they" make you fax a drivers license, or other verifications...'

The purpose of an X.509 certificate is to bind a public/private keypair to an individual. Since digital signatures can carry authority on behalf of the user (e.g., commit a purchase, authorize a transaction), it is vitally important that the community and the user have faith in the methods of authentication used to verify the individual.


note one of the problems for consumers and the general public with x.509 identity certificates is the unnecessary proliferation of privacy information (normally contained in such a certificate).

For instance, the EU privacy directive to make retail purchases as anonymous as cash ... rules out names on plastic payment cards, names on the payment cards' magstripe, as well as identity information in electronic transactions (which might be envisioned as a use for x.509 identity certificates).

there are privacy rules and guidelines percolating at various federal and state levels which would raise similar privacy concerns in the US (implementations that unnecessarily append privacy information to every transaction).

One EU approach has been to include only something like an account number in a certificate (say in a banking application); however, it is trivial to show that such a certificate for use in financial applications is redundant and superfluous (and an otherwise unnecessary waste of bandwidth) for online financial transactions. In the EU case with relying-party-only certificates, the financial institution acts as the RA & CA, registering the public key, manufacturing a certificate, storing the original of the certificate in the account record, and then returning a copy of the certificate to the key-owner. Given an online transaction, where the certificate copy contains only an account number and the transaction references the account where the original of the certificate is stored, it is pointless for the key-owner to return a copy of the certificate on every financial transaction to the financial institution that maintains the original.

random refs:
https://www.garlic.com/~lynn/subpubkey.html#privacy
https://www.garlic.com/~lynn/subpubkey.html#radius
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

HMC . . . does anyone out there like it ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: HMC . . . does anyone out there like it ?
Newsgroups: bit.listserv.ibm-main
Date: Wed, 27 Jun 2001 03:26:43 GMT
Rick.Fochtman@BOTCC.COM (Rick Fochtman) writes:
Well, I can say something nice. 20+ years ago I worked with their communications controllers, 1270's I think, and enjoyed the H... out of it. They were programmable controllers and the programming was fairly easy. We managed to get a level of flexibility that was sadly lacking in anyone else's controllers, like dynamically setting port speed and ASCII-EBCDIC translations. We supported a wide variety of ASYNC terminals and could identify the terminal from the first user prompt.

I've been blamed as an undergraduate for starting the 360 PCM control unit business ... because the 2702 I was programming didn't do what I thought it would.

Cambridge had done a good job with CP/67 doing dynamic terminal recognition for 1052, 2740, and 2741 ... trying various sequences and then setting the 2702 SAD command as appropriate. When I was adding TTY/ascii support, I tried to extend the standard dynamic terminal recognition to TTY (i.e. have a single rotary dial number for all terminal types). It all worked when testing ... could dial in tty, 1052, 2741 and it dynamically handled everything.

That was until the CE told me that they had taken short-cuts in the 2702 implementation ... while it was possible to switch the line-scanner associated with a line using the SAD command, they had hardwired the oscillator ... so the line speed couldn't really switch.

So we decided to build our own controller out of a minicomputer that would strobe the incoming signal raise/drop and determine the initial bit speed. We also built our own wire-wrapped channel attachment card ... which took some amount of debugging (we didn't really know at the time that nobody else had ever done this outside of ibm). One of the early bugs was realizing that the ibm line-scanner convention was storing the leading bit in the low-order position, while tty/ascii had the leading bit in the high-order position (within the byte) ... aka tty/ascii data arriving in 360 memory from an ibm line-scanner was bit-reversed within each byte; our initial pass at a controller had the tty/ascii line-scanner accepting the leading bit into the high-order bit position, while the ibm ascii translate tables would handle the bit-reversal problem introduced by the ibm line-scanners.

... and voila ... we get blamed for originating the 360 pcm controller business.
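The within-byte bit reversal is easy to sketch; a 256-entry translate table (computed here, rather than the actual ibm tables) undoes it:

```python
def reverse_bits(b):
    """Reverse the bit order within one byte."""
    out = 0
    for i in range(8):
        if b & (1 << i):
            out |= 1 << (7 - i)
    return out

# A 256-entry translate table, applied to each incoming byte --
# the same effect the ibm ascii translate tables achieved.
XLATE = bytes(reverse_bits(b) for b in range(256))

# ascii 'A' is 0x41 (0100 0001); arriving bit-reversed it reads 0x82.
assert XLATE[0x82] == 0x41

line = bytes([0x82, 0x82])             # "AA" as delivered by the scanner
print(bytes(XLATE[b] for b in line))   # b'AA'
```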

random ref:
https://www.garlic.com/~lynn/submain.html#360pcm

I don't know who went off and did the memorex communication controllers ... but one of the san jose engineers that worked on the datacell went off to start the memorex pcm disk business.

MIT story about a feature in my TTY support that caused the system to crash 27 times in a single day:

http://www.multicians.org/thvv/360-67.html

turns out I was doing one-byte arithmetic (not in the controller; this is in the kernel) and MIT had made a change to support a TTY plotter with input/output far exceeding 255 ... but hadn't gone thru and changed the one-byte arithmetic.
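The one-byte arithmetic failure is plain modular wraparound; a minimal sketch (the count of 300 is illustrative, not from the MIT incident):

```python
# One-byte counters wrap at 256; a count of, say, 300 pending
# characters stored in a byte reads back as 44.
def byte_add(a, b):
    return (a + b) & 0xFF     # one-byte arithmetic

count = 0
for _ in range(300):          # plotter queues 300 characters
    count = byte_add(count, 1)

print(count)                  # 44, not 300
```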

The comparison that CP/67 could restart in a couple minutes while it took Multics an hour to restart after a crash was one reason that Multics rewrote the filesystem.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

FREE X.509 Certificates

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FREE X.509 Certificates
Newsgroups: comp.security.firewalls,comp.security.ssh,comp.lang.java.security,alt.security.announce
Date: Wed, 27 Jun 2001 14:31:40 GMT
russfink@hotmail.com (Russ Fink) writes:
Well, it may seem pointless to return a certificate to the issuer, but this is just how the protocols work - if a message is signed with a private key, the certificate usually follows the signed data around. It is up to the receiver to verify through issuer signatures and trust hierarchies whether it believes the data in the certificate to be authentic. Back to your first point, if a network architecture won't benefit from X.509 certificates, then it shouldn't use them (e.g., why use certs for a customer to log onto an ATM machine) - but, in many cases, the organization(s) running the network can realize a cost savings by using certificates in that it enables them to decentralize their authentication schemes, communicate with new entities, or transmit data on public networks.

The financial industry's electronic payment standard for all account-based transactions doesn't require certificates (not just internet, not just point-of-sale, not just credit, not just debit, but all account-based electronic transactions) ... but can rely on the public key registered in the account record (and this also eliminates the significant liability and privacy issues introduced by X.509 identity certificates).

misc refs:
http://www.x9.org/ ... US financial standards
http://www.tc68.org/ ... international/iso financial standards
http://www.iso.ch/iso/en/stdsdevelopment/tclist/TechnicalCommitteeDetailPage.TechnicalCommitteeDetail?TC=68

the standard document
https://web.archive.org/web/20011215145141/http://webstore.ansi.org/ansidocstore/product.asp?sku=DSTU+X9.59-2000

also NACHA has completed an AADS debit/ATM pilot and is looking to move to production deployment ... ref:
https://www.garlic.com/~lynn/nacharfi.htm

one of the problems that arises is having end-to-end authentication in transactions while flowing certificates over the existing financial networks. In such an environment, the typical transaction payload size is on the order of 60-100 bytes.

Appending a signature to such a transaction doesn't represent too onerous a payload bloat ... but try also appending a 4k-12k byte certificate to such a transaction, and then talk about cost savings.
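The bloat is easy to quantify from the numbers in the post (the 64-byte signature size is an assumption for illustration):

```python
payload = 80                 # typical 60-100 byte transaction
signature = 64               # illustrative digital-signature size
cert_small = 4 * 1024        # small certificate from the post
cert_large = 12 * 1024       # large certificate from the post

def bloat(extra):
    """Total wire size as a multiple of the bare payload."""
    return (payload + extra) / payload

print(f"signature only: {bloat(signature):.1f}x")
print(f"+4KB cert:      {bloat(signature + cert_small):.0f}x")
print(f"+12KB cert:     {bloat(signature + cert_large):.0f}x")
```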

Some of the techniques used with "certificate" protocols in the past have the digital signature verification being done at an internet boundary, along with signature and certificate truncation; however, that represents a severe security exposure because end-to-end authentication (and correspondingly, end-to-end security) is totally thrown out the door. Also, since these have been "internet-only" solutions, they don't address 99% of the rest of the financial transactions that go on in the world.

So one of the things is to find some trade-off that preserves end-to-end authentication (and security) without incurring the tremendous payload bloat introduced by appending these humongous certificates to each and every transaction.

This is one of the reasons that X9F was looking at certificate compression techniques for transactions in the financial & payments industry. However, their compression standards basically looked at eliminating fields from the certificate that could reasonably be expected to already be in the possession of the relying party. AADS work was able to show that, aggressively applying such techniques, all fields were in the possession of the relying party and therefore the certificate could be compressed to zero bytes.

mapping of X9.59 to ISO 8583 (payment network standard, credit, debit, atm, etc).
https://www.garlic.com/~lynn/8583flow.htm

One of the significant comparisons has been between European use of chips and electronic purses to do offline, decentralized transactions at point-of-sale, and the corresponding implementation in the US with things like online gift-cards, cash-cards, etc (in the US, you see them somewhat like phone cards ... on j-hooks at check-out stands: walmart, blockbuster, sears, kmart, also the gas cards like shell, etc).

In the US, the cost of online transactions, along with the increased level of security and integrity from being online, is significantly better than the EU deployments of the offline, decentralized model. With telco deregulation in the rest of the world, the associated dropping costs, and the transition to pervasive online connectivity with things like the internet, the trade-offs for the rest of the world are starting to be comparable to the US model.

general references to the Account Authority Digital Signature model (as opposed to the Certification Authority Digital Signature model)
https://www.garlic.com/~lynn/

discussion of the existing use of SSL server certificates:
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

discussions of client authentication and privacy:
https://www.garlic.com/~lynn/subpubkey.html#radius
https://www.garlic.com/~lynn/subpubkey.html#privacy

some related discussions on electronic commerce
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

The thread between risk management and information security
https://www.garlic.com/~lynn/aepay3.htm#riskm
https://www.garlic.com/~lynn/aepay3.htm#riskaads

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

