List of Archived Posts

2004 Newsgroup Postings (10/18 - 11/02)

RISCs too close to hardware?
Shipwrecks
Multi-processor timing issue
Shipwrecks
RISCs too close to hardware?
RISCs too close to hardware?
RISCs too close to hardware?
RISCs too close to hardware?
RISCs too close to hardware?
RISCs too close to hardware?
RISCs too close to hardware?
XML: The good, the bad, and the ugly
XML: The good, the bad, and the ugly
RISCs too close to hardware?
360 longevity, was RISCs too close to hardware?
360 longevity, was RISCs too close to hardware?
RISCs too close to hardware?
RISCs too close to hardware?
RISCs too close to hardware?
RISCs too close to hardware?
RISCs too close to hardware?
First single chip 32-bit microprocessor
Shipwrecks
Shipwrecks
Shipwrecks
Shipwrecks
PCIe as a chip-to-chip interconnect
Shipwrecks
Is Fast Path headed nowhere?
Is Fast Path headed nowhere?
First single chip 32-bit microprocessor
PCIe as a chip-to-chip interconnect
Shipwrecks
Shipwrecks
RISCs too close to hardware?
Shipwrecks
Shipwrecks (dynamic linking)
passing of iverson
RS/6000 in Sysplex Environment
RS/6000 in Sysplex Environment
RS/6000 in Sysplex Environment
Multi-processor timing issue
Longest Thread Ever
Internet turns 35 today
Shipwrecks
Shipwrecks
ARP Caching
Shipwrecks
First single chip 32-bit microprocessor
Alive and kicking?
Integer types for 128-bit addressing
CKD Disks?
CKD Disks?
Integer types for 128-bit addressing
CKD Disks?
Integer types for 128-bit addressing

RISCs too close to hardware?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 18 Oct 2004 13:56:36 -0600
"mike" writes:
This thread is heading toward confusion!

CICS is a "transaction processing system". The PDP-10 and VAX-VMS are "conversational time sharing systems".

It would be more appropriate to compare VM-CMS or MVS_TSO or OS/400 with VAX-VMS and PDP-10.

While CICS is more like a character green screen version of Apache or an application server like WebSphere.


cps was conversational programming system ... that was done by the boston programming center ... and ran on os/360. they had also done special microcode assist for the 360/50 that significantly improved cps performance. recent cps related posting
https://www.garlic.com/~lynn/2004m.html#54

apl\360 ... well, was apl\360 ... random apl posts along with some number of hone posts
https://www.garlic.com/~lynn/subtopic.html#hone

hone was a major internal cms\apl based timesharing service that supported all the field, sales, and marketing people worldwide.

there were a lot of subsystem/monitors that ran on os/360 ... providing their own contained environment, terminal support, tasking, scheduling, allocation, swapping, etc. while some of the commands differed between cics and say cps ... their system implementation details were remarkably similar.

there was vmpc ... which was done for vs1 ... it was originally going to be called pco (personal computing option) ... but they ran into an acronym conflict with a political party in europe. pco was supposedly going to be a cms-killer (as opposed to an enhanced crje like tso).

cp67 & cms was done at the science center, 4th floor, 545 tech sq in the mid-60s
https://www.garlic.com/~lynn/subtopic.html#545tech

some of the people from ctss had gone to the 5th floor to work on multics, and others went to the 4th floor and the science center. the boston programming center (and cps) was on the 3rd floor (until the group was absorbed by the rapidly expanding vm/cms group ... after cp67 had morphed into vm370).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Shipwrecks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Mon, 18 Oct 2004 14:24:55 -0600
Tom Van Vleck writes:
Perhaps they're not paying enough. (I guess that's just the definition of "enough.") If they doubled the salary, would they find applicants? Tripled?

there were similar tales from some gov. labs. in the late 80s and early 90s ... they had reqs. out for mainframe support people that had been open for a year that they couldn't fill. they believed they were in competition with banks, insurance companies, large commercial institutions, etc. there were some places telling their staff that they were converting to non-mainframe systems ... not because they were necessarily better, but because they couldn't attract the staff to support the systems. one gov. installation supposedly had a ceremony retiring their mainframe system on the day their last mainframe support person retired.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Multi-processor timing issue

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multi-processor timing issue
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 18 Oct 2004 15:02:51 -0600
Terje Mathisen writes:
What's wrong with simply dividing by 4,096,000,000 using unsigned division? (If unsigned div isn't available, then it's a good idea to do a shift first!)

i was a brash young kid when i was doing the resource manager and dmkstp .... and i guess i wanted to show off a little by doing better than what the hardware offered directly.

with standard 360/370 hardware you can multiply two 31bit signed integers ... getting a double word, 63bit signed integer. so i wanted to multiply a double word by a 31bit unsigned integer ... coming up with a 95bit unsigned integer ... and divide a 95bit unsigned integer by a 31bit unsigned integer resulting in a 63bit unsigned integer. as a result, there was some relatively convoluted code in dmkstp implementing multiplying a double word by a full word (resulting in a triple word) ... and dividing a triple word by a full word (resulting in a double word).
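
for illustration, here's a rough sketch (in python rather than the original dmkstp assembler; the function names and the unsigned 32-bit "limb" framing are my own assumptions) of the same general technique ... building a doubleword x fullword multiply (tripleword result) and a tripleword / fullword divide (doubleword result) out of operations the hardware can do on 32-bit pieces:

MASK = 0xFFFFFFFF   # one 32-bit "limb"

def mul64x32(hi, lo, m):
    # multiply a 64-bit value (two 32-bit limbs) by a 32-bit value,
    # producing a 96-bit product as three limbs (most..least significant)
    t0 = lo * m                   # low partial product (fits in 64 bits)
    t1 = hi * m + (t0 >> 32)      # high partial product plus carry
    return [(t1 >> 32) & MASK, t1 & MASK, t0 & MASK]

def div96x32(limbs, d):
    # schoolbook long division of a 96-bit value by a 32-bit divisor,
    # one limb at a time; each step is a 64-by-32 divide
    rem, quot = 0, []
    for limb in limbs:
        cur = (rem << 32) | limb  # bring down the next limb
        quot.append(cur // d)
        rem = cur % d
    return quot, rem

prod = mul64x32(0x12345678, 0x9ABCDEF0, 1000000)
q, r = div96x32(prod, 4096)
print([hex(x) for x in q], hex(r))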

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Shipwrecks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Mon, 18 Oct 2004 15:19:18 -0600
Tom Van Vleck writes:
As far as "marketing," IBM committed and un-committed to TSS several times, in ways that hurt its chances of success; they pushed non-g-p systems like TSO and CRJE as add-ons to batch systems; and they starved, dumped on, and ignored the CP-67 line of development until the customers forced them to support it.

a little clarification (for those that haven't yet read melinda's history) ... standard 360 didn't have virtual memory hardware.

the science center first did cp/40 on a 360/40 with custom hardware implementing virtual memory. eventually ibm shipped the 360/67 ... which was very close to being a 360/65 with virtual memory hardware added in. the custom 360/40 virtual memory hardware and what was shipped in 360/67 had numerous differences.

there was an official operating system under development for the 360/67, tss/360 (time-sharing system, which was unrelated to the port of cp/40 to 360/67 as cp/67; also unrelated to the time-sharing option, aka tso, done for mvt).

tss/360 had some significant birthing issues .... a mix-mode fortran edit, compile and execute benchmark with four 2741 terminal users ... had multi-second trivial response ... while cp/cms "release one" running essentially the identical workload on the identical hardware with 30 users had subsecond trivial response (and i've claimed that i extensively rewrote major portions of this cp kernel as an undergraduate and got it up to 75-80 users).

tss/360 was announced, unannounced, decommitted, etc. It did manage to survive with a limited number of customers as tss/370 on 370 virtual memory machines. There is some possibility that the largest tss/370 deployment was inside at&t ... where a unix environment was interfaced to low-level tss/370 kernel (supervisor) calls.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

RISCs too close to hardware?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 18 Oct 2004 15:39:25 -0600
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
VM/CMS was written precisely because MVS/TSO was so ghastly - but the Wheelers know a thousand times more about that than I do. There were several MVS sub-systems that were designed for interactive use, most of which came out of academia, such as MTS (Michigan), GUTS (Gothenburg) and Phoenix (Cambridge). The last was the one most designed for remote use as, by the time we got an IBM, Cambridge was ALREADY a remote access site.

cp/67 first made it into customer (360/67) sites because tss/360 was so ghastly ... predating mvt/tso.

i was an undergraduate at a university that was one of the tss/360 sites and had an installed 360/67. however, tss/360 was having a hard time coming to fruition ... so the machine ran in 360/65 (real memory) mode most of the time with os/360.

Cambridge had finished the port of cp/40 from the custom modified 360/40 (with custom virtual memory hardware) to the 360/67 as cp/67. It was running at the science center and then was installed on the 360/67 out at lincoln labs. The last week in jan, 1968, three people from the science center came out to the university and installed cp/67 (the univ. had somewhat gotten tired of waiting for tss/360 to come to reasonable fruition). I did a lot of performance and feature work on cp/67 and cms as an undergraduate ... including adding tty/ascii terminal support. In 69, i did a modification to HASP ... adding 2741 & tty terminal support ... as well as implementing cms editor syntax for a conversational remote job entry function ... on an MVT release 18 base. I think TSO finally showed up in the MVT release 20.something period ... and I thought that the terminal CRJE hack that I had done on the HASP base was better than the TSO offering.

In addition to the cp/67 alternative to tss/360 ... UofMich also did MTS (michigan terminal system) for the 360/67 (the 360/67 was the only 360 model with virtual memory hardware support).

370s initially came out with no virtual memory hardware support ... but eventually virtual memory (and virtual memory operating systems) were announced for all 370 models. tss/360 (as tss/370), cp/67 (as vm/370), and MTS were all ported to virtual memory 370.

we have two different cambridges involved here. lots of (cambridge) science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

Melinda's history is also a good source for a lot of this
http://www.leeandmelindavarian.com/Melinda#VMHist

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

RISCs too close to hardware?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 18 Oct 2004 15:50:44 -0600
"Tom Linden" writes:
Gothenburg Univ Timesharing System was mostly written in PL/I and was used as late as 1994 (?) commercially b Information Resources Inc. in Chicago. I believe they wrote their own TP system and Database and at that time collected sales data from 4000 supermarkets across the US

tss/360 was supposed to have been a time-sharing system ... while TSO ... although a mnemonic for time-sharing option ... was really a conversational or online option ... as opposed to real time-sharing.

a lot of the conversational/online systems (whether or not they were time-sharing) that were built on the os/360 platform tended to have their own subsystem infrastructures ... in many cases having substitute features/functions for standard os/360 facilities.

one of the issues for the cics online system was that standard os/360 scheduling and file open/close facilities were extremely heavy weight ... not suitable for online/conversational activities. cics would do its own subsystem tasking/scheduling. cics also tended to do (os/360) operating system file opens at startup and keep them open for the duration of cics (with conversational tasks doing internal cics file open/closes). In addition to (people) terminals, CICS systems were also used to drive a lot of other kinds of terminals: banking terminals, ATM machines, cable TV head-end & settop boxes, etc.

The other thing that falls into this category is now called TPF (transaction processing facility) ... which is a totally independent system. It started out life as its own operating system ... and it was called ACP (airline control program) before the name change to TPF. As ACP it drove many of the largest online airline related systems. It somewhat got its name change as other industries picked it up for various operational uses (other parts of the travel industry, some of the financial transaction oriented systems, etc).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

RISCs too close to hardware?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 18 Oct 2004 15:53:07 -0600
Tom Van Vleck writes:
Yup, also there was BRUIN, at Brown University. I connected to that once on a guest account, and couldn't get out. LOGOUT -- no. LOGOFF -- no. QUIT -- no. BYE -- no. END -- no. GOODBYE -- no. ADIOS -- no. Tried a bunch more, no luck. Finally asked somebody. CANCEL.

the original (cp/67) cms had a "BRUIN" command that had been ported to CMS ... i.e. somewhat like the port of apl\360 to cms\apl ... removing all the multi-tasking and system infrastructure features ... leaving just the user command interface stuff.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

RISCs too close to hardware?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RISCs too close to hardware?
Newsgroups: alt.folklore.computers
Date: Mon, 18 Oct 2004 18:20:26 -0600
Peter Flass writes:
I've worked a fair amount with both, CMS first and VMS later. If we're talking ease-of-use I'd vote for VMS. Of course CMS is also lots older.

the cp/67 development group was spun off the science center (about 12-15 people) ... moved down to the 3rd floor, absorbing the boston programming center. it morphed into vm/370 (or vm/cms), outgrew the 3rd floor and moved out to the old sbc bldg. in burlington mall.

in the mid-70s the vm/cms group were told that the company was stopping product work on vm, closing the burlington mall location and everybody was to move to pok to work on the vmtool (an internal only, virtual machine based system supporting mvs/xa development).

several people didn't agree and left ... going to work (among other places) for dec on vms.

customers complained quite a bit ... and a vm/370 development group was instituted in endicott ... while several of the people that went to POK campaigned behind the scenes for releasing the internal-only vmtool as vm/xa.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

RISCs too close to hardware?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 18 Oct 2004 20:11:58 -0600
"del cecchi" writes:
You should have seen the cool graphics a guy in Rochester got out of a 3279, wonderful color waveforms from the circuit simulator. Some sort of trick with downloading character sets that were really little chunks of the picture or something like that. I thought I had gone to heaven after years of looking at waveforms plotted on a line printer.

slightly related ... post about multi-user, distributed, space-war game:
https://www.garlic.com/~lynn/2004m.html#20 Whatever happened to IBM's VM PC software?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

RISCs too close to hardware?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 18 Oct 2004 21:50:04 -0600
"Stephen Fuld" writes:
I believe the Unix "sort of" equivalent to CICS is/was systems like Tuxedo. And while CICS was originally a "greem screen" application, there is now software that allows taking advantage of the processing power and high bandwidth to the screen of a PC. Things like field editing can be moved to the PC.

tuxedo was a transaction monitor .... while cics was a transaction processing subsystem .... i've got a half dozen tuxedo books down in boxes someplace. i believe tuxedo was spun off to bea(?).

there was also camelot ... out of cmu ... along with mach, andrew widgets, andrew filesystem, etc. IBM had pumped something like $50m into CMU for these projects about the same time that IBM & DEC each funded Project Athena at MIT to the tune of something like $25m each.

some of this was spun out of cmu as Transarc (i believe also heavily funded by ibm ... and then bought outright by ibm).

cics was much more like transaction processing in any of the rdbms systems (loading transaction code, scheduling transaction code, actually dispatching the code for execution) ... except it started out interfacing to bdam files (as opposed to having a full blown dbms).

the cics beta test at the university was on an mvt system on a 360 machine ... predating screens ... using 2741s and 1050s.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

RISCs too close to hardware?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 18 Oct 2004 22:57:13 -0600
Brian Inglis writes:
And dual 4361s ended up as the service processors for the 309X? series, providing LPAR functionality.

the service processor for the 3090 started out being a single 4331 ... effectively an upgrade from the uc.5 service processor in the 308x.

it had a stabilized vm/370 & cms release 6 system with a number of custom modifications ... like being able to "read" a bunch of service ports in the 3090. the menu screens that had been custom stuff in the uc.5 ... became ios3270 screens for the 3090.

the 4331 morphed into a 4361 and then dual-4361s ... for availability.

part of the issue was a long standing requirement that a field engineer in the field could bootstrap hardware problem diagnosis ... starting with very few diagnostic facilities like a scope. starting with the 3081 ... the machine was no longer scope'able. so a machine that was scope'able (a service processor) was put in ... that had a whole lot of diagnostic interfaces to everywhere in the machine. the field engineer could bootstrap diagnostics of the service processor ... and then use the service processor to diagnose the real machine. somewhere along the way ... it was decided to replicate the 4361 ... so as to take a failed 4361 out of the critical path for diagnosing the 3090.

since the vm/370 release 6 would be in use long past its product lifetime, the engineering group had to put together its own software support team. I contributed some amount of stuff for this custom system ... including a problem analysis and diagnostic tool for analysing and diagnosing vm/370 software problems. random dumprx postings:
https://www.garlic.com/~lynn/submain.html#dumprx

The virtual machine microcode assist (sie) was enhanced to do logical partitioning (LPARs) of the hardware (w/o needing a vm kernel) called PR/SM (processor resource/systems manager). the service processor was used to setup PR/SM configuration ... but didn't actually execute PR/SM functions.

this is standard 4361
http://www-1.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP4361.html

and standard 3090 (which had a pair of 4361s packaged inside) ...
http://www-1.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html

the 3090 also offered a vector processing option. the 3090 came with "extended storage" ... it was electronic memory for paging with a wide, high-speed bus ... and was accessed by (processor) synchronous page move instructions (the theory was that the latency was too long for normal memory ... but with a wide-enuf bus ... a 4k move could go pretty fast). When HiPPI support was added to the 3090 ... the standard I/O interface wasn't fast enuf ... so a special interface was cut into the expanded storage bus to provide HiPPI support.

article from the palo alto science center on fortran support for 3090 vector facility
http://domino.research.ibm.com/tchjr/journalindex.nsf/4ac37cf0bdc4dd6a85256547004d47e1/1383665bc8da3f1c85256bfa0067f655?OpenDocument

article out of BNL about 3090 mentioning vector facility and extended storage.
http://www.ccd.bnl.gov/LINK.bnl/1996/May96/histcom5.html

it has a line about "IBM sites", such as SLAC, FERMILAB, and CERN ... for a time, I was involved in monthly meetings at SLAC and there was lots of application and software sharing between these sister lab "IBM sites". it also mentions some of the issues around the eventual migration from the 3090 to computational intensive risc workstations (of course the hot new thing is the GRID; i happened to give a talk at a GRID conference this summer).

for some topic drift, one could trace the invention of GML at the cambridge science center, its integration into the CMS document formatter "SCRIPT", its wide deployment and standardization as SGML ... and the eventual morphing at CERN into HTML. SLAC then has the distinction of putting up the first web server in the US (on its vm/cms system). a couple recent posts on the subject
https://www.garlic.com/~lynn/2004l.html#72
earlier post about slac's original web pages
https://www.garlic.com/~lynn/2004d.html#53
lots of random gml/sgml posts:
https://www.garlic.com/~lynn/submain.html#sgml

the bnl.gov article also mentions installing sql/ds ... which was the tech transfer of the original rdbms effort, system/r, from sjr to endicott. random posts on system/r with some mention of the sql/ds tech transfer
https://www.garlic.com/~lynn/submain.html#systemr

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

XML: The good, the bad, and the ugly

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: XML: The good, the bad, and the ugly
Newsgroups: comp.databases.theory
Date: Tue, 19 Oct 2004 07:50:57 -0600
"Laconic2" writes:
The universal file format is about data communication. It's a damned good idea. Literally! If you look back in Genesis 11, you'll see that the lack of of a universal language was what condemned the execution of what probably was a pretty good architectural plan.

(self-describing) universal file format is about making it usable by different programs (other than the one that created the file) ... it is analogous to the dbms concept ... which also includes the concept of making it usable *concurrently* by different programs.

data communication implies usable by different programs at different locations ... however not having data communication doesn't preclude having different programs at the same location.

(self-describing) universal file format is also helpful in time as well as space ... i.e. it would have been helpful in the y2k remediation efforts ... explicitly tagging years ... as opposed to plowing thru (30+ year old) source (that might not still exist) to try and guess what things might be (one or) two digit years.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

XML: The good, the bad, and the ugly

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: XML: The good, the bad, and the ugly
Newsgroups: comp.databases.theory
Date: Tue, 19 Oct 2004 08:58:24 -0600
"Laconic2" writes:
Maybe I'm misusing the term "communication". But I think of communication as transferring data (or information) between one "locus" and another. That could mean moving it from one continent to another, or from one chip on a board to another chip on the same board, or from one gate in a chip to another gate in the same chip. It could also mean moving it from one person to another.

I also think of two programs, one that writes a file and one that reads a file, to have "moved" the data from one "locus" to another. I even think of the messages that fly around inside an object oriented system as "communication" between the objects.

It's not my purpose to use standard terms in a non-standard way. So, if there really is a standard meaning for the term "communication" that precludes the above usage, then I'm in the market for another term. But so far, I haven't found the other term, or a definitive rule that says I shouldn't use "communication" this way.


if it is communicate ... as in computer communication ... then it tends to be moving data around.

if it is communicate as in convey information .... then it is back to the original invention/characteristic of gml about self-describing information ... which could be used for determining format presentation. there were lots of document formatters ... but they tended to mark up the document with explicit formatting information; part of the genius of gml ... was that it marked up the document with information about the document elements ... and allowed the formatting rules for those elements to be independent of the tagging of the document elements. it somewhat opened the way for being able to use the document markup information for things other than document presentation/formatting.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

RISCs too close to hardware?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 19 Oct 2004 09:56:14 -0600
glen herrmannsfeldt writes:
I always wanted to know the difference between CRBE and CRJE.

the code in HASP that drove 2780 bisynch was called Remote Job Entry ... there literally were decks of cards ... and they were frequently referred to as a job deck (of cards).

The CRJE stuff i did in hasp involved deleting the 2780 code and replacing it with 2741 & TTY terminal support ... and slipping in an editor that supported the CMS edit syntax.

doing a search engine query on remote job entry turns up "about 17,500" entries; remote batch entry turns up "about 467" entries.

some of the remote batch entry entries say something about remote submission of batches of data ... so conversational remote batch entry might possibly have some slight semantic conflict between conversational and batch.

specifying both 2780 and rje to search engine turns up "about 989" entries (down from "17,500" for just rje).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

360 longevity, was RISCs too close to hardware?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 longevity, was RISCs too close to hardware?
Newsgroups: comp.arch
Date: Tue, 19 Oct 2004 11:18:29 -0600
"Stephen Fuld" writes:
Well, the 4300 series was at the low end of the line, so lots of the channel functiopns were preformed by the same engines. But the higher end systems of that era had separate channel engines.

there was this interesting problem of how fast various channels could react ... more about latency as opposed to raw, flat-out bandwidth.

vm formatted ckd disks in a pseudo fixed-block architecture ... actually from its start in the mid-60s. on 3330 disks there was this interesting problem of having a request for a record on one track ... and also a queued request for a "logical" sequential record on a different track (on the same cylinder). the trick was for the channel (and rest of the infrastructure) to execute the switch track command(s) and pick up the next record in a single revolution (w/o the start of the record having rotated past the heads before the start of the data transfer operation ... resulting in an extra full revolution).
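
as a back-of-the-envelope illustration of the penalty (assuming the usual 3600 rpm rotation rate for drives of that era -- the number is my assumption, not from the post), missing the track-switch window costs an entire extra revolution on top of the normal rotational delay and transfer:

rpm = 3600                    # assumed rotation rate
rev_ms = 60000.0 / rpm        # one revolution in milliseconds
print("one revolution: %.1f ms" % rev_ms)                         # ~16.7 ms
print("missed switch window: +%.1f ms for that record" % rev_ms)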

the 168 outboard channels, the 148 channels, and the 4341 channels all did this much better & more consistently than the 158 channels. the 158 had integrated channels, where the processor engine was time-shared between the 370 microcode function and the channel microcode function.

moving to the 303x machines ... they took a 158 engine ... stripped away the 370 microcode (leaving just the channel microcode) and called it a channel director. all of the 303x processors used channel directors for their channels. all of the 303x processors had the channel command latency characteristics of the 158 ... while the 4341 had better channel command latency characteristics.

the 4341 was about a one mip machine that you could get with 16mbytes of memory and six channels. the 3033 was a 4.5 mip machine that you could get with 16mbytes of memory and sixteen channels. however six fully configured 4341s were in about the same price range as a 3033 (meaning you could get an aggregate of 6mips, 96mbytes of memory, and 36 channels). there was some internal tension at the time about clusters of 4341 being extremely competitive with the high-end product.

now, if you are talking about the 4331 ... that was a much slower machine.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

360 longevity, was RISCs too close to hardware?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 longevity, was RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 19 Oct 2004 14:09:17 -0600
re:
https://www.garlic.com/~lynn/2004n.html#14 360 longevity

and for a little more drift ... the 3090 had sort of the opposite problem with i/o command latency/overhead processing.

i had been making these comments about how the relative system performance of disks had declined by a factor of 10 between 360 and 3081.
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door

the disk division didn't like what i was saying and assigned their performance group to refute the statements. after spending something like 3 months looking at the issues ... they eventually concluded that i had slightly understated the severity of the problem. this then turned into a share presentation on configuring disks to improve system thruput.

i had also been wandering around the disk engineering and product test labs in bldg. 14 & bldg. 15. they had these "test cells" that contained development hardware ... that got "stand-alone" test time connected to some processor for testing. at the time, if they tried connecting development hardware (a single testcell) to a machine running a standard MVS operating system, the claim was that the system MTBF was 15 minutes .... so they had a number of processors that were serially scheduled for dedicated (stand-alone) development hardware testing.

i took this as somewhat of a challenge to write a bullet proof operating system i/o subsystem that would never crash &/or hang the system. this was eventually deployed across the disk development and product test processors in bldg. 14 & 15 .... and even eventually migrated to some of the other disk division plant sites. they would typically be able to run possibly a half dozen test cells concurrently on a processor w/o crashing and/or hanging.

bldg. 15, the product test lab ... had a 3033 for testing with new disk products and added a 3830 controller with 16 3330 disk drives for engineering timesharing use ... on a machine that had previously been dedicated to stand-alone disk hardware testing (note that testcell operation was fairly i/o intensive ... but tended to only use one percent of the processor ... or less).
https://www.garlic.com/~lynn/subtopic.html#disk

so one monday i came in ... and i had an irate call from bldg. 15 wanting to know what i had done to their system over the weekend; system thruput and response had gone all to pieces and was horrible. I said that I hadn't changed their system at all over the weekend and asked them what changes they had made. Of course, they said they hadn't made any changes. Well, it turned out that they had replaced their 3830 controller with a new development 3880 (disk) controller.

Having isolated what the change was ... it was time to do a lot of hardware analysis. The 3830 disk controller had a fast horizontal microcode engine that handled commands and disk transfers. As part of some policy(?) ... the 3880 had a relatively slow speed (JIB-prime) vertical microcode processor (for doing command decode and execution) with some dedicated hardware for actual data transfer, handling up to 3mbyte/sec (which would soon be seen with the new 3380 disks). The problem was that elapsed times for typical disk operations were taking a couple milliseconds longer with the 3880 controller compared to the same exact operation using the 3830 controller (slow command decode and processing). To compensate, they re-orged how some stuff was done inside the 3880 and would signal operation complete to the processor as soon as the data finished transfer ... with all sorts of internal disk controller task completion proceeding in parallel/asynchronously after signaling completion (as opposed to waiting until the 3880 had actually completed everything before signaling completion).

They claimed that in the product performance acceptance tests ... this change allowed the 3880 to meet specifications. However, it turned out that the performance acceptance test was done with a two disk drive VS1 operating system ... running single thread operation. In this scenario ... the 3880 signaled completion and the VS1 system went on its way getting other stuff done (overlapped with the 3880 actually finishing the operation). The VS1 operating system would then get around to eventually putting together the next operation ... and by that time the 3880 would be done with its internal business.

What had happened (that Monday morning) in real live operation with 16 3330 drives and lots of concurrent activity was that there tended to frequently be queued operations waiting for (disks on) the controller. The 3880 would signal operation complete ... and immediately be hit with start of a new (queued) operation. Since the 3880 was busy ... it would signal controller busy (SM+BUSY) back to the processor ... and the system would have to requeue the operation and go off and do something else. Since the controller had signaled SM+BUSY ... it was now forced to schedule a controller free interrupt (CUE) to tell the processor that it was ready to do the next operation. The VS1 system performance test never saw the increase in latency because it had other stuff to do getting ready for the next operation ... and it never experienced the significant increase in pathlength caused by the requeuing because of the SM+BUSY and the subsequent additional interrupt (the CUE).
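
a toy model of why the acceptance test didn't see this (all numbers and names below are made-up illustrations; the real behavior was in controller microcode and channel programs, not host code like this): if the host has enough of its own work between operations, the controller's post-completion cleanup is hidden; if a queued request is redriven immediately, the host instead eats the residual cleanup time plus the requeue and the extra (CUE) interrupt:

def per_op_cost(host_gap_ms, cleanup_ms, base_ms, requeue_ms, interrupt_ms):
    # host-visible cost of one disk operation under the "signal complete
    # early" scheme: if the next request arrives after the internal cleanup
    # has finished, the cleanup is fully overlapped; otherwise the request
    # hits SM+BUSY, gets requeued, and is redriven off the later CUE interrupt
    if host_gap_ms >= cleanup_ms:
        return base_ms
    return base_ms + (cleanup_ms - host_gap_ms) + requeue_ms + interrupt_ms

# single-thread VS1-style test: plenty of host work between operations
print(per_op_cost(host_gap_ms=5.0, cleanup_ms=2.0, base_ms=20.0,
                  requeue_ms=0.3, interrupt_ms=0.3))   # 20.0 ... looks fine
# loaded system with queued requests: next operation issued immediately
print(per_op_cost(host_gap_ms=0.0, cleanup_ms=2.0, base_ms=20.0,
                  requeue_ms=0.3, interrupt_ms=0.3))   # 22.6 ... the monday morning case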

So it was now back to the drawing board for the 3880 .... to actually try and do something about the extra latency (rather than trying to hide it and hope it was overlapped with something else the processor needed to do). Fortunately this was still six months before first customer ship for the 3880s ... so they had a little breathing room in which to do something.

So the 3090 group in POK had been doing various kinds of capacity planning ... some recent posts on this subject
https://www.garlic.com/~lynn/submain.html#bench

and balanced configuration thruput design stuff. The problem was that even after fixing everything that could possibly be fixed in the 3880 ... there was going to be significantly more channel busy per operation (compared to the same operations with the 3830 controller).

Now, this is where my memory is a little vague. What i seem to recollect was that the typical 3090 configuration had been assumed to be six TCMs and 96 channels. All this stuff with 3880 channel busy meant that the (customer's) disk farm had to be spread across a larger number of channels in order to achieve the same/expected thruput; with typical configurations now needing an extra 32 channels (to compensate for the increased 3880 disk controller channel busy time), which in turn required adding an extra (7th) TCM. There were jokes about taking the cost of the extra TCM out of the disk division's revenue.

shorter version of the same tale:
https://www.garlic.com/~lynn/2002b.html#3 Microcode? (& index searching)

various historical dates:
https://web.archive.org/web/20050207232931/http://www.isham-research.com/chrono.html

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

RISCs too close to hardware?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 19 Oct 2004 14:35:48 -0600
here is a cics history site (possibly more than you ever want to know)
http://objectz.com/columnists/tscott/part1.html

it mentions IMS as a partial competitor to CICS ... and mentions some amount about IMS & DL/1

from above ...
IBM's IMS was a partial competitor to CICS. It consists of two products IMS/DB, a hierarchical database manager, and IBM/TS, a transaction processing system (formerly referred to as a data communications system, IMS/DC). The application programming interface for IMS was called DL/I. With IMS being developed in San Jose, California, it is easy to see how there could be more of a competitive attitude between there and Hursley than a cooperative one. Legend within IBM relates that when the CICS team approached the IMS team to work on an interface between the two products, the IMS team wanted no part of it, saying they already had a transaction manager in IMS/DC. The CICS team went ahead alone to build the interface. The first version, made available in 1974 with the first virtual storage version of CICS (Aylmer-Hall, 1999), worked by making IMS/DB think it was being invoked from a batch program. First-hand experience of this author revealed some of the problems of an interface designed without cooperation from both sides. When a problem in the interface caused the CICS system to ABEND (Abnormally End), the application team might call for IBM help from a CICS specialist or from an IMS specialist. The CICS specialist would trace the problem to the IMS interface and stop, saying he knew nothing of IMS. If an IMS specialist was called, he would look at the system dump and say that he could not find the IMS control blocks that he needed to get started, because it was not really a batch application. Getting both specialists at once to solve one problem proved impossible, so the team developing this CICS-IMS application, especially this author, learned a lot about reading CICS system dumps.

... snip ...

when my wife did her time in POK responsible for loosely-coupled (aka cluster) architecture ... she developed Peer-Coupled Shared Data .. and spent some time working with IMS getting it adopted for IMS hot standby ... misc
https://www.garlic.com/~lynn/submain.html#shareddata

the claim could be made that it was also the foundation for (the much later) parallel sysplex ... parallel sysplex home page:
http://www-1.ibm.com/servers/eserver/zseries/pso/

of course we did ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp
and related
https://www.garlic.com/~lynn/submain.html#available

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

RISCs too close to hardware?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 20 Oct 2004 08:43:33 -0600
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
IBM has always been very strong on internal competition :-)

there were numerous attempts at trying to kill off vm/cms ... a past post that mentions a pco (vs/pc) gimmick
https://www.garlic.com/~lynn/2001f.html#49

it mentions a couple people using a model to calculate projected pco/vspc performance (since it wasn't running yet) and nearly the whole cms group involved in running mandated/required comparable real benchmarks (upwards of six months time). when they finally got real pco running ... it turned out that pco was something like ten times slower than the simulated numbers were claiming.

also mentioned is the CERN tso/cms comparison tests ... and the CERN report presented to share. internal corporate copies of the report were quickly stamped confidential - restricted ... available on a strictly need-to-know basis only (for instance, you probably didn't want the people marketing tso to know about it).

one could possibly tie the evolution of heavy CMS use at CERN to the subsequent invention of HTML and the web.

random posts on gml/sgml, its invention at the science center in '69; incorporation of gml support in cms document processing, etc
https://www.garlic.com/~lynn/submain.html#sgml

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

RISCs too close to hardware?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 20 Oct 2004 09:16:41 -0600
for a little drift ... there was this joke about working four shifts:

first shift in bldg 28 ... on various stuff like
https://www.garlic.com/~lynn/submain.html#systemr

2nd shift in bldgs 14/15
https://www.garlic.com/~lynn/subtopic.html#disk

3rd shift in bldg 90 doing some stuff for ims group
https://www.garlic.com/~lynn/subnetwork.html#hsdt

and weekends/4th shift up at hone
https://www.garlic.com/~lynn/subtopic.html#hone

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

RISCs too close to hardware?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Newsgroups: alt.folklore.computers
Subject: Re: RISCs too close to hardware?
Date: Thu, 21 Oct 2004 13:05:06 -0700
mwojcik@newsguy.com (Michael Wojcik) wrote in message news:<cl88p20220a@news3.newsguy.com>...
When I was working for IBM TCS, a couple of floors above the Cambridge Scientific Center labs where Lynn worked (this was around the time that he was doing HA/CMP), we used the Andrew stuff extensively. The TCS (Technical Computing Services) group had formerly been part of ACIS, the Academic Computing group, which had been heavily involved in both the Andrew and Project Athena efforts.

ACIS wrote the Cambridge Window Manager for X, for example, which I don't believe I ever saw used outside IBM. It was an unusual window manager - it only supported tiled, rather than overlapping, windows, arranged in columns. Windows were "minimized" by removing everything except the title bar, like projection screens being rolled up.

At any rate, we used AFS for our network filesystem. It offered much better performance and recovery than NFS, and it had other nice features like ACLs and Kerberos integration. (Since Kerberos came out of MIT, I don't know whether CMU put the Kerberos hooks into AFS, or if that was done by MIT or ACIS. I know I added Kerberos hooks to some stuff while I was there - one of the X login clients, for example.) We also used many of the Andrew widget-based X clients, such as a fancy system performance monitor (its name escapes me at the moment) which could be configured with all sorts of display widgets (counters, graphs, needle gauges) for various system values (load, disk activity, etc).


before they closed it, the science center moved from 545 tech sq
https://www.garlic.com/~lynn/subtopic.html#545tech

down to 101(?) main street ... right outside the front door where the T goes back underground coming off the bridge.

when we started ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

we subcontracted a bunch of the implementation out to a number of former science center &/or athena people. they grew rapidly and after the science center closed ... they moved into the vacated quarters

for some drift .... the first time i visited the science center (back when i was an undergraduate), i stayed at the sonesta (next to lotus ... but back then it was called chart house(?) ... and there were no other bldgs even close .... lechmere was a big warehouse looking facility in the middle of a big paved lot) ... anyway ... on one of the business trips associated with HA/CMP ... i was staying at the sonesta and walking down to 101 main street in the morning ... and as I was walking by the TMC bldg ... there was somebody leaning a ladder against the side of the bldg ... and prying the letters off the bldg ... i stopped and watched him pry all the letters off the bldg.

RISCs too close to hardware?

From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Subject: Re: RISCs too close to hardware?
Date: Thu, 21 Oct 2004 13:30:34 -0700
mwojcik@newsguy.com (Michael Wojcik) wrote in message news:<cl88p20220a@news3.newsguy.com>...
When I was working for IBM TCS, a couple of floors above the Cambridge Scientific Center labs where Lynn worked (this was around the time that he was doing HA/CMP), we used the Andrew stuff extensively. The TCS (Technical Computing Services) group had formerly been part of ACIS, the Academic Computing group, which had been heavily involved in both the Andrew and Project Athena efforts.

... oh yes ... when ACIS was formed they were initially allocated something like $200m-$300m to donate for funding university projects .... it's a hard job ... but somebody has to do it. mit (project athena) and cmu got a big chunk of it ... but there were a number of other universities that got a lot also.

First single chip 32-bit microprocessor

From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Subject: Re: First single chip 32-bit microprocessor
Date: Fri, 22 Oct 2004 08:36:45 -0700
"John" wrote in message news:<cl6s72$n2b$1@hercules.btinternet.com>...
Also, in such discussions, what counts as 'first'? Looking at the transistor as an example, first meant a lab demonstration and some hand-written scribbles in a notebook. So I suppose the corollary for VLSI would be having one chip from a wafer that at some freezing cold temperature passed a few tests before dying. Another definition would be the first on sale (and available).

blue iliad in the early 80s was going to be the first 32bit 801 ... it got thru the first couple of sample runs. some people that knew about it and/or even worked on it went to other companies ... possibly amd 29k, hp snake, mips; some number of others.

Shipwrecks

From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Subject: Re: Shipwrecks
Date: Fri, 22 Oct 2004 10:19:02 -0700
Brian Inglis wrote in message news:<v3qdn09f3t4d2pn2g40dmvoeua3u3a1lcu@4ax.com>...
On their batch systems. Some of their other OS groups got it: VM gave very good, consistently fast response up until something got overloaded; it may have been mainly thanks to Lynn's work on fast paths for common situations in CP, and the focus on shortening code path lengths, which seemed to have an effect on other groups. I'm not sure if there were other influences on him or the org which produced this focus.

And VM gave every user an almost completely isolated virtual system configuration, not just a false impression of the whole system, that belonged to them, which they could do with as they wished. ISTR [hack]{at}[ibm](dot)[com] saying somewhere recently that he'd been and was still doing his work under his own custom OS.


on cp/67, i cut a lot of pathlengths significantly ... some by a factor of 100. i also did the initial version of a resource tracking policy scheduler ... with the default resource tracking policy being fair share, redid the page replacement algorithm ... and did ordered seek queueing for the 2314 and chained requests for the 2301.

the pathlength stuff significantly increased the cpu capacity of the system, the resource tracking policy scheduling significantly improved & made more consistent the interactive response, the ordered seek queueing and the chain request stuff for paging drum (2301) increased the paging capacity of the system, and the page replacement algorithm both significantly reduced pathlength and also improved the efficiency of real storage usage.
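
a minimal sketch of the ordered seek queueing idea (an elevator-style scan over pending cylinder requests; this is just the general technique in python with an assumed request representation, not the cp/67 code):

def ordered_seek(pending, arm_pos, moving_up):
    # pick the next request in the current direction of arm travel,
    # reversing direction when nothing is left that way
    ahead = [c for c in pending if (c >= arm_pos) == moving_up]
    if not ahead:
        moving_up = not moving_up
        ahead = pending
    nxt = min(ahead, key=lambda c: abs(c - arm_pos))
    return nxt, moving_up

pending = [120, 30, 75, 180, 10]    # queued requests by cylinder number
pos, up = 60, True
while pending:
    pos, up = ordered_seek(pending, pos, up)
    pending.remove(pos)
    print("service cylinder", pos)  # 75, 120, 180, then 30, 10 on the way back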

the situation as you moved into the late 70s ... was that it was easier & easier to saturate the i/o infrastructure .... creating queuing delays. this is the recent reference to drastically falling relative disk performance
https://www.garlic.com/~lynn/2004n.html#15 Re: 360 longevity, was RISCs too close to hardware

referencing
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door

the claim is that if the i/o infrastructure had kept pace with the rest of the system ... then the cp/67 workload of 80 users would have scaled-up by a factor of 50 times to something like 4000 users (on 3081k) rather than 300-400 users.

the problem was that various I/O loading characteristics could start showing non-linear increases and saturation in i/o queueing delays ... leading to appearance of significant response time increases.

in the late 70s, i did quite a bit of work on tracking i/o request elapsed times, service times, and queueing delay times, incorporating them into fair share resource calculations. i also did some tracking of the active page sets of users dropping from queue ... and then would do a batch request in an attempt to pull all the pages back into memory with a single i/o ... rather than individual page faults.

also, in the rewrite of the i/o subsystem to make it bullet proof for the testcell use in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

I rewrote the multi-channel pathing support to help load balancing ... although this was a mixed blessing. again, the recent post about the relative system thruput decline of disks goes on to discuss the overhead problems with the 3880 disk controller (and the 3090 having to add an extra TCM ... for extra channels to distribute the increased 3880 channel busy across more channels). in any case, the 3880 had a very, very significant overhead increase if two successive I/O requests came in on different channel interfaces (compared to having the same two successive i/o requests come in on the same channel interface). as a result, there were several points in the operational envelope where channel load balancing (spreading i/o requests across different channel interfaces) significantly degraded total system thruput (compared to no channel load balancing ... and trying to maintain controller channel affinity).

random other past posts on the 67/3081k system issues:
https://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
https://www.garlic.com/~lynn/98.html#46 The god old days(???)
https://www.garlic.com/~lynn/99.html#4 IBM S/360
https://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
https://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002b.html#11 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#9 What are some impressive page rates?
https://www.garlic.com/~lynn/2002i.html#16 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2003m.html#42 S/360 undocumented instructions?

A later implementation along these lines was the "big pages" done for both mvs/xa and vm/xa ... forcing block paging of full 3380 disk tracks in a single operation.

lots of past big page posts:
https://www.garlic.com/~lynn/2001k.html#60 Defrag in linux? - Newbie question
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002c.html#29 Page size (was: VAX, M68K complex instructions)
https://www.garlic.com/~lynn/2002c.html#48 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#11 What are some impressive page rates?
https://www.garlic.com/~lynn/2002f.html#20 Blade architectures
https://www.garlic.com/~lynn/2002l.html#36 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002m.html#4 Handling variable page sizes?
https://www.garlic.com/~lynn/2003b.html#69 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003d.html#21 PDP10 and RISC
https://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#9 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#16 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#48 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#12 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003o.html#61 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2003o.html#62 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2004.html#13 Holee shit! 30 years ago!

Shipwrecks

From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Subject: Re: Shipwrecks
Date: Fri, 22 Oct 2004 11:00:38 -0700
i had done a lot of the cp/67 work as an undergraduate at the university. i had also done a lot of (batch) work on os/360. i gave a talk on both the cp/67 work and the os/360 work at the fall '68 share meeting.

os/360 (because of perceived severe real-storage constraints) had a design point of breaking executables up into lots of small programs that were serially loaded from disk. the normal installation procedure could scatter these programs all around the disk surface; a carefully regulated/controlled installation procedure could significantly reduce the avg. arm motion and elapsed time (for some university workloads this could increase thruput by a factor of three).
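
a crude illustration of the arm-motion effect (all the numbers are made-up assumptions, just to show the shape of the argument): if the serially loaded program modules are scattered across, say, 200 cylinders, the average seek between consecutive loads is much longer than if the same modules are packed into a narrow band:

import random
random.seed(1)

def avg_seek(cylinders, loads=10000):
    # average arm movement (in cylinders) between consecutive random loads
    total, pos = 0, random.choice(cylinders)
    for _ in range(loads):
        nxt = random.choice(cylinders)
        total += abs(nxt - pos)
        pos = nxt
    return total / loads

scattered = list(range(0, 200))     # modules spread over 200 cylinders
clustered = list(range(90, 110))    # same modules packed into 20 cylinders
print("scattered avg seek: %5.1f cylinders" % avg_seek(scattered))
print("clustered avg seek: %5.1f cylinders" % avg_seek(clustered))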

random refs:
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
https://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14
https://www.garlic.com/~lynn/97.html#22 Pre S/360 IBM Operating Systems?
https://www.garlic.com/~lynn/97.html#28 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/98.html#21 Reviving the OS/360 thread (Questions about OS/360)
https://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/99.html#174 S/360 history
https://www.garlic.com/~lynn/2000c.html#10 IBM 1460
https://www.garlic.com/~lynn/2000d.html#50 Navy orders supercomputer
https://www.garlic.com/~lynn/2001.html#26 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001f.html#2 Mysterious Prefixes
https://www.garlic.com/~lynn/2001f.html#26 Price of core memory
https://www.garlic.com/~lynn/2001g.html#22 Golden Era of Compilers
https://www.garlic.com/~lynn/2001g.html#29 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001h.html#12 checking some myths.
https://www.garlic.com/~lynn/2001h.html#60 Whom Do Programmers Admire Now???
https://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
https://www.garlic.com/~lynn/2001k.html#37 Is anybody out there still writting BAL 370.
https://www.garlic.com/~lynn/2001l.html#5 mainframe question
https://www.garlic.com/~lynn/2001l.html#36 History
https://www.garlic.com/~lynn/2001l.html#39 is this correct ? OS/360 became MVS and MVS >> OS/390
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002b.html#24 Infiniband's impact was Re: Intel's 64-bit strategy
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002e.html#62 Computers in Science Fiction
https://www.garlic.com/~lynn/2002f.html#53 WATFOR's Silver Anniversary
https://www.garlic.com/~lynn/2002i.html#42 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002l.html#29 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002m.html#3 The problem with installable operating systems
https://www.garlic.com/~lynn/2002n.html#29 why does wait state exist?
https://www.garlic.com/~lynn/2002p.html#56 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2002p.html#62 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2002q.html#29 Collating on the S/360-2540 card reader?
https://www.garlic.com/~lynn/2002q.html#32 Collating on the S/360-2540 card reader?
https://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
https://www.garlic.com/~lynn/2003c.html#51 HASP assembly: What the heck is an MVT ABEND 422?
https://www.garlic.com/~lynn/2003d.html#72 cp/67 35th anniversary
https://www.garlic.com/~lynn/2003f.html#30 Alpha performance, why?
https://www.garlic.com/~lynn/2004.html#48 AMD/Linux vs Intel/Microsoft
https://www.garlic.com/~lynn/2004b.html#17 Seriously long term storage
https://www.garlic.com/~lynn/2004b.html#31 determining memory size
https://www.garlic.com/~lynn/2004b.html#47 new to mainframe asm
https://www.garlic.com/~lynn/2004b.html#53 origin of the UNIX dd command
https://www.garlic.com/~lynn/2004c.html#59 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004d.html#10 IBM 360 memory
https://www.garlic.com/~lynn/2004f.html#6 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#39 spool
https://www.garlic.com/~lynn/2004h.html#43 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2004k.html#41 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004l.html#29 FW: Looking for Disk Calc program/Exec

Shipwrecks

From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Subject: Re: Shipwrecks
Date: Fri, 22 Oct 2004 11:25:04 -0700
jmfbahciv@aol.com wrote in message news:<cYudnbLcZZH-K-rcRVn-vw@rcn.net>...
Of course. Back then IBM was very, very good at batch processing huge data without interruption. <--interruption is a key word. Timesharing is all about interruptions.

either really good pre-emptive scheduling ... or a system configured with excessive resources. in the 60s ... "timesharing", "interactive" systems were typically configured to run at something like 50% cpu utilization (or even less).

one of the efforts with cp/67 was to improve both the i/o management and the pre-emptive scheduling ... so that the processor could be run with a mixed-mode workload (both batch background and interactive tasks) at 100% CPU utilization and still get subsecond response.
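a minimal sketch of the mixed-mode idea (hypothetical, not the cp/67 dispatcher): ready interactive work is always dispatched ahead of background batch, so the batch work soaks up whatever cpu the interactive load leaves over:

/* minimal sketch of the mixed-workload idea (not the cp/67 scheduler):
 * ready interactive tasks always run ahead of background batch, so batch
 * work soaks up whatever cpu the interactive load leaves idle. */
#include <stdio.h>

enum klass { INTERACTIVE, BATCH };

struct task { const char *name; enum klass k; int ready; };

static struct task *pick_next(struct task *t, int n)
{
    for (int i = 0; i < n; i++)             /* interactive first ...     */
        if (t[i].ready && t[i].k == INTERACTIVE) return &t[i];
    for (int i = 0; i < n; i++)             /* ... then background batch */
        if (t[i].ready && t[i].k == BATCH) return &t[i];
    return NULL;                            /* nothing ready: cpu idle   */
}

int main(void)
{
    struct task tasks[] = {
        { "payroll-batch", BATCH, 1 },
        { "edit-session",  INTERACTIVE, 1 },
    };
    struct task *next = pick_next(tasks, 2);
    printf("dispatch: %s\n", next ? next->name : "(idle)");
    return 0;
}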

Shipwrecks

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Subject: Re: Shipwrecks
Date: Fri, 22 Oct 2004 11:48:11 -0700
jmfbahciv@aol.com wrote in message news:<cYudnbLcZZH-K-rcRVn-vw@rcn.net>...
Our VM was to make the addressing range larger to the usermode program. It had nothing to do with virtual machines. VM back then meant virtual memory. I don't think I ever heard the term virtual machine until the KL (1975) days.

the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

had first built their own virtual memory hardware on a 360/40 and implemented cp/40 ... which was both a virtual memory system and a virtual machine system ... somewhat waiting for the official 360 with virtual memory hardware to show up. 360 models 60, 62, and 70 had been announced. these were all machines with 1mic memory. before these machines shipped, the memory subsystem was upgraded to 750nsecs and the machines were re-announced as the 65, 67, and 75. The 360/67 (at least the single processor version) was effectively a 360/65 with virtual memory hardware added.

the official operating system for the 360/67 was TSS/360 (time-sharing system); as recently mentioned, tss/360 had significant birthing issues ... and a number of customers (primarily universities) that had installed 360/67s (in anticipation of tss/360) got tired of waiting. UofMich wrote their own virtual memory, time-sharing system called MTS (but using many of the applications from os/360). Cambridge ported the cp/40 virtual machine (& virtual memory) system to the 360/67, renaming it cp/67. One of the claims for doing cp/40 and cp/67 was that prior virtual memory efforts had possibly gotten things wrong.

the first installation of cp/67 (outside of cambridge) was at Lincoln Labs sometime in 1967. The next installation of cp/67 was at the university (that I was at) the last week in january, 1968.

CP/67 utilized the virtual memory hardware on 360/67 ... and I did a lot of work on the virtual memory stuff
https://www.garlic.com/~lynn/subtopic.html#wsclock

and also supported virtual machines ... allowing os/360 to be run in a virtual machine.
https://www.garlic.com/~lynn/2004e.html#16 Paging query - progress

PCIe as a chip-to-chip interconnect

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch,alt.folklore.computers
Subject: Re: PCIe as a chip-to-chip interconnect
Date: Fri, 22 Oct 2004 15:21:46 -0700
"Stephen Fuld" wrote in message news:<mD1ed.18910$OD2.3189@bgtnsc05-news.ops.worldnet.att.net>...
I never programmed under MVS at that level, but if I had an assembler program that ran under OS, I could run that same object module under MVT. What is MVS doing "behind the scenes" to change the pointers I have set up to do the I/O under OS? What exactly are you saying MVS does? i.e. does it allocate the buffers? If so, then they must be pinned when you do the I/O, or you risk a page fault in the middle of your I/O transfer.

in the os/360 paradigm ... the application code ... typically actually some (file access) library routine ... running in the application region, created an (I/O) channel program (sequence of CCWs). Then it would execute a supervisor/kernel (excp) call. The kernel would do some preliminary work ... like if it was a disk request ... prefix the channel program with an arm positioning operation ... and then directly invoke the application region's I/O channel program.

In the initial move of MVT to virtual memory ... it was called VS2/SVS ... single virtual storage ... it was as if MVT had 16mbytes of real storage ... with some underlying stub-code that mapped the MVT 16mbytes (single virtual address space) to typically much smaller real storage.

The initial prototype for VS2/SVS involved taking MVT, crafting the stub virtual address space code on the side and borrowing "CCWTRANS" from CP/67. The issue is that channel program CCWs all use real addresses for transfers ... while the application code generating the CCW sequence still believes it is generating real addresses in its channel program CCWs ... when they are all actually virtual addresses. Now when the application program issued the (EXCP) kernel call ... instead of directly pointing at the application channel program code .... the code called the (CP/67) CCWTRANS routine. This routine created a "shadow" copy of the user's channel program CCWs .... checked each of the virtual addresses ... as appropriate made sure the associated virtual page(s) were resident & pinned in real storage and translated the virtual address (from the application channel program CCWs) to the appropriate real address (in the "shadow" channel program CCWs). The actual I/O that was initiated was the "translated" shadow channel program CCWs ... no longer the original application channel program CCWs (a major issue was that real I/O is done with real addresses, and applications only had virtual addresses to deal with).
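a toy sketch of the shadow-CCW translation step just described (the structures and the pin_and_translate helper are hypothetical stand-ins, not actual EXCP/CCWTRANS code):

/* toy sketch of the shadow-CCW step described above -- not actual EXCP or
 * CCWTRANS code; the structures and pin_and_translate() are hypothetical
 * stand-ins.  for each CCW in the application's (virtual-address) channel
 * program, pin the page behind the data address and build a shadow CCW
 * carrying the corresponding real address; the shadow copy is what gets
 * started. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

struct ccw {                 /* simplified channel command word        */
    uint8_t  op;             /* read, write, seek, tic, ...            */
    uint32_t data_addr;      /* data address (virtual in the app copy) */
    uint16_t count;          /* byte count                             */
};

static uint32_t pin_and_translate(uint32_t vaddr)
{
    /* stand-in for the paging supervisor: pin the page containing vaddr
     * in real storage and return the real address.  here we just pretend
     * virtual == real so the sketch is self-contained. */
    return vaddr;
}

static void build_shadow(const struct ccw *app, struct ccw *shadow, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        shadow[i] = app[i];                               /* copy op/count */
        shadow[i].data_addr = pin_and_translate(app[i].data_addr);
        /* a transfer crossing a page boundary would really have to be
         * split into data-chained CCWs, one per page -- omitted here */
    }
}

int main(void)
{
    struct ccw app[1] = { { 0x06 /* read */, 0x00123000, 4096 } };
    struct ccw shadow[1];
    build_shadow(app, shadow, 1);
    printf("shadow CCW: op=%02x real addr=%08x count=%u\n",
           shadow[0].op, shadow[0].data_addr, shadow[0].count);
    return 0;
}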

This VS2/SVS system ... looked like a single virtual address space ... with the old MVT kernel and all applications/tasks occupying that single virtual address space. The transition from SVS (single virtual storage) to MVS (multiple virtual storage) ... was effectively giving each application its own virtual address space. This structure actually had the (same) MVS kernel (image) occupying 8mbytes of every application virtual address space ... with 7mbytes (of the 16mbyte address space) available for an application.

There is one mbyte missing. The problem was that in MVT and SVS ... everything occupied the same address space ... and there was heavy reliance on a pointer passing paradigm. This included numerous "sub-system" functions that were used by applications ... but were not actually part of the kernel. Come MVS ... the application would be making a call passing a pointer to some application data .... which would eventually pass thru the kernel and then into a completely different address space (where the subsystem function was working). The problem now was that the pointer ... was to an area in a totally different address space. A workaround was created called the (1mbyte) common segment ... that appears in all virtual address spaces ... where data could be stuffed away ... and pointers passed ... and they would be usable ... regardless of which virtual address space was executing.

The next problem was that as MVS systems grew and got more complex ... there were more and more subsystems that required common segment space. Very quickly, some installations found themselves with 4mbyte common (segment) areas .... leaving only a maximum of 4mbytes (out of 16mbytes) in each virtual address space for application programs.

Note that the requirement continued in MVS: the virtual address space channel program CCWs still had to be copied to shadow CCWs and the virtual addresses translated to real addresses (and the associated virtual pages pinned) before starting the I/O operation.

There were some subsystems that were given V=R regions .... where memory regions were mapped to real storage and the application subsystem code generated channel program CCWs that had real addresses pointing to areas that had fixed real storage allocation. These channel program CCWs could be treated specially and not have to be translated ... but executed directly (like things were back on real memory MVT systems).

Note that dual-address space was introduced with the 3033 .... because the problem with the common (segment) area was becoming so severe ... aka some installations might soon not have any virtual address space left to actually run applications. With dual-address space .... a subsystem would be entered with a secondary address space control register ... set to the original application program's address space. It then had special instructions that would use the passed address pointer to fetch/store data from the secondary (application) virtual address space ... rather than the primary (subsystem) virtual address space.

Then came generalized access registers and program calls. The original os/360 characteristic had lots of calls to various library functions just by loading a register pointer and doing a "branch and link" to the routine. Later releases of MVS started moving various of this stuff into its own address space. You could do a kernel call to effect an address space switch .... to get to the target library code ... but the kernel call represented a very large pathlength increase (compared to a BALR instruction). The solution was access registers and the program call instruction. This is basically a (protected) table of callable routines set up for an application. The application can specify an entry in the table and do a program call instruction. The hardware uses information in the protected program call table to swizzle the virtual address space control registers and pass control to the called routine (w/o the overhead of a kernel call).
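a toy model of the program call table idea (the table layout and names are made up for illustration, not the real 370 formats):

/* toy model of the program-call idea described above -- a protected table
 * of callable entries, indexed by the caller, that the hardware (here just
 * a plain function) uses to switch address-space context and enter the
 * routine without a full kernel call.  table layout and names are
 * hypothetical. */
#include <stdio.h>

struct pc_entry {
    const char *routine;     /* target routine                     */
    int target_asid;         /* address space the routine lives in */
};

/* set up by privileged code; applications can only index it */
static const struct pc_entry pc_table[] = {
    { "library_sort",  7 },
    { "vsam_service", 12 },
};

static void program_call(unsigned index, int caller_asid)
{
    if (index >= sizeof pc_table / sizeof pc_table[0]) return;  /* fault */
    const struct pc_entry *e = &pc_table[index];
    printf("switch primary ASID %d -> %d, secondary stays %d, enter %s\n",
           caller_asid, e->target_asid, caller_asid, e->routine);
}

int main(void)
{
    program_call(1, 3);      /* application in ASID 3 calls table entry 1 */
    return 0;
}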

random past refs to dual-address space, access registers, program call, etc
https://www.garlic.com/~lynn/98.html#36 What is MVS/ESA?
https://www.garlic.com/~lynn/2000c.html#84 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000d.html#28 RS/6000 vs. System/390 architecture?
https://www.garlic.com/~lynn/2000e.html#58 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2001d.html#28 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001d.html#30 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001h.html#73 Most complex instructions
https://www.garlic.com/~lynn/2001i.html#13 GETMAIN R/RU (was: An IEABRC Adventure)
https://www.garlic.com/~lynn/2001k.html#16 Minimalist design (was Re: Parity - why even or odd)
https://www.garlic.com/~lynn/2002d.html#51 Hardest Mistake in Comp Arch to Fix
https://www.garlic.com/~lynn/2002g.html#5 Black magic in POWER5
https://www.garlic.com/~lynn/2002g.html#17 Black magic in POWER5
https://www.garlic.com/~lynn/2002g.html#18 Black magic in POWER5
https://www.garlic.com/~lynn/2002h.html#21 PowerPC Mainframe
https://www.garlic.com/~lynn/2002l.html#51 Handling variable page sizes?
https://www.garlic.com/~lynn/2002l.html#57 Handling variable page sizes?
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002n.html#74 Everything you wanted to know about z900 from IBM
https://www.garlic.com/~lynn/2002p.html#43 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2002q.html#1 Linux paging
https://www.garlic.com/~lynn/2003c.html#13 Unused address bits
https://www.garlic.com/~lynn/2003d.html#53 Reviving Multics
https://www.garlic.com/~lynn/2003d.html#69 unix
https://www.garlic.com/~lynn/2003e.html#0 Resolved: There Are No Programs With >32 Bits of Text
https://www.garlic.com/~lynn/2003e.html#12 Resolved: There Are No Programs With >32 Bits of Text
https://www.garlic.com/~lynn/2003g.html#13 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003m.html#29 SR 15,15
https://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
https://www.garlic.com/~lynn/2004e.html#41 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#27 [Meta] Marketplace argument
https://www.garlic.com/~lynn/2004f.html#53 Infiniband - practicalities for small clusters

random past refs to VS2/SVS and/or AOS (original SVS prototype using CP/67 CCWTRANS):
https://www.garlic.com/~lynn/93.html#18 location 50
https://www.garlic.com/~lynn/94.html#4 Schedulers
https://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
https://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14
https://www.garlic.com/~lynn/94.html#35 mainframe CKD disks & PDS files (looong... warning)
https://www.garlic.com/~lynn/94.html#49 Rethinking Virtual Memory
https://www.garlic.com/~lynn/95.html#2 Why is there only VM/370?
https://www.garlic.com/~lynn/97.html#23 Kernel swapping itself out ?
https://www.garlic.com/~lynn/97.html#26 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/98.html#11 S/360 operating systems geneaology
https://www.garlic.com/~lynn/98.html#12 S/360 operating systems geneaology
https://www.garlic.com/~lynn/98.html#28 Drive letters
https://www.garlic.com/~lynn/99.html#7 IBM S/360
https://www.garlic.com/~lynn/99.html#204 Core (word usage) was anti-equipment etc
https://www.garlic.com/~lynn/99.html#209 Core (word usage) was anti-equipment etc
https://www.garlic.com/~lynn/2000.html#68 Mainframe operating systems
https://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000c.html#34 What level of computer is needed for a computer to Love?
https://www.garlic.com/~lynn/2000c.html#35 What level of computer is needed for a computer to Love?
https://www.garlic.com/~lynn/2000e.html#37 FW: NEW IBM MAINFRAMES / OS / ETC.(HOT OFF THE PRESS)
https://www.garlic.com/~lynn/2000f.html#35 Why IBM use 31 bit addressing not 32 bit?
https://www.garlic.com/~lynn/2000g.html#27 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001.html#63 Are the L1 and L2 caches flushed on a page fault ?
https://www.garlic.com/~lynn/2001i.html#37 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#38 IBM OS Timeline?
https://www.garlic.com/~lynn/2001j.html#4 I hate Compaq
https://www.garlic.com/~lynn/2001k.html#37 Is anybody out there still writting BAL 370.
https://www.garlic.com/~lynn/2001l.html#36 History
https://www.garlic.com/~lynn/2001l.html#38 is this correct ? OS/360 became MVS and MVS >> OS/390
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2002c.html#52 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002j.html#70 hone acronym (cross post)
https://www.garlic.com/~lynn/2002l.html#51 Handling variable page sizes?
https://www.garlic.com/~lynn/2002l.html#65 The problem with installable operating systems
https://www.garlic.com/~lynn/2002l.html#67 The problem with installable operating systems
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002n.html#62 PLX
https://www.garlic.com/~lynn/2003.html#51 Top Gun
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#22 360/370 disk drives
https://www.garlic.com/~lynn/2003d.html#69 unix
https://www.garlic.com/~lynn/2003f.html#13 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#13 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003g.html#14 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003h.html#35 UNIX on LINUX on VM/ESA or z/VM
https://www.garlic.com/~lynn/2003k.html#27 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2004.html#13 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004.html#18 virtual-machine theory
https://www.garlic.com/~lynn/2004b.html#60 Paging
https://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
https://www.garlic.com/~lynn/2004c.html#59 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004d.html#63 System/360 40 years old today
https://www.garlic.com/~lynn/2004e.html#35 The attack of the killer mainframes
https://www.garlic.com/~lynn/2004e.html#40 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#51 before execution does it require whole program 2 b loaded in
https://www.garlic.com/~lynn/2004f.html#60 Infiniband - practicalities for small clusters

Shipwrecks

From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Subject: Re: Shipwrecks
Date: Fri, 22 Oct 2004 18:37:44 -0700
blmblm@myrealbox.com wrote in message news:<2tsui1F20atkeU1@uni-berlin.de
I think someone upthread said "but I still don't get what you mean about the distinction between OS thinking and compiler thinking, especially as it applies to Dijkstra", and your attempt to clarify -- well, I interpreted the "should have signed as /BLAH" comment to mean "still not clear." If you wanted to try one more time to explain what you meant, I know I'd be interested.

My belief (based on mostly-theory knowledge of operating systems, so certainly a "FWIW") is that one of the most intellectually challenging parts of OS-level work is understanding concurrency issues (by which I mean the idea of multiple things happening in-effect-at-the-same-time, implemented using interleaving and context switches and interrupts and, um, "like that"), and Dijkstra's work on semaphores and concurrent algorithms seem to me to qualify him as knowledgeable on this topic, which strikes me as much more "OS thinking" than "compiler thinking".

So if you cared to try one more time to clarify the distinction, and why you think of Dijkstra as a "compiler thinker" ....?


I didn't see it so much as operating system vs. compiler ... it was between state operation and probabilistic operation. a lot of low level system stuff tends to be highly optimized state determination and management. scheduling tends to be much more like operations research and a fortran program. for the fair share scheduler (actually generalized resource policy scheduling ... with the default policy being fair share) ... i would tightly switch back and forth between traditional highly optimized state management ... and some fancy assembler language that was more reminiscent of apl or fortran mathematical programming for operations research. random posts
https://www.garlic.com/~lynn/subtopic.html#fairshare

the page replacement stuff appeared to be tightly bound state management stuff .... but it always involved maintaining some very specific probabilistic objectives ... even tho there was no explicit code associated with the probabilistic objectives ... just the careful ordering of the state management stuff (which makes it somewhat more difficult to understand that something happens w/o there being any explicit code)
https://www.garlic.com/~lynn/subtopic.html#wsclock
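for flavor, a generic clock-style second-chance sweep (a minimal illustration of that kind of implicit ordering, not the actual wsclock code):

/* generic clock-style replacement sweep: pages get a second chance if
 * their reference bit is set; the ordering of the sweep is what implicitly
 * favors recently-used pages, with no explicit "probability" code. */
#include <stdio.h>

#define NPAGES 8

static int ref_bit[NPAGES] = { 1, 0, 1, 1, 0, 0, 1, 0 };
static int hand = 0;

static int select_victim(void)
{
    for (;;) {
        if (ref_bit[hand] == 0) {        /* not referenced since last sweep */
            int victim = hand;
            hand = (hand + 1) % NPAGES;
            return victim;               /* steal this frame */
        }
        ref_bit[hand] = 0;               /* give it a second chance */
        hand = (hand + 1) % NPAGES;
    }
}

int main(void)
{
    printf("victim frame: %d\n", select_victim());
    return 0;
}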

Is Fast Path headed nowhere?

From: lynn@garlic.com
Newsgroups: comp.arch,alt.folklore.computers
Subject: Re: Is Fast Path headed nowhere?
Date: Sat, 23 Oct 2004 07:37:28 -0700
dkanter@gmail.com (David Kanter) wrote in message news:<745d25e.0410221317.2501abe6@posting.google.com>...
A while ago, there was some ballyho about IBM's new Fast Path technology:

depends on which fast path you refer to .... when i rewrote major pieces of cp/67 interrupt handling, dispatching, misc. other stuff ... i referred to pieces as fastpath, fast redispatch, fast svc interrupt, etc ... it was optimal special handling for the most common case.

in the late 70s, IMS did some stuff called IMS fast path.

Is Fast Path headed nowhere?

Refed: **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch,alt.folklore.computers
Subject: Re: Is Fast Path headed nowhere?
Date: Sat, 23 Oct 2004 11:24:33 -0700
dkanter@gmail.com (David Kanter) wrote in message news:<745d25e.0410221317.2501abe6@posting.google.com>...
A while ago, there was some ballyho about IBM's new Fast Path technology:

http://www.webpronews.com/it/networksystems/wpn-21-20030512FutureDirectionsTooMuchofaGoodThing.html
http://news.zdnet.com/2100-9584_22-892836.html
http://www.rootvg.net/column_risc.htm

The general idea is spending die-space to implement some basic TCP/IP functions. I have heard nothing about this for POWER5, at all.

So what's the deal? Did Fast Path get canned? Did it get pushed back to POWER5+ (i.e. 90nm POWER5)?

Does anyone have information on this (Del & John, Ahem)?

David Kanter


the original mainframe tcp/ip got about 43kbytes/sec taxing about 100 percent of a 3090 engine. i added rfc 1044 support
https://www.garlic.com/~lynn/subnetwork.html#1044

and with some tuning at cray research, it was getting 1mbyte/sec thruput (media speed) between a cray and a 4341-clone ... using only a very modest amount of the (4341) processor

a little later ... protocol engines was doing a chip for both tcp offload as well as xtp offload support ... looking to get media thruput with FDDI ... using little of a processor
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

First single chip 32-bit microprocessor

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Subject: Re: First single chip 32-bit microprocessor
Date: Sat, 23 Oct 2004 12:18:49 -0700
a little drift from the comp.arch fast path thread
https://www.garlic.com/~lynn/2004n.html#29 Is Fast Path headed nowhere
with one of the references ....
http://www.rootvg.net/column_risc.htm 27 years of ibm risc

note that they don't mention blue iliad.

also ... aix implementation for the pc/rt was outsourced to the company that had done at&t system/III port for pc/ix ... doing the same port to ride on top of "VRM" running on the pc/rt.

the ACIS organization had done

a) bsd to the bare metal of pc/rt and was distributed as "AOS" ... as an alternative to the "AIX" VRM/ATT unix (aos had actually started out as a port of BSD to 370 ... but was then retargeted to the pc/rt).

b) ucla locus port to 370 and ps/2 ... which was distributed as aix/370 and aix/ps2. the locus implementation allowed something of a unix "SAA" strategy between 370 and ps2

misc past 801/etc posts:
https://www.garlic.com/~lynn/subtopic.html#801

misc. past posts on 3tier architecture & middle layer ... with some refs to SAA
https://www.garlic.com/~lynn/subnetwork.html#3tier

and for a whole lot more drift:
https://www.garlic.com/~lynn/95.html#13
and
https://www.garlic.com/~lynn/aadsm5.htm#asrn2

.... the metaware C story for AOS .... there were two primary people behind vs/pascal; they had originally done the implementation in the los gatos vlsi tools group (which actually used quite a bit of metaware technology). one of the people had since left and was head of software development at MIPS. the other was still working on vs/pascal and I spent some time talking to him about getting a C front-end using the vs/pascal backend (370) code generator. I left for a six week speaking tour in Europe ... and when I got back the person had left and joined metaware. about that time, ACIS formed a group to do the BSD port for 370 (and the former head of apl development in stl went up to palo alto to head up the group). they needed a 370 c compiler ... and i suggested they talk to metaware about it. later when aos was retargeted from 370 to the pc/rt ... they kept the same metaware c compiler that they had been using for the 370 port.

random past metaware refs:
https://www.garlic.com/~lynn/2000e.html#20 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2002n.html#66 Mainframe Spreadsheets - 1980's History
https://www.garlic.com/~lynn/2002q.html#19 Beyond 8+3
https://www.garlic.com/~lynn/2003h.html#52 Question about Unix "heritage"
https://www.garlic.com/~lynn/2004d.html#71 What terminology reflects the "first" computer language ?
https://www.garlic.com/~lynn/2004f.html#42 Infiniband - practicalities for small clusters

PCIe as a chip-to-chip interconnect

From: lynn@garlic.com
Newsgroups: comp.arch,alt.folklore.computers
Subject: Re: PCIe as a chip-to-chip interconnect
Date: Sat, 23 Oct 2004 17:29:01 -0700
"Stephen Fuld" wrote in message news:<6lved.21678$OD2.8192@bgtnsc05-news.ops.worldnet.att.net>...
The key words to our discussion here are that the kernel made sure the pages were "resident and pinned". That was necessary because otherwise many problems could ensue due to the buffers being in pages IN USER SPACE (I did the emphasis as you suggested, Nick, but it still doesn't feel right. I meant to emphasize, not to "shout". :-( ) Thus the buffers were pinned during the I/O and the primary difference when using RDMA is that the buffers have to be pinned even when there is no I/O going on. This is the price one has to pay for not paying the overhead of having the OS know about (so it can pin the buffers) the I/O. Yes, it hinders defragmentation of real memory, and reduces effective memory size, since, on average, more memory is taken up by pinned pages. ISTM that is the tradeoff and one might want to make it or not, but it is not an obviously stupid one to make.

Thanks Lynn.

Nick, is this what you were referring to? Or is there something else here.


so there are (at least) two possible gotchas in the model ... one is that the pages are pinned and the operation is then scheduled ... and then the pages remain pinned until after the whole operation has signaled final completion. another is that on read/input ... the read operation might specify the maximum possible input size (requiring all possible associated virtual pages to be pinned) ... even when the actual input turns out to be much less than the maximum.

an oldtime scenario might involve a single channel program that would read (or write) a full 3380 cylinder (say 15 tracks times about 40kbytes .... on the order of 150 4k pages).
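the arithmetic for that example, as a small C sketch (track/cylinder sizes as assumed above):

/* back-of-the-envelope for the full-cylinder example above: how many 4k
 * pages a single channel program would force to be pinned for the whole
 * duration of the transfer. */
#include <stdio.h>

int main(void)
{
    const int tracks_per_cyl  = 15;
    const int bytes_per_track = 40 * 1024;   /* ~40kbytes, as above */
    const int page_size       = 4 * 1024;

    int cyl_bytes    = tracks_per_cyl * bytes_per_track;
    int pages_pinned = (cyl_bytes + page_size - 1) / page_size;

    printf("full-cylinder transfer: %d bytes, ~%d pinned 4k pages\n",
           cyl_bytes, pages_pinned);
    return 0;
}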

Shipwrecks

From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Subject: Re: Shipwrecks
Date: Sun, 24 Oct 2004 12:12:55 -0700
jmfbahciv@aol.com wrote in message news:<8ZOdnf2zUqL9P-bcRVn-qQ@rcn.net>...
I'm perfectly willing to change my labels. Engineer vs scientist wouldn't have worked in my area because we were all labelled engineers. I get an allergy when I am told that a certain style of problem solution is the only way to implement OSes. _IME_ this style of thinking always produced either a disaster if we let the code get out the door or a complete rewrite if we caught it before the code got out the door.

Perhaps I've been instinctively applying some flavor of Boyd's work in order to get things done efficiently, completely, and as cheaply as possible without compromising the first two.

Now, if the computing biz has evolved such that versatility (a.k.a. rapid adaption to changing circumstances) is out of the hands of the code we used to call the monitor and in the hands of the compiler biz, there is a danger of precluding extensibility. Extensibility was the backbone of all successful computer products in the past. Is this not true anymore?


at some point they sent my wife off to some high level executive school ... one of the things they gave was a myers-briggs personality test ... and how being aware of different types of personalities allows you to adapt how you deal with different people (one of the distinctions was the difference between the engineer type and the scientist type). here is an example URL ... although they seem to periodically change their type characterizations
http://www.personalitypathways.com/type_inventory.html

they also had teams playing cooperation vis-a-vis competition games and she did something along these lines ... and nearly brought some grown men to tears
http://www.wired.com/news/culture/0,1284,65317,00.html

Shipwrecks

From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Subject: Re: Shipwrecks
Date: Sun, 24 Oct 2004 12:16:18 -0700
... oh, and of course, some of my Boyd references:
https://www.garlic.com/~lynn/subboyd.html#boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2

RISCs too close to hardware?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch,alt.folklore.computers
Subject: Re: RISCs too close to hardware?
Date: Sun, 24 Oct 2004 16:59:12 -0700
rsteiner@visi.com (Richard Steiner) wrote in message news:<TPedBpHpvCzS092yn@visi.com>...
You might want to specify "IBM mainframe complex" above -- I've been given the very strong impression based on my days at Northwest Airlines (which is both an IBM and a Unisys 2200-series mainframe shop) that the IBM side of life required a considerably larger staff to maintain, both on the systems side and on the applications development/support side.

there were significant numbers of both vendor people and customer people involved in the care and feeding of the system

there was some presentation someplace ... that initially Amdahl was selling into MTS and VM/370 accounts (many at universities) because of the significantly lower dependency on vendor support people (most of which would presumably evaporate if the customer switched to an Amdahl processor).

i got somewhat roped into this from another standpoint. the first thoroughly blue account (large commercial entity with large football fields worth of installed gear) announced that they were going to be the first (true-blue) installation to install Amdahl. I got asked to go live at the customer location as part of a strategy to try and change the customer's mind.

lots of past Amdahl mentions:
https://www.garlic.com/~lynn/99.html#2 IBM S/360
https://www.garlic.com/~lynn/99.html#188 Merced Processor Support at it again
https://www.garlic.com/~lynn/99.html#190 Merced Processor Support at it again
https://www.garlic.com/~lynn/99.html#191 Merced Processor Support at it again
https://www.garlic.com/~lynn/99.html#209 Core (word usage) was anti-equipment etc
https://www.garlic.com/~lynn/2000c.html#8 IBM Linux
https://www.garlic.com/~lynn/2000c.html#44 WHAT IS A MAINFRAME???
https://www.garlic.com/~lynn/2000c.html#48 WHAT IS A MAINFRAME???
https://www.garlic.com/~lynn/2000d.html#61 "all-out" vs less aggressive designs (was: Re: 36 to 32 bit transition)
https://www.garlic.com/~lynn/2000e.html#20 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#21 Competitors to SABRE? Big Iron
https://www.garlic.com/~lynn/2000e.html#58 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2000f.html#11 Amdahl Exits Mainframe Market
https://www.garlic.com/~lynn/2000f.html#12 Amdahl Exits Mainframe Market
https://www.garlic.com/~lynn/2000f.html#68 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001.html#18 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001.html#63 Are the L1 and L2 caches flushed on a page fault ?
https://www.garlic.com/~lynn/2001b.html#12 Now early Arpanet security
https://www.garlic.com/~lynn/2001b.html#28 So long, comp.arch
https://www.garlic.com/~lynn/2001b.html#56 Why SMP at all anymore?
https://www.garlic.com/~lynn/2001b.html#67 Original S/360 Systems - Models 60,62 70
https://www.garlic.com/~lynn/2001b.html#73 7090 vs. 7094 etc.
https://www.garlic.com/~lynn/2001d.html#35 Imitation...
https://www.garlic.com/~lynn/2001d.html#70 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001e.html#19 SIMTICS
https://www.garlic.com/~lynn/2001g.html#35 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001j.html#23 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001l.html#17 mainframe question
https://www.garlic.com/~lynn/2001l.html#18 mainframe question
https://www.garlic.com/~lynn/2001l.html#47 five-nines
https://www.garlic.com/~lynn/2001n.html#22 Hercules, OCO, and IBM missing a great opportunity
https://www.garlic.com/~lynn/2001n.html#83 CM-5 Thinking Machines, Supercomputers
https://www.garlic.com/~lynn/2001n.html#85 The demise of compaq
https://www.garlic.com/~lynn/2001n.html#90 Buffer overflow
https://www.garlic.com/~lynn/2002.html#24 Buffer overflow
https://www.garlic.com/~lynn/2002.html#44 Calculating a Gigalapse
https://www.garlic.com/~lynn/2002.html#50 Microcode?
https://www.garlic.com/~lynn/2002d.html#3 Chip Emulators - was How does a chip get designed?
https://www.garlic.com/~lynn/2002d.html#4 IBM Mainframe at home
https://www.garlic.com/~lynn/2002d.html#14 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002e.html#48 flags, procedure calls, opinions
https://www.garlic.com/~lynn/2002e.html#51 IBM 360 definition (Systems Journal)
https://www.garlic.com/~lynn/2002e.html#68 Blade architectures
https://www.garlic.com/~lynn/2002g.html#0 Blade architectures
https://www.garlic.com/~lynn/2002h.html#73 Where did text file line ending characters begin?
https://www.garlic.com/~lynn/2002i.html#12 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#17 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2002i.html#19 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002j.html#20 MVS on Power (was Re: McKinley Cometh...)
https://www.garlic.com/~lynn/2002j.html#45 M$ SMP and old time IBM's LCMP
https://www.garlic.com/~lynn/2002j.html#46 M$ SMP and old time IBM's LCMP
https://www.garlic.com/~lynn/2002j.html#75 30th b'day
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002o.html#14 Home mainframes
https://www.garlic.com/~lynn/2002p.html#40 Linux paging
https://www.garlic.com/~lynn/2002p.html#44 Linux paging
https://www.garlic.com/~lynn/2002p.html#48 Linux paging
https://www.garlic.com/~lynn/2002p.html#54 Newbie: Two quesions about mainframes
https://www.garlic.com/~lynn/2002q.html#31 Collating on the S/360-2540 card reader?
https://www.garlic.com/~lynn/2003.html#9 Mainframe System Programmer/Administrator market demand?
https://www.garlic.com/~lynn/2003.html#36 mainframe
https://www.garlic.com/~lynn/2003.html#37 Calculating expected reliability for designed system
https://www.garlic.com/~lynn/2003.html#56 Wild hardware idea
https://www.garlic.com/~lynn/2003.html#65 Amdahl's VM/PE information/documentation sought
https://www.garlic.com/~lynn/2003c.html#76 COMTEN- IBM networking boxes
https://www.garlic.com/~lynn/2003d.html#68 unix
https://www.garlic.com/~lynn/2003e.html#13 unix
https://www.garlic.com/~lynn/2003e.html#15 unix
https://www.garlic.com/~lynn/2003e.html#16 unix
https://www.garlic.com/~lynn/2003e.html#17 unix
https://www.garlic.com/~lynn/2003e.html#18 unix
https://www.garlic.com/~lynn/2003e.html#20 unix
https://www.garlic.com/~lynn/2003f.html#10 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#3 Disk capacity and backup solutions
https://www.garlic.com/~lynn/2003g.html#58 40th Anniversary of IBM System/360
https://www.garlic.com/~lynn/2003h.html#32 IBM system 370
https://www.garlic.com/~lynn/2003h.html#56 The figures of merit that make mainframes worth the price
https://www.garlic.com/~lynn/2003i.html#3 A Dark Day
https://www.garlic.com/~lynn/2003i.html#4 A Dark Day
https://www.garlic.com/~lynn/2003i.html#6 A Dark Day
https://www.garlic.com/~lynn/2003i.html#53 A Dark Day
https://www.garlic.com/~lynn/2003j.html#54 June 23, 1969: IBM "unbundles" software
https://www.garlic.com/~lynn/2003l.html#11 how long does (or did) it take to boot a timesharing system?
https://www.garlic.com/~lynn/2003l.html#31 IBM Manuals from the 1940's and 1950's
https://www.garlic.com/~lynn/2003l.html#41 Secure OS Thoughts
https://www.garlic.com/~lynn/2003m.html#20 360 Microde Floating Point Fix
https://www.garlic.com/~lynn/2003n.html#22 foundations of relational theory? - some references for the
https://www.garlic.com/~lynn/2003n.html#24 Good news for SPARC
https://www.garlic.com/~lynn/2003n.html#46 What makes a mainframe a mainframe?
https://www.garlic.com/~lynn/2003o.html#52 Virtual Machine Concept
https://www.garlic.com/~lynn/2003p.html#30 Not A Survey Question
https://www.garlic.com/~lynn/2004.html#46 DE-skilling was Re: ServerPak Install via QuickLoad Product
https://www.garlic.com/~lynn/2004b.html#48 Automating secure transactions
https://www.garlic.com/~lynn/2004b.html#49 new to mainframe asm
https://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
https://www.garlic.com/~lynn/2004c.html#7 IBM operating systems
https://www.garlic.com/~lynn/2004c.html#11 40yrs, science center, feb. 1964
https://www.garlic.com/~lynn/2004c.html#39 Memory Affinity
https://www.garlic.com/~lynn/2004c.html#61 IBM 360 memory
https://www.garlic.com/~lynn/2004d.html#22 System/360 40th Anniversary
https://www.garlic.com/~lynn/2004g.html#28 Most dangerous product the mainframe has ever seen
https://www.garlic.com/~lynn/2004h.html#20 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004j.html#17 Wars against bad things
https://www.garlic.com/~lynn/2004l.html#51 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004m.html#53 4GHz is the glass ceiling?
https://www.garlic.com/~lynn/2004m.html#56 RISCs too close to hardware?

Shipwrecks

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Subject: Re: Shipwrecks
Date: Sun, 24 Oct 2004 17:07:02 -0700
mwojcik@newsguy.com (Michael Wojcik) wrote in message news:<cl10o10k5p@news4.newsguy.com>...
Even barring problems handling FIN_WAIT_x states and the like, HTTP/1.0's conversation-per-request was a terribly inefficient use of TCP. Besides conversation setup and teardown overhead, TCP's congestion-avoidance mechanisms prevent a conversation from reaching its optimal throughput immediately. A TCP conversation has to be used for a little while before windows are fully open and things are running full speed.

That's why HTTP/1.1 made persistent conversations mandatory for conforming implementations, and the default.


TCP has a minimum 7-packet exchange for a reliable session. vmtp (rfc 1045) defined a 5-packet exchange for a reliable session. XTP
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

defined a minimum 3-packet exchange for a reliable session .... as well as a bunch of other stuff for high-speed networking .... like rate-based pacing and other characteristics that would be conducive to protocol offload.

part of the issue is that there is some actual HTTP traffic that is truly transaction-like ... a single round-trip.

there is an intrinsic problem with window-based mechanisms being non-stable in large, real-world networks with bursty traffic.
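a minimal sketch of the rate-based pacing alternative mentioned above (the target rate and packet size are illustrative assumptions, not from any particular protocol spec): the sender spaces transmissions by a computed interval instead of waiting on window openings:

/* minimal sketch of rate-based pacing: compute an inter-packet interval
 * from a negotiated byte rate and space transmissions by that interval
 * rather than by window openings.  rate and packet size are illustrative. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    const double target_bytes_per_sec = 1.0e6;   /* 1 mbyte/sec */
    const double packet_bytes = 1500.0;

    double interval_sec = packet_bytes / target_bytes_per_sec;
    struct timespec gap = {
        .tv_sec  = 0,
        .tv_nsec = (long)(interval_sec * 1e9)
    };

    for (int i = 0; i < 5; i++) {
        printf("send packet %d (then wait %.3f ms)\n", i, interval_sec * 1e3);
        nanosleep(&gap, NULL);          /* pace the next transmission */
    }
    return 0;
}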

random past posts on rate-based pacing
https://www.garlic.com/~lynn/93.html#28 Log Structured filesystems -- think twice
https://www.garlic.com/~lynn/94.html#22 CP spooling & programming technology
https://www.garlic.com/~lynn/98.html#41 AADS, X9.59, & privacy
https://www.garlic.com/~lynn/99.html#33 why is there an "@" key?
https://www.garlic.com/~lynn/99.html#38c Internet and/or ARPANET?
https://www.garlic.com/~lynn/2000b.html#11 "Mainframe" Usage
https://www.garlic.com/~lynn/2000c.html#27 The first "internet" companies?
https://www.garlic.com/~lynn/2000c.html#31 The first "internet" companies?
https://www.garlic.com/~lynn/2000e.html#19 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2001c.html#79 Q: ANSI X9.68 certificate format standard
https://www.garlic.com/~lynn/2001h.html#44 Wired News :The Grid: The Next-Gen Internet?
https://www.garlic.com/~lynn/2001j.html#36 Proper ISA lifespan?
https://www.garlic.com/~lynn/2001m.html#41 Solutions to Man in the Middle attacks?
https://www.garlic.com/~lynn/2002.html#38 Buffer overflow
https://www.garlic.com/~lynn/2002i.html#44 Unisys A11 worth keeping?
https://www.garlic.com/~lynn/2002i.html#45 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#57 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002k.html#56 Moore law
https://www.garlic.com/~lynn/2002n.html#25 Help! Good protocol for national ID card?
https://www.garlic.com/~lynn/2002p.html#28 Western Union data communications?
https://www.garlic.com/~lynn/2002p.html#31 Western Union data communications?
https://www.garlic.com/~lynn/2003.html#55 Cluster and I/O Interconnect: Infiniband, PCI-Express, Gibat
https://www.garlic.com/~lynn/2003.html#59 Cluster and I/O Interconnect: Infiniband, PCI-Express, Gibat
https://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
https://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#8 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#54 Rewrite TCP/IP
https://www.garlic.com/~lynn/2003g.html#64 UT200 (CDC RJE) Software for TOPS-10?
https://www.garlic.com/~lynn/2003j.html#1 FAST - Shame On You Caltech!!!
https://www.garlic.com/~lynn/2003j.html#19 tcp time out for idle sessions
https://www.garlic.com/~lynn/2003j.html#46 Fast TCP
https://www.garlic.com/~lynn/2003o.html#29 Biometric cards will not stop identity fraud
https://www.garlic.com/~lynn/2003p.html#15 packetloss bad for sliding window protocol ?
https://www.garlic.com/~lynn/2004f.html#30 vm
https://www.garlic.com/~lynn/2004f.html#37 Why doesn't Infiniband supports RDMA multicast
https://www.garlic.com/~lynn/2004j.html#46 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004j.html#47 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004k.html#8 FAST TCP makes dialup faster than broadband?
https://www.garlic.com/~lynn/2004k.html#12 FAST TCP makes dialup faster than broadband?
https://www.garlic.com/~lynn/2004k.html#13 FAST TCP makes dialup faster than broadband?
https://www.garlic.com/~lynn/2004k.html#16 FAST TCP makes dialup faster than broadband?
https://www.garlic.com/~lynn/2004k.html#17 FAST TCP makes dialup faster than broadband?
https://www.garlic.com/~lynn/2004k.html#18 FAST TCP makes dialup faster than broadband?
https://www.garlic.com/~lynn/2004k.html#29 CDC STAR-100

Shipwrecks (dynamic linking)

Refed: **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: alt.folklore.computers,alt.os.multics
Subject: Re: Shipwrecks (dynamic linking)
Date: Mon, 25 Oct 2004 03:48:48 -0700
Olin Sibert wrote in message news:<DPmdnf4kyObTG-HcRVn-2A@rcn.net>...
Another important way (which Tom mentions briefly in point 4 below) that Multics dynamic linking was different from the DLLs of today is that every subroutine had, by design, a two-part name: segment name and entrypoint name. This was completely visible to the callers, providing a primitive (and unenforced) data hiding/object abstraction.

More to the point, however, it allowed link targets to be found at runtime purely by name: the dynamic linker would search for the segment by name (along a search path), then find the entry by name--rather than requiring the caller to know the "DLL name" in advance as is common today. There were no "stub libraries" or anything like that the linker had to search at compile time to figure out what library contains "getopt()"; instead, it was easy at runtime to find "cu_$arg_ptr" by first finding the "cu_" segment and then finding the "arg_ptr" entry.

This feature meant that if you were unhappy with a particular set of system libraries (like the several subroutines that were used to translate relative to absolute pathnames), you could get your desired behavior simply by putting your version of that particular library at the front of the search list, rather than having to re-build a library containing hundreds of other vaguely related functions.


CMS had a totally different variation on this ... it shared some common heritage with multics ... some of the people that worked on CTSS went to 4th floor 545tech sq
https://www.garlic.com/~lynn/subtopic.html#545tech

and some went to 5th floor to work on multics

everything (originally) in cms was a svc 202 call ... which went thru command lookup ... first a search for an EXEC (command script'ing file) in the various (minidisk) directories, then a search for a binary executable file in the various (minidisk) directories, and finally the name table of kernel routines (and, oh by the way ... along the way it also checked the abbreviation & synonym table). not only could you replace any kernel routine with a binary version ... you could also replace it with your own command scripting version
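a sketch of that lookup order (the tables and names here are made up for illustration, not actual cms code):

/* sketch of the lookup order described above: synonym/abbreviation table,
 * then EXEC script on accessed minidisks, then executable module, then
 * the kernel routine name table.  all names are hypothetical. */
#include <stdio.h>
#include <string.h>

static const char *resolve_synonym(const char *cmd)
{
    /* hypothetical abbreviation/synonym table */
    if (strcmp(cmd, "q") == 0) return "query";
    return cmd;
}

static int exec_exists(const char *cmd)    { return strcmp(cmd, "backup") == 0; }
static int module_exists(const char *cmd)  { return strcmp(cmd, "copyfile") == 0; }
static int kernel_routine(const char *cmd) { return strcmp(cmd, "query") == 0; }

static void dispatch(const char *raw)
{
    const char *cmd = resolve_synonym(raw);
    if (exec_exists(cmd))         printf("%s: run EXEC script\n", cmd);
    else if (module_exists(cmd))  printf("%s: load and run executable module\n", cmd);
    else if (kernel_routine(cmd)) printf("%s: call built-in kernel routine\n", cmd);
    else                          printf("%s: unknown command\n", cmd);
}

int main(void)
{
    dispatch("backup");    /* found as an EXEC, shadowing any binary/kernel name */
    dispatch("q");         /* synonym -> query -> kernel routine */
    return 0;
}

since the EXEC search comes first, a user's own script shadows a binary or kernel routine of the same name, which is what makes the replacement trick described above work.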

passing of iverson

Refed: **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Subject: passing of iverson
Date: Mon, 25 Oct 2004 17:01:54 -0700
there have been some number of posts this past weekend about ken iverson passing.

at the time cambridge picked up a copy of apl\360 to port to cms ... apl\360 (iverson, falkoff) was down at the phili science center.

apl\360 had the interpreter but it also included all the monitor stuff managing multi-tasking under os/360 ... swapping apl workspaces, controlling which workspaces ran, etc. an issue at the time was that allowed apl\360 workspace sizes were on the order of 16k to 32k bytes.

the cambridge port for cms\apl was to strip away everything but the interpreter to run on cms (with cp handling paging, multitasking, etc). One of the major issues was that under cms, typical workspace sizes were now on the order of 512kbytes (rather than 16kbytes) to several megabytes (rather than 32kbytes). The issues with the apl storage/garbage manager were two-fold: 1) the pathlength overhead of the storage management touching all possible storage in the workspace (regardless of program size) and 2) the excessive demands on the virtual paging system (from a storage management paradigm originally targeted at a real storage environment).
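a toy model of the paging effect (sizes are illustrative assumptions): an allocate-fresh-storage-on-every-assignment strategy touches a number of pages that scales with workspace size, not with the program's actual data:

/* toy model of the workspace behavior described above: a storage manager
 * that always allocates fresh storage sweeps through the whole workspace
 * before garbage-collecting, so the number of distinct pages touched
 * scales with workspace size.  sizes are illustrative. */
#include <stdio.h>

int main(void)
{
    const long page     = 4096;
    const long value_sz = 1024;              /* bytes per assigned value */
    const long ws_small = 32L * 1024;        /* apl\360-era workspace    */
    const long ws_big   = 512L * 1024;       /* cms\apl-era workspace    */

    printf("32kbyte workspace:  ~%ld assigns touch ~%ld pages before GC\n",
           ws_small / value_sz, ws_small / page);
    printf("512kbyte workspace: ~%ld assigns touch ~%ld pages before GC\n",
           ws_big / value_sz, ws_big / page);
    return 0;
}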

apl\360 had been used for a lot of modeling and applications that are typically done today with spread sheets. with the availability of cms\apl and "large" workspaces ... there was a lot of new applications that couldn't be done in the limited apl\360 workspace sizes. The science center
https://www.garlic.com/~lynn/subtopic.html#545tech

besides doing all sorts of science center type stuff ... as well as pushing the envelope on various technologies like networking
https://www.garlic.com/~lynn/subnetwork.html#internalnet

and other stuff like gml, online computing, etc ... also allowed some time-sharing access to the cambridge machine
https://www.garlic.com/~lynn/submain.html#timeshare

this included various students at MIT, Harvard, BU, etc.

However, one of the other organizations that started using the cambridge machine for using cms\apl for business modeling were the corporate hdqtrs business planning people ... who loaded the most sensitive corporate secrets into cms\apl workspaces on the cambridge machine.

Another evolving application in the early '70s was HONE time-sharing service
https://www.garlic.com/~lynn/subtopic.html#hone

... which started out providing various kinds of internal time-sharing services for field, sales, and marketing people. A set of sophisticated services were eventually deployed on cms\apl in support of the field, sales, and marketing people (initially in the US). One of the first HONE datacenters outside of the US was when EMEA (europe, middle east & africa) hdqtrs moved from the US to Paris. One of my first overseas trips after graduating and joining the science center was installing a copy of HONE at the (then) brand new EMEA hdqtrs location in brand new bldg in La Defense (they were still doing the landscaping outside). The next major HONE installation (outside the US) that I got to do, was in Tokyo for IBM Japan (eventually there were large & small HONE datacenter clones all over the world). For hdqtrs like operations, HONE APL apps tended to support not only the field, sales and marketing functions ... but also the (hdqtrs) business planning and forecasting functions.

In the late '70s, all the various US HONE data centers were consolidated in California ... and possibly the largest single system image time-sharing system (at the time, that was primarily offering cms-based apl services) was created. There was a front-end that handled load-balancing and fall-over across all the available machines in the complex.

Also by then, the Palo Alto Science Center had come out with APL\CMS, replacing Cambridge's CMS\APL. PASC also did the APL microcode performance assist for the 370/145 (giving 370/145 APL applications approx. the performance of non-assisted APL on a 370/168).

Also resolved by then was a major disagreement that Cambridge had caused in doing CMS\APL. In addition to porting APL\360 to CMS\APL and redoing the storage management for a virtual memory, paged environment, the other thing that Cambridge did in CMS\APL was invent the ability for APL to make system function calls (like doing file read/write). This created a lot of conflict with the APL purists since the semantics of system function calls violated the purity of APL. This was eventually resolved with the invention of APL shared variables in the APL\CMS time-frame ... where the shared variable implementation replaced the system function calls that had been created in CMS\APL.

somewhat evolution of APL performance models and benchmarking into capacity planning
https://www.garlic.com/~lynn/submain.html#bench

RS/6000 in Sysplex Environment

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Subject: Re: RS/6000 in Sysplex Environment
Date: Tue, 26 Oct 2004 17:27:57 -0700
ronhawkins@ibm-main.lst (Ron and Jenny Hawkins) wrote in message news:<200410260834.i9Q8YIJk027735@bama.ua.edu>...
Radoslaw,

HACMP (clam) is NOT parallel sysplex - but you knew that already. I'm not sure where Gerry has picked up the idea that RS/6000 can participate in Parallel Sysplex, but AIX has no mechanism with which to participate in a Sysplex with MVS.

Ron


modulo the fact that my wife did Peer-Coupled Shared Data architecture when she did her stint in POK in charge of loosely-coupled architecture ... which can be considered the basis that parallel sysplex is built on.
https://www.garlic.com/~lynn/submain.html#shareddata

we also did the ha/cmp project
https://www.garlic.com/~lynn/subtopic.html#hacmp

CL&M was the (initially) small group of former science center
https://www.garlic.com/~lynn/subtopic.html#545tech
and project athena people that we outsourced a lot of the implementation to.

C, L, and M are the initials of the principals ... sort of carrying on the science center tradition ... like compare&swap (CAS are charlie's initials; he invented compare&swap)
https://www.garlic.com/~lynn/subtopic.html#smp

and GML (morphed into SGML, HTML, XML, etc; "G", "M", and "L" the initials of the people from the science center that invented GML)
https://www.garlic.com/~lynn/submain.html#sgml

"C" had moved from the science center to gburg in the '60s ... and later headed up advanced interconnect portion of future systems
https://www.garlic.com/~lynn/submain.html#futuresys

and my wife reported to him on advanced interconnect. when FS was aborted, my wife and Moldow had a project to "re-invent" SNA (architecture) ... and they produced AWP39, peer-to-peer networking (which is reported to have caused some people in Raleigh to go on tranquilizers).

course hsdt didn't make them feel any better, either
https://www.garlic.com/~lynn/subnetwork.html#hsdt

By the time we started ha/cmp .... "C" had moved back to cambridge.

somewhat related & a little drift:
https://www.garlic.com/~lynn/2001i.html#21 3745 and SNI
https://www.garlic.com/~lynn/2004m.html#0 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004m.html#59 RISCs too close to hardware?

RS/6000 in Sysplex Environment

Refed: **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Subject: Re: RS/6000 in Sysplex Environment
Date: Wed, 27 Oct 2004 05:51:02 -0700
re:
https://www.garlic.com/~lynn/2004n.html#38

while we were doing ha/cmp ... we coined the terms disaster survivability and geographic survivability ...
https://www.garlic.com/~lynn/submain.html#available

and we were asked to write a section in the corporate continuous availability strategy document. note however, both rochester and POK non-concurred with what we had written (as not being able to be met by them ... at least at the time).

for some complete (availability) drift ... posts about having totally rewritten the i/o subsystem for the disk and product test engineering labs (bldgs. 14 & 15) so that they could operate multiple, concurrent testcells in an operating system environment (instead of serial, stand-alone testing) ... at a time when a single testcell operation (under MVS) had an MTBF of 15 minutes (for MVS)
https://www.garlic.com/~lynn/subtopic.html#disk

and for even more drift ... tie-in between ha/cmp and electronic commerce
https://www.garlic.com/~lynn/95.html#13
and
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

RS/6000 in Sysplex Environment

From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Subject: Re: RS/6000 in Sysplex Environment
Date: Wed, 27 Oct 2004 11:38:44 -0700
Kees.Vernooy@ibm-main.lst (Vernooy, C.P. - SPLXM) wrote in message news:<47EE53EF7017F7428C6C0BB264B1DD2503888293@x1fd014.ex.klm.nl>...
Gee Lynn,

Can you never give an answer without burying your audience under tons of evidence? ;-)

Kees.


some of it is for cc: to a.f.c. for the archeological references .... however, i thot i had lightened up ... since the actual body of the post didn't actually include the hundred or two referenced URLs/posts.

ref:
https://www.garlic.com/~lynn/2004n.html#38 RS/6000 in Sysplex Environment
https://www.garlic.com/~lynn/2004n.html#39 RS/6000 in Sysplex Environment

Multi-processor timing issue

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch,alt.folklore.computers
Subject: Re: Multi-processor timing issue
Date: Thu, 28 Oct 2004 11:56:03 -0700
jmfbahciv@aol.com wrote in message news:<wISdnQGa3ec3aR3cRVn-pQ@rcn.net>...
Now take a look at any system that kept track of runtime, KCS-seconds, and anything else that gave a system owner a handle on reproducible measurements to cross-charge. Those numbers were also used for performance measurements.

KCS is kilo-core-seconds.

/BAH


gnosis was a new capability-based, time-sharing service bureau operating system developed by tymshare. they had done a lot of work associated with updating various resource utilization measures whenever they crossed a capability boundary. at the time, i guessed that possibly as much as 30 percent of typical execution would involve updating the various resource utilization measures at capability crossings. The objective wasn't so much having an accurate charge against the consumer ... but being able to use the platform to offer various kinds of 3rd party packages ... where there was accurate remittance to the 3rd parties based on what was being charged the users on their behalf (aka there was much more overhead in the financial accounting associated with fine-grain capabilities than in enforcing fine-grain capabilities).

one of the transitions from gnosis to keykos was the elimination of all the fine-grain, capability oriented, resource utilization accounting ... and just being able to do really fast, really secure transactions.

misc random, recent gnosis or keykos references
https://www.garlic.com/~lynn/2004c.html#4 OS Partitioning and security
https://www.garlic.com/~lynn/2004e.html#27 NSF interest in Multics security
https://www.garlic.com/~lynn/2004m.html#29 Shipwrecks
https://www.garlic.com/~lynn/2004m.html#49 EAL5

somewhere in boxes i have an old gnosis document

Longest Thread Ever

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Subject: Re: Longest Thread Ever
Date: Fri, 29 Oct 2004 12:10:48 -0700
hawk@slytherin.ds.psu.edu (Dr. Richard E. Hawkins) wrote in message news:<cltk7l$1234$2@f04n12.cac.psu.edu>...
> Far too recent. For Heaven's sake, with a .us domain? For that
> matter, *with* a domain at all . . .
>
> Come to think of it, when were the major domains created? I know that
> it's post-usenet, but that's about all I know.


usenet predates the (1/1/83) cut-over to internetworking, internet protocol, gateways, DNS, etc.

little checking with my RFC index
https://www.garlic.com/~lynn/rfcietff.htm

name server (7/79) but no domain names yet

756
NIC name server - a datagram-based information utility, Feinler E., Mathis J., Pickens J., 1979/07/01 (11pp) (.txt=23491) (Refs 333, 608) (Ref'ed By 953)
...

domain name plan and schedule (date 11/83)

881
Domain names plan and schedule, Postel J., 1983/11/01 (10pp) (.txt=23490) (Updated by 897, 921) (Ref'ed By 897, 920, 921, 1032)

... above says that initially all the domain names will be ".ARPA" but as soon as practical a second domain name of ".DDN" will be added.
...

.. also dated 11/83

882
Domain names: Concepts and facilities, Mockapetris P., 1983/11/01 (31pp) (.txt=79776) (Obsoleted by 1034) (See Also 883) (Refs 742, 768, 793, 805, 810, 811, 812, 819, 821, 830, 870) (Ref'ed By 897, 915, 920, 921, 973, 1001, 1034, 1035, 1101, 1123, 3467)
...

domain name implementation schedule (dated 2/84)

897
Domain name system implementation schedule, Postel J., 1984/02/01 (8pp) (.txt=15683) (Updated by 921) (Updates 881) (Refs 881, 882, 883) (Ref'ed By 915, 920, 921)

Internet turns 35 today

From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Subject: Internet turns 35 today
Date: Sat, 30 Oct 2004 09:19:47 -0700
Internet Turns 35 Today
http://slashdot.org/articles/04/10/29/1836252.shtml?tid=95

ref:
https://www.garlic.com/~lynn/2004n.html#42 Longest Thread Ever

since there wasn't an internet before the 1/1/83 switch-over to internetworking protocol and the internet ... there also wasn't much of a need for domain name support (since there was a single domain). one of the characteristics of the internet and internetworking is having a non-homogeneous technical &/or business environment (requiring gateways and interconnects between multiple, different domains)

with a homogeneous environment and w/o multiple different domains ... there wasn't much of a requirement for domain naming.

misc. other historical references
https://www.garlic.com/~lynn/rfcietf.htm#history

misc. other posts
https://www.garlic.com/~lynn/internet.htm

Shipwrecks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Sat, 30 Oct 2004 11:56:48 -0600
jmfbahciv writes:
There was that. Another awesome aspect of IBM soft/hardware systems was size. JMF went on a prospective customer call in Hartford, CT and came back in a babbling state because the customer had one file that spanned (I want to say) 100 RP06s. TOPS-10 couldn't touch that with a lightyear pole. 100 disks seems wrong; my niggle wants to type 200 but it was huge.

there were claims in the early 80s of 300 (3330) drive database applications (at various customer sites) .... and in the early '90s, there were issues with applications for mainframe market having to regression test 300 (3380) drive configurations before initial shipment of products to customers.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Shipwrecks

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Sun, 31 Oct 2004 16:16:24 -0700
lynn writes:
there were claims in the early 80s of 300 (3330) drive database applications (at various customer sites) .... and in the early '90s, there were issues with applications for mainframe market having to regression test 300 (3380) drive configurations before initial shipment of products to customers.

so mainframes have these bus&tag channel cables ... thick and heavy ... somewhat smaller than fire hose. 360s had an aggregate limit of 200' on a single channel run. multiple device controllers (like disk controllers) could be daisy-chained on a single channel cable run that had a maximum aggregate length of 200'.

3880 controllers introduced support for 3mbyte/sec transfer and 400' channel cable runs. in the 360 world there was a channel handshake on every byte. data streaming went to multiple bytes transferred per handshake ... which tolerated higher end-to-end latency (allowing the 400' max. channel cable run) and enabled the 3mbyte/sec transfer rate.
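
a rough back-of-the-envelope sketch (my numbers, not from the original posts ... assumed ~2/3 c signal propagation in copper and one cable round trip per handshake) of why per-byte handshaking ties the achievable rate to cable length, and why streaming multiple bytes per handshake relaxes it:

# effective transfer rate when every handshake has to make a cable round
# trip (propagation speed and bytes/handshake are assumptions for
# illustration, not measurements)
FT_PER_M = 3.2808
PROP_M_PER_S = 2.0e8          # assumed ~2/3 c signal propagation

def rate_mb_per_sec(cable_ft, bytes_per_handshake):
    """effective MB/s ceiling if each handshake costs one cable round trip."""
    one_way_s = (cable_ft / FT_PER_M) / PROP_M_PER_S
    round_trip_s = 2 * one_way_s
    return (bytes_per_handshake / round_trip_s) / 1e6

# per-byte handshake at 200' vs. data streaming (say 8 bytes/handshake) at 400'
print(round(rate_mb_per_sec(200, 1), 1))   # ~1.6 MB/s ceiling
print(round(rate_mb_per_sec(400, 8), 1))   # ~6.6 MB/s ceiling -- cable length no longer the limit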

along the way, some ibm products started to see a requirement for product regression testing with 200-300 drive configurations (like really large dbms operations).

now there was recent mention of the 3090 increasing the maximum channel configuration to 128 channels. somewhat simplified ... imagine an area somewhat larger than a football field with four 3090s (each with possibly six smp processor engines). also in this large area are 128 3880 disk controllers. each 3880 has four channel (cable) interfaces, one going to each of the four 3090s. each 3880 might also have eight 3380 disk drives connected (1024 total drives in the space ... all of these are large refrigerator-sized cabinets).

so there are 128 channel cables coming into each 3090 ... imagine a panel that has 128 ports for nearly fire-hose-sized connections ... and trying to get 128 fire hoses all connected into a single relatively small connection "panel" physical area ... and there are four of these 3090s, each with 128 channel cable connections to the 128 3880 disk controllers.

the cable weight and just the physical space requirement for trying to connect a large number of such cables into a channel connection panel become major operational issues (as well as just the physical routing of all the cables around the space).
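
to put rough numbers on the cabling, a minimal sketch using just the counts in the description above:

# tally the cabling in the hypothetical floor layout described above
n_3090s         = 4
channels_each   = 128     # bus&tag channel connections per 3090
n_3880s         = 128     # disk controllers, one channel interface to each 3090
drives_per_3880 = 8       # 3380 drives per controller

channel_cables   = n_3090s * channels_each      # 512 heavy bus&tag cable runs
controller_ports = n_3880s * n_3090s            # 512 controller-side connections
total_drives     = n_3880s * drives_per_3880    # 1024 3380 drives

print(channel_cables, controller_ports, total_drives)   # 512 512 1024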

random cisco page discussing mainframe channel attachment issues:
http://www.cisco.com/warp/public

there had been a fiber-optic channel definition lying around in POK since the late '70s. One of the 6000 architecture engineers took the specification and did some tweaks: upped the transfer rate by about 10 percent, used much less expensive optical drivers, supported full-duplex operation, etc. This was released with the rs/6000 as the serial link adapter (SLA). This is the rs/6000 version of what was released on the IBM mainframe as the ESCON (half-duplex) channel.

The engineer then started working on an 800mbit version of SLA. It took us possibly 4-6 months, but we finally convinced him to work on the FCS standard instead. He joined the FCS standards committee and became the FCS document editor.

There was some amount of contention with the mainframe channel people working on the FCS standard ... since they were frequently attempting to bias the FCS standard in the direction of the half-duplex mainframe channel paradigm. FCS is natively full-duplex (or dual simplex) ... and there are actually quite a few serialization issues in attempting to layer a half-duplex paradigm on top of a native full-duplex infrastructure.

search engine html version of a share.org pdf report comparing bus&tag, escon, ficon, etc:
http://216.239.41.104/search?q=cache:2m-mv0asoZwJ:www.share.org/proceedings/sh94/data/S3635.PDF+%2B%22bus+%26+tag%22+%2Bchannel+%2BIBM+%2Bshare.org&hl=en

one of the issues with escon uptake taking such a long time was that the increased data transfer rate didn't offer a lot ... the effective rate was limited by the actual device transfers, the controller speed, and the half-duplex handshaking overhead between the channel interfaces and the controllers ... recent posting
https://www.garlic.com/~lynn/2004n.html#15 360 longevity, was RISCs too close to hardware?

a big issue for escon uptake was the enormous problem of managing a really large number of bus&tag cables and connectors in the 3090 time-frame ... and the drastic reduction in space & difficulty that fiber-optic connectors represented. ...

and some more topic drift ... lots of ckd disk/dasd postings:
https://www.garlic.com/~lynn/submain.html#dasd

even more drift, a numbers of disk engineering lab & product test lab postings
https://www.garlic.com/~lynn/subtopic.html#disk

and still more drift, some connection between FCS scale-up
https://www.garlic.com/~lynn/95.html#13 SSA
and ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

with additional connection to electronic commerce
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

and of course, electronic commerce requires various interactions with networking
https://www.garlic.com/~lynn/internet.htm
https://www.garlic.com/~lynn/subnetwork.html#hsdt

gml, sgml, html, xml, etc
https://www.garlic.com/~lynn/submain.html#sgml

and frequently dbms
https://www.garlic.com/~lynn/submain.html#systemr

misc. data streaming posts (increasing the number of bytes transferred per end-to-end, half-duplex channel handshake):
https://www.garlic.com/~lynn/96.html#5 360 "channels" and "multiplexers"?
https://www.garlic.com/~lynn/2000b.html#38 How to learn assembler language for OS/390 ?
https://www.garlic.com/~lynn/2000c.html#75 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2001h.html#28 checking some myths.
https://www.garlic.com/~lynn/2002e.html#7 Bus & Tag, possible length/distance?
https://www.garlic.com/~lynn/2002f.html#7 Blade architectures
https://www.garlic.com/~lynn/2002g.html#33 ESCON Distance Limitations - Why ?
https://www.garlic.com/~lynn/2002m.html#73 VLSI and "the real world"
https://www.garlic.com/~lynn/2004d.html#65 System/360 40 years old today

misc. sla, fcs, ficon, etc. postings
https://www.garlic.com/~lynn/94.html#16 Dual-ported disks?
https://www.garlic.com/~lynn/94.html#17 Dual-ported disks?
https://www.garlic.com/~lynn/96.html#5 360 "channels" and "multiplexers"?
https://www.garlic.com/~lynn/96.html#15 tcp/ip
https://www.garlic.com/~lynn/97.html#5 360/44 (was Re: IBM 1130 (was Re: IBM 7090--used for business or
https://www.garlic.com/~lynn/98.html#30 Drive letters
https://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP?
https://www.garlic.com/~lynn/98.html#49 Edsger Dijkstra: the blackest week of his professional life
https://www.garlic.com/~lynn/99.html#54 Fault Tolerance
https://www.garlic.com/~lynn/2000c.html#22 Cache coherence [was Re: TF-1]
https://www.garlic.com/~lynn/2000c.html#56 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#59 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#68 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000d.html#14 FW: RS6000 vs IBM Mainframe
https://www.garlic.com/~lynn/2000f.html#31 OT?
https://www.garlic.com/~lynn/2001.html#12 Small IBM shops
https://www.garlic.com/~lynn/2001.html#18 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001.html#46 Small IBM shops
https://www.garlic.com/~lynn/2001c.html#69 Wheeler and Wheeler
https://www.garlic.com/~lynn/2001e.html#22 High Level Language Systems was Re: computer books/authors (Re: FA:
https://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
https://www.garlic.com/~lynn/2001j.html#23 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#5 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#22 ESCON Channel Limits
https://www.garlic.com/~lynn/2001k.html#24 HP Compaq merger, here we go again.
https://www.garlic.com/~lynn/2001l.html#14 mainframe question
https://www.garlic.com/~lynn/2001m.html#25 ESCON Data Transfer Rate
https://www.garlic.com/~lynn/2001m.html#56 Contiguous file system
https://www.garlic.com/~lynn/2002.html#10 index searching
https://www.garlic.com/~lynn/2002d.html#7 IBM Mainframe at home
https://www.garlic.com/~lynn/2002e.html#5 What goes into a 3090?
https://www.garlic.com/~lynn/2002e.html#7 Bus & Tag, possible length/distance?
https://www.garlic.com/~lynn/2002e.html#26 Crazy idea: has it been done?
https://www.garlic.com/~lynn/2002e.html#31 Hardest Mistake in Comp Arch to Fix
https://www.garlic.com/~lynn/2002e.html#32 What goes into a 3090?
https://www.garlic.com/~lynn/2002f.html#6 Blade architectures
https://www.garlic.com/~lynn/2002f.html#7 Blade architectures
https://www.garlic.com/~lynn/2002f.html#11 Blade architectures
https://www.garlic.com/~lynn/2002f.html#60 Mainframes and "mini-computers"
https://www.garlic.com/~lynn/2002g.html#33 ESCON Distance Limitations - Why ?
https://www.garlic.com/~lynn/2002h.html#78 Q: Is there any interest for vintage Byte Magazines from 1983
https://www.garlic.com/~lynn/2002j.html#30 Weird
https://www.garlic.com/~lynn/2002j.html#78 Future interconnects
https://www.garlic.com/~lynn/2002k.html#42 MVS 3.8J and NJE via CTC
https://www.garlic.com/~lynn/2002l.html#13 notwork
https://www.garlic.com/~lynn/2002m.html#20 A new e-commerce security proposal
https://www.garlic.com/~lynn/2002m.html#73 VLSI and "the real world"
https://www.garlic.com/~lynn/2002n.html#50 EXCP
https://www.garlic.com/~lynn/2002o.html#11 Home mainframes
https://www.garlic.com/~lynn/2002q.html#40 ibm time machine in new york times?
https://www.garlic.com/~lynn/2003b.html#29 360/370 disk drives
https://www.garlic.com/~lynn/2003d.html#37 Why only 24 bits on S/360?
https://www.garlic.com/~lynn/2003d.html#57 Another light on the map going out
https://www.garlic.com/~lynn/2003h.html#0 Escon vs Ficon Cost
https://www.garlic.com/~lynn/2003h.html#3 Calculations involing very large decimals
https://www.garlic.com/~lynn/2003j.html#2 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2003m.html#20 360 Microde Floating Point Fix
https://www.garlic.com/~lynn/2003o.html#54 An entirely new proprietary hardware strategy
https://www.garlic.com/~lynn/2003o.html#64 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2004b.html#12 pointless embedded systems
https://www.garlic.com/~lynn/2004d.html#6 Memory Affinity
https://www.garlic.com/~lynn/2004d.html#68 bits, bytes, half-duplex, dual-simplex, etc
https://www.garlic.com/~lynn/2004h.html#29 BLKSIZE question
https://www.garlic.com/~lynn/2004j.html#19 Wars against bad things

lots of past posts mentioning 3090s:
https://www.garlic.com/~lynn/95.html#3 What is an IBM 137/148 ???
https://www.garlic.com/~lynn/98.html#34 ... cics ... from posting from another list
https://www.garlic.com/~lynn/99.html#36 why is there an "@" key?
https://www.garlic.com/~lynn/99.html#61 Living legends
https://www.garlic.com/~lynn/99.html#108 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
https://www.garlic.com/~lynn/99.html#123 Speaking of USB ( was Re: ASR 33 Typing Element)
https://www.garlic.com/~lynn/99.html#181 Merced Processor Support at it again
https://www.garlic.com/~lynn/2000.html#90 Ux's good points.
https://www.garlic.com/~lynn/2000b.html#37 How to learn assembler language for OS/390 ?
https://www.garlic.com/~lynn/2000b.html#38 How to learn assembler language for OS/390 ?
https://www.garlic.com/~lynn/2000b.html#50 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#65 oddly portable machines
https://www.garlic.com/~lynn/2000c.html#5 TF-1
https://www.garlic.com/~lynn/2000c.html#61 TF-1
https://www.garlic.com/~lynn/2000c.html#76 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000d.html#21 S/360 development burnout?
https://www.garlic.com/~lynn/2001.html#62 California DMV
https://www.garlic.com/~lynn/2001b.html#28 So long, comp.arch
https://www.garlic.com/~lynn/2001b.html#56 Why SMP at all anymore?
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001g.html#33 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001h.html#2 Alpha: an invitation to communicate
https://www.garlic.com/~lynn/2001h.html#44 Wired News :The Grid: The Next-Gen Internet?
https://www.garlic.com/~lynn/2001j.html#13 Parity - why even or odd (was Re: Load Locked (was: IA64 running out of steam))
https://www.garlic.com/~lynn/2001j.html#20 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#4 hot chips and nuclear reactors
https://www.garlic.com/~lynn/2001k.html#7 hot chips and nuclear reactors
https://www.garlic.com/~lynn/2001k.html#73 Expanded Storage?
https://www.garlic.com/~lynn/2001l.html#14 mainframe question
https://www.garlic.com/~lynn/2001m.html#25 ESCON Data Transfer Rate
https://www.garlic.com/~lynn/2001n.html#58 Certificate Authentication Issues in IE and Verisign
https://www.garlic.com/~lynn/2001n.html#79 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2002.html#11 The demise of compaq
https://www.garlic.com/~lynn/2002b.html#3 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002b.html#32 First DESKTOP Unix Box?
https://www.garlic.com/~lynn/2002d.html#7 IBM Mainframe at home
https://www.garlic.com/~lynn/2002d.html#8 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2002d.html#10 IBM Mainframe at home
https://www.garlic.com/~lynn/2002e.html#5 What goes into a 3090?
https://www.garlic.com/~lynn/2002e.html#19 What goes into a 3090?
https://www.garlic.com/~lynn/2002e.html#20 What goes into a 3090?
https://www.garlic.com/~lynn/2002e.html#26 Crazy idea: has it been done?
https://www.garlic.com/~lynn/2002e.html#32 What goes into a 3090?
https://www.garlic.com/~lynn/2002e.html#42 What goes into a 3090?
https://www.garlic.com/~lynn/2002e.html#46 What goes into a 3090?
https://www.garlic.com/~lynn/2002f.html#7 Blade architectures
https://www.garlic.com/~lynn/2002f.html#11 Blade architectures
https://www.garlic.com/~lynn/2002f.html#28 Security Issues of using Internet Banking
https://www.garlic.com/~lynn/2002f.html#35 Security and e-commerce
https://www.garlic.com/~lynn/2002f.html#41 Blade architectures
https://www.garlic.com/~lynn/2002f.html#42 Blade architectures
https://www.garlic.com/~lynn/2002f.html#60 Mainframes and "mini-computers"
https://www.garlic.com/~lynn/2002g.html#44 ibm icecube -- return of watercooling?
https://www.garlic.com/~lynn/2002h.html#19 PowerPC Mainframe?
https://www.garlic.com/~lynn/2002i.html#12 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#43 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#79 Fw: HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002j.html#63 SSL integrity guarantees in abscense of client certificates
https://www.garlic.com/~lynn/2002j.html#67 Total Computing Power
https://www.garlic.com/~lynn/2002l.html#7 What is microcode?
https://www.garlic.com/~lynn/2002l.html#10 What is microcode?
https://www.garlic.com/~lynn/2002l.html#36 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002m.html#0 Handling variable page sizes?
https://www.garlic.com/~lynn/2002m.html#26 Original K & R C Compilers
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002n.html#59 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002o.html#17 PLX
https://www.garlic.com/~lynn/2002o.html#25 Early computer games
https://www.garlic.com/~lynn/2002o.html#51 E-mail from the OS-390 ????
https://www.garlic.com/~lynn/2002p.html#12 Cirtificate Authorities 'CAs', how curruptable are they to
https://www.garlic.com/~lynn/2002p.html#40 Linux paging
https://www.garlic.com/~lynn/2002q.html#27 Beyond 8+3
https://www.garlic.com/~lynn/2002q.html#53 MVS History
https://www.garlic.com/~lynn/2003.html#1 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2003.html#14 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#34 Calculating expected reliability for designed system
https://www.garlic.com/~lynn/2003.html#67 3745 & NCP Withdrawl?
https://www.garlic.com/~lynn/2003b.html#29 360/370 disk drives
https://www.garlic.com/~lynn/2003b.html#47 send/recv vs. raw RDMA
https://www.garlic.com/~lynn/2003c.html#63 Re : OT: One for the historians - 360/91
https://www.garlic.com/~lynn/2003c.html#66 FBA suggestion was Re: "average" DASD Blocksize
https://www.garlic.com/~lynn/2003c.html#77 COMTEN- IBM networking boxes
https://www.garlic.com/~lynn/2003d.html#33 Why only 24 bits on S/360?
https://www.garlic.com/~lynn/2003d.html#35 Why only 24 bits on S/360?
https://www.garlic.com/~lynn/2003d.html#37 Why only 24 bits on S/360?
https://www.garlic.com/~lynn/2003d.html#53 Reviving Multics
https://www.garlic.com/~lynn/2003d.html#54 Filesystems
https://www.garlic.com/~lynn/2003f.html#20 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#22 303x, idals, dat, disk head settle, and other rambling folklore
https://www.garlic.com/~lynn/2003g.html#68 IBM zSeries in HPC
https://www.garlic.com/~lynn/2003h.html#0 Escon vs Ficon Cost
https://www.garlic.com/~lynn/2003h.html#6 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003i.html#5 Name for this early transistor package?
https://www.garlic.com/~lynn/2003i.html#53 A Dark Day
https://www.garlic.com/~lynn/2003j.html#2 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2003j.html#42 Flash 10208
https://www.garlic.com/~lynn/2003k.html#26 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2003k.html#48 Who said DAT?
https://www.garlic.com/~lynn/2003l.html#12 Why are there few viruses for UNIX/Linux systems?
https://www.garlic.com/~lynn/2003l.html#56 Offshore IT ... again
https://www.garlic.com/~lynn/2003m.html#20 360 Microde Floating Point Fix
https://www.garlic.com/~lynn/2003n.html#40 Cray to commercialize Red Storm
https://www.garlic.com/~lynn/2003o.html#54 An entirely new proprietary hardware strategy
https://www.garlic.com/~lynn/2003p.html#37 The BASIC Variations
https://www.garlic.com/~lynn/2003p.html#41 comp.arch classic: the 10-bit byte
https://www.garlic.com/~lynn/2003p.html#45 Saturation Design Point
https://www.garlic.com/~lynn/2003p.html#46 comp.arch classic: the 10-bit byte
https://www.garlic.com/~lynn/2004.html#7 Dyadic
https://www.garlic.com/~lynn/2004.html#9 Dyadic
https://www.garlic.com/~lynn/2004.html#10 Dyadic
https://www.garlic.com/~lynn/2004d.html#75 DASD Architecture of the future
https://www.garlic.com/~lynn/2004e.html#2 Expanded Storage
https://www.garlic.com/~lynn/2004e.html#4 Expanded Storage
https://www.garlic.com/~lynn/2004e.html#28 The attack of the killer mainframes
https://www.garlic.com/~lynn/2004e.html#33 The attack of the killer mainframes
https://www.garlic.com/~lynn/2004e.html#42 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004h.html#0 Adventure game (was:PL/? History (was Hercules))
https://www.garlic.com/~lynn/2004h.html#2 Adventure game (was:PL/? History (was Hercules))
https://www.garlic.com/~lynn/2004j.html#19 Wars against bad things
https://www.garlic.com/~lynn/2004j.html#57 Monster(ous) sig (was Re: Vintage computers are better
https://www.garlic.com/~lynn/2004l.html#2 IBM 3090 : Was (and fek that) : Re: new computer kits
https://www.garlic.com/~lynn/2004l.html#17 IBM 3090 : Was (and fek that) : Re: new computer kits
https://www.garlic.com/~lynn/2004n.html#10 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#15 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
https://www.garlic.com/~lynn/2004n.html#29 Is Fast Path headed nowhere?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

ARP Caching

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ARP Caching
Newsgroups: comp.protocols.tcp-ip
Date: Sun, 31 Oct 2004 16:49:53 -0700
"Tillmann Basien" <tab@basien n0spam.de> writes:
Possibly one of the Gurus can help me.

I have two servers (S1,S2), each with an IP Interface (IP1, IP2). I will setup on S1 an addional IP Interface IP3 (mac is the same as IP1). This interface can be reached by any clients.

Now I remove this interface from S1 and set it up on S2 (now mac of IP3 is same as IP2). IP3 is not reachable. As soon as I clear the arp cache on the client, I can reach this interface again.

Is there a way to inform the Client and the router between by an command from S1 or S2, that the mac address of IP3 hast changed?

I use Solrais and Linux for S1 and S2. I am look for a server side solution.


we encountered a performance enhancement in reno4.3 (long ago and far away) ... the specification is that entries in the arp cache periodically time out. the reno4.3 tcp/ip code had a special hip-pocket arp value ... it would save the ip address & mac address that it got back from the arp call. the next time in, if the ip address was the same as the previous call ... it would skip the arp cache call and re-use the previously returned mac address (never calling the arp cache code). if you had a client that constantly used the same ip address over a long period of time ... you could find that the corresponding mac address appeared to never go away (even if the basic arp cache code was correct). an example might be a client that always talked to the same server ... or always talked to everything thru the same gateway.

the resolution was to send some gratuitous ip packets from different ip addresses to force the client code to call the arp cache code for a different ip address (and possibly have to do real arp bits).
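
a minimal sketch of the same general trick on a current system (assumptions: linux AF_PACKET raw sockets, run as root, placeholder interface/mac/ip ... not the actual ha/cmp ip-takeover code) ... sending a gratuitous arp so clients and routers refresh their cached mac for the moved address:

import socket, struct

def gratuitous_arp(iface, src_mac, ip):
    # build a broadcast arp "request" whose sender ip == target ip (gratuitous)
    mac = bytes.fromhex(src_mac.replace(":", ""))
    ip4 = socket.inet_aton(ip)
    bcast = b"\xff" * 6
    eth = bcast + mac + struct.pack("!H", 0x0806)        # ethernet header, ethertype ARP
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)      # htype, ptype, hlen, plen, op=request
    arp += mac + ip4 + bcast + ip4                       # sender mac/ip, target mac/ip
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
    s.bind((iface, 0))
    s.send(eth + arp)
    s.close()

# e.g. after moving the service address to this host:
# gratuitous_arp("eth0", "02:00:00:aa:bb:cc", "192.0.2.10")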

this was long ago & far away during the early days of ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

and managing to do ip-takeover.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Shipwrecks

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Sun, 31 Oct 2004 18:19:13 -0700
jmfbahciv writes:
IIRC, he was doing this research in early 80s. He and CDO were writing a TPS (transaction proccessing system) architect spec. Life insurance companies had piles of data and they put it all in one file.

besides the various dbms ... there are also various kinds of (really, really large) bdam and vsam files used for large transaction systems.

about 10 years ago, visited NLM ... and they were still running a BDAM implementation that had been done in the late '60s. In the early '80s, the NLM had already reached the point where some of the (web) search engines were getting in the late '90s or early 00s; boolean searches became quite bi-modal at around 5-8 boolean search terms ... switching from hundreds of thousands of responses/hits to zero hits. the holy grail was to find a search that resulted in more than zero and possibly fewer than one hundred.

an apple application, *grateful med*, was developed in the early 80s to try and deal with the problem; queries were submitted and just the number of hits was obtained (not the actual hits themselves). query strategies were saved ... and the objective was to use the *grateful med* interface to discover a search strategy that resulted in a reasonable number of hits. couple grateful med refs ...
http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=8955707&dopt=Abstract
http://nnlm.gov/ner/nesl/9707/debate.html
http://www.library.ucla.edu/libraries/biomed/update/mar01/igm.html
http://www.frame-uk.demon.co.uk/guide/grateful_med.htm
http://www-nmcp.med.navy.mil/library/medcomp.asp

article abstracts were indexed by possibly 80-some different categories (title, author, date, subject, keywords, etc). a specific index (like an author's name) would be a record containing a list of all the bdam record indexes for the respective abstracts (effectively the electronic equivalent of a card catalogue entry). boolean searches were performed by doing unions and intersections (ORs and ANDs) of the various retrieved bdam record index lists.
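
a minimal sketch (hypothetical record numbers, not the actual bdam layout) of that AND/OR combination of retrieved index lists:

# each index entry (author, subject, keyword, ...) is a list of bdam record
# numbers for the matching abstracts; boolean search is set union (OR) and
# intersection (AND) of those lists
index = {                         # hypothetical, tiny example
    "author:smith":   {101, 205, 318, 422},
    "subject:renal":  {205, 318, 500, 611, 777},
    "keyword:biopsy": {318, 611, 900},
}

def search_and(*terms):
    return sorted(set.intersection(*(index[t] for t in terms)))

def search_or(*terms):
    return sorted(set.union(*(index[t] for t in terms)))

print(search_and("author:smith", "subject:renal"))                    # [205, 318]
print(search_and("author:smith", "subject:renal", "keyword:biopsy"))  # [318] ... one more AND term and it's likely zero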

some discussion of NLM, retiring the (really old) mainframe NLM, etc
http://www.urologytimes.com/urologytimes/article/articleDetail.jsp?id=3122

following from above:
Broader access to the vast resources of the NLM will make our work easier in some ways. But now, instead of bringing in 20 pages of Medline search results, our patients may come in with 100 pages of printouts from multiple NLM databases! There may need to be a new fee code for dealing with this information overload, perhaps something similar to the "counseling" fee codes that exist in some insurance plans.

other nlm references:
http://www.virtualmed.netfirms.com/internethealth/internetpract0203.html

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

First single chip 32-bit microprocessor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: First single chip 32-bit microprocessor
Newsgroups: alt.folklore.computers
Date: Mon, 01 Nov 2004 06:13:23 -0700
Brian Inglis writes:
The IBM S/360/370 series were never 32 bit clean, as the architected call instruction BALR saved the condition codes in the top 8 bits of the link register, and the return address in the low 24 bits. It was only much later (4300/308x/3090?) that BASR was defined to not save the CC bits in the register, although whether it saved 24 or 32 bit addresses I don't know: neither I nor anyone else I knew used BASR instead of BALR.

note 360/67 had both 24bit and 32bit addressing modes (options) ... and a 32bit BASR. the (much) later 3081 (370-xa) introduced 31bit addressing mode (and re-introduced basr).
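
a small sketch of the 32-bit-clean issue being discussed ... in 24bit mode a BALR-style link register carries status in the high-order byte (per the description quoted above), so address arithmetic has to mask it off:

# in 24-bit mode the high-order byte of a BALR-saved link register holds
# status (per the quoted description), so only the low 24 bits are a usable
# address; in later 31-bit mode only the high-order addressing-mode bit
# has to be dropped
def link_to_address(linkreg, amode24=True):
    if amode24:
        return linkreg & 0x00FFFFFF    # drop the status byte
    return linkreg & 0x7FFFFFFF        # drop only the high-order mode bit

print(hex(link_to_address(0x7A012345)))                  # 0x12345
print(hex(link_to_address(0xFA012345, amode24=False)))   # 0x7a012345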

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Alive and kicking?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alive and kicking?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 01 Nov 2004 08:12:22 -0700
IBM-MAIN@ibm-main.lst (Phil Payne) writes:
My web site was comprehensively trounced by an IBM server on Monday, downloading every page at least twice. I'm not sure why , since at least two other IBM systems maintain complete copies. And it cost me - since I pay for the bandwidth from the site. Around 12 Euros. Again.

But the fun bit is the name of the server - "blueice1n1.de.ibm.com".

I haven't heard a peep about "Blue Ice" since I sent in a fairly damning critique after an NDA at Almaden a couple of years back. It lives?


i've been getting regular & frequent hits from various de.ibm.com blueice host names starting mid-2002.

i did archive a posting on my website last week
https://www.garlic.com/~lynn/2004n.html#15 360 longevity, was RISCs too close to hardware?

that contained a url to
https://web.archive.org/web/20050207232931/http://www.isham-research.com/chrono.html

... on the other hand ... i'm constantly getting hit from almaden also.

there are sporadic mainframe history postings archived ... recent sample:
https://www.garlic.com/~lynn/2004n.html#10 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
https://www.garlic.com/~lynn/2004n.html#29 Is Fast Path headed nowhere?
https://www.garlic.com/~lynn/2004n.html#31 PCIe as a chip-to-chip interconnect
https://www.garlic.com/~lynn/2004n.html#44 Shipwrecks
https://www.garlic.com/~lynn/2004n.html#45 Shipwrecks
https://www.garlic.com/~lynn/2004n.html#48 First single chip 32-bit microprocessor

the archived postings tend to have a fairly dense ratio of URLs to text ... both URLs referencing past posts and past posts having URLs to newer posts that reference them.

and the merged taxonomies and glossaries have extremely dense URLs
https://www.garlic.com/~lynn/index.html#glosnotes

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 01 Nov 2004 14:54:54 -0700
old_systems_guy@yahoo.com (John Mashey) writes:
One more time: *real* computer designers don't build CPUs with tons of extra bits for fun. Sometimes they support less bits than they should, for transient reasons [of which the 24-bit addressing of the S/360s, done for the low-end 360/30] was one of the more unfortunate, although repeated, with less damage, in the MC68000. The PDP-11 designers miss-guessed RAM progress, and the 16-bit limits were really not very fixable. Designers mostly try to anticipate progress and provide enough bits to last a reasonable length of time. As I've noted before, at least in the transition to 64-bits, people were carefully in how they implemented partial TLBs to avoid the S/360/MC68000 problem.

3033 had a hack ... virtual and real addressing was 24bits (16mbytes) ... however the pte, page table entry (16 bits), had room to spare: 12 bits for mapping 4096 4k real pages (12+12=24), two flag bits, and two unused bits.

3033 added TLB support, PTE support, and effective real address support for 26bits (64mbytes, 16384 4k pages, although I believe only 32mbyte configurations ever shipped) .... instructions were still limited to 24bit addresses .... but going thru the TLB ... you could come out with a 26bit real address. they were also able to leverage the fact that IDALs (introduced with the original 370) already had a field for 31bit addresses for i/o transfers.
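
a minimal sketch of the arithmetic (the exact pte bit positions are my assumption for illustration, not the real 3033 layout):

# the 16-bit PTE has 12 bits of page frame number plus 2 flag bits and 2
# previously-unused bits; treating those 2 extra bits as high-order frame
# bits gives 14-bit frame numbers, i.e. 14 + 12 = 26-bit real addresses
# reached from 24-bit virtual ones
PAGE_SHIFT = 12                                # 4k pages

def real_address(frame_low12, extra2, byte_offset):
    frame = (extra2 << 12) | frame_low12       # 14-bit real frame number
    return (frame << PAGE_SHIFT) | byte_offset

top = real_address(0xFFF, 0b11, 0xFFF)
print(hex(top), top == 2**26 - 1)              # 0x3ffffff True -> 64mbyte real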

... problem was that MVS was exploding with real storage requirements ... which needed more real memory ... and attempts were being made to leverage real memory to offset/reduce i/o activity.

also, six 4341s were coming in at about the price of a 3033, and you could get 6*16mbyte = 96mbyte aggregate real memory ... about 6*1mip = 6mips aggregate, and 36 aggregate channels ... compared to about 4.5mips, 16mbytes, and 16 channels for a 3033.

the 3081 finally showed up with 370/xa and 31-bit addressing ... however note that the 360/67 (the only 360 with virtual memory support as standard) had 32-bit addressing in the 60s.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

CKD Disks?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CKD Disks?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 01 Nov 2004 14:57:14 -0700
Julian@ibm-main.lst (Julian Levens) writes:
When I started working on mainframes back in 1987, I was immediately baffled by the need to allocate datasets in advance with various attributes specified up front. The micro and mini computers I had been exposed to before that, had, what I would now know as, a fixed block architecture (FBA). This allowed much simpler management of datasets and file storage, eg copying any dataset could always be achieved with a simple 'copy source dest' style of command, no need to match attributes or worry about diffferent disk architectures.

Nobody at my first employer seemed to know what this was all about, or even piqued their curiosity. My curiosity has never been able to discover a complete understanding of CKD (and EKCD) disks:

1 Their architecture 2 Their record orientation? 3 The apparent, it seems to me, re-formatting of tracks/cyls when allocating room for a dataset 4 What were the perceived (theoretical?) advantages of their design.

Links to documentation, preferably a good overview and/or explanations and comments most welcome.


misc. past postings about ckd disks
https://www.garlic.com/~lynn/submain.html#dasd

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

CKD Disks?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CKD Disks?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 01 Nov 2004 15:33:11 -0700
bblack@ibm-main.lst (Bruce Black) writes:
Yes, DOS and VM did (and probably still do) support FBA disks, but not the same FBA that is used on PC and Unix systems (and internally in moderm CKD subsystems). The last IBM FBA model for mainframes was called a 3370. I think that EMC subsystems still have an option to emulate them.

FYI, the 3375 (precedesor of the 3380) was essentially the CKD version of the 3370.


note that the vm & cms paradigm has always been a logical fixed block architecture ... even when using CKD disks. when 3310s & 3370s showed up, it was an easy mapping of the logical blocks to real FBA. 3375 was ckd emulation on a real 3370.

the problem in the OS has been the use of multi-track search for things like vtocs and PDS directories. this could be considered a trade-off between i/o bandwidth and real storage .... the multi-track search allowed finding records out on disk ... with expensive use of i/o bandwidth ... but little or no real storage (for things like cached indexes). vm & cms kept indexed and bit-map structures in memory that allowed pointing directly at records.

the use of ckd multi-track search was a reasonable trade-off in the mid-60s with extremely limited real storage and relatively available i/o bandwidth; however, at least a little after the mid-70s (slightly more than 10 years later) ... the trade-offs had reversed ... typical memory sizes were increasing dramatically faster than i/o capacity ... and it became optimal to keep in-memory indexes and totally avoid multi-track searches.

In the late 70s, SJR had a split MVS & VM configuration (two 370s, with mvs & vm each having its own dedicated processor) with interconnected shared dasd ... however there was a strong operational advisory that MVS packs were *NEVER* mounted on nominal VM dasd strings.

One morning, cms users started phoning into the datacenter complaining that cms response had gone all to pieces. After some amount of diagnosis, it was determined that a new operator had accidentally mounted an *MVS* pack on a nominal VM dasd string ... and the normal, conventional MVS i/o sequences were having disastrous effects on normal cms response.

The CMS users immediately demanded the MVS pack be removed. The MVS staff declined, claiming it would interrupt some job in progress. The VM staff then loaded a highly-optimized (for running under VM) VS1 operating system ... placed its packs on MVS dasd strings and turned loose some well-chosen multi-track search i/o sequences ... which brought the MVS system to a standstill (which quickly improved the CMS response). At that point, the MVS staff gladly agreed to move their offending pack if we brought down the virtual VS1.

One of the interesting characteristics is that TSO users suffer the terrible response characteristics of MVS systems ... w/o much complaining ... because they've typically never seen how good interactive response can be when you don't have an MVS system (and various conventional MVS i/o sequences) interfering with your response.

A similar characteristic showed up at a large national retailer. They had multiple large MVS systems partitioned by region ... sharing a common dasd pool. periodically the MVS thruput of all the systems appeared to come to a near standstill. After several months, I was eventually asked to stop in and look at the situation.

When I arrived they took me to a classroom where there were half a dozen class tables completely covered in foot-high stacks of paper consisting of various performance reports from all the different systems.

To make a long story short ... i eventually eyeballed a seemingly strong correlation: a specific 3330 pack was hitting about 6-7 i/os per second (aggregate across all the performance reports) during periods when system thruput supposedly had died.

Well, it turned out this 3330 contained a large application library shared across all the regional mvs systems. It had a 3-cylinder pds directory ... and the nominal operation was that every application load required a multi-track search of the pds directory to find the application member ... and then an i/o operation to load the member. the 3330 had 19 tracks per cylinder and spun at 3600rpm (60rps). A single (full-cylinder) multi-track search i/o was taking 19/60 = .317 seconds elapsed. Basically there were three application member loads happening per second (aggregate) across all the systems in the complex (and typical regional operation required numerous member loads).
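
the arithmetic, as a minimal sketch:

# the numbers behind the story above
rpm = 3600
revs_per_sec = rpm / 60            # 60 revolutions/second
tracks_per_cyl = 19                # 3330

# a multi-track search examines one track per revolution, so searching a
# whole cylinder of the directory costs ~19 revolutions, with the channel,
# controller and device busy the entire time
one_cyl_search_s = tracks_per_cyl / revs_per_sec
print(round(one_cyl_search_s, 3))          # 0.317 seconds per cylinder searched

# at ~3 member loads/second across the complex, the directory searches
# alone account for roughly all of that pack's (and its path's) time
print(round(3 * one_cyl_search_s, 2))      # ~0.95 -> effectively saturated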

misc. other ckd/dasd posts
https://www.garlic.com/~lynn/submain.html#dasd

in the late 70s & early 80s, i was commenting that relative system disk performance had declined by a factor of ten over a 10-15 year period. the disk division didn't like the sound of that ... and assigned the division's performance group to refute the statements. after about 3 months they came back with the conclusion that I had slightly understated the problem. the report was eventually redone for share as a set of recommendations on dasd considerations for improving system thruput.

slightly related recent post
https://www.garlic.com/~lynn/2004n.html#22 Shipwrecks

for more drift ... sort of as a hobby ... i used to stop by the disk engineering and product test labs in bldgs. 14 & 15 and help out on some of their stuff:
https://www.garlic.com/~lynn/subtopic.html#disk

I once tried to talk STL into shipping MVS support for FBA. The reply that I got back was that even if i gave them fully integrated and tested software (primarily around multi-track search issues for vtocs & pds directories) ... it would still cost something like $26m to ship as a product. Part of the issue was where was the return on investment (ROI) for that $26m ... since it wasn't likely the customer was going to buy more disks ... he would possibly just substitute FBA disks for CKD disks.

random past references about trying to get FBA support in MVS:
https://www.garlic.com/~lynn/97.html#16 Why Mainframes?
https://www.garlic.com/~lynn/97.html#29 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/99.html#75 Read if over 40 and have Mainframe background
https://www.garlic.com/~lynn/2000.html#86 Ux's good points.
https://www.garlic.com/~lynn/2000f.html#18 OT?
https://www.garlic.com/~lynn/2000g.html#51 > 512 byte disk blocks (was: 4M pages are a bad idea)
https://www.garlic.com/~lynn/2001.html#54 FBA History Question (was: RE: What's the meaning of track overfl ow?)
https://www.garlic.com/~lynn/2001d.html#64 VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)
https://www.garlic.com/~lynn/2001g.html#32 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002.html#10 index searching
https://www.garlic.com/~lynn/2002g.html#13 Secure Device Drivers
https://www.garlic.com/~lynn/2002l.html#47 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2003.html#15 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003c.html#48 "average" DASD Blocksize
https://www.garlic.com/~lynn/2003m.html#56 model 91/CRJE and IKJLEW
https://www.garlic.com/~lynn/2004d.html#65 System/360 40 years old today
https://www.garlic.com/~lynn/2004g.html#15 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004l.html#20 Is the solution FBA was Re: FW: Looking for Disk Calc
https://www.garlic.com/~lynn/2004l.html#23 Is the solution FBA was Re: FW: Looking for Disk Calc

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 01 Nov 2004 18:20:51 -0700
glen herrmannsfeldt writes:
How big was the real address space of the 360/67? (real address bus wires).

Did the 360/67 have IDALs?


the largest single-processor configuration was 1mbyte real ... a two-processor smp gave you 2mbyte real. they mapped 32bit virtual addressing into 24bit real ... didn't need idals for addressing.

the thing with idals was that cp could translate a contiguous virtual i/o into noncontiguous, data-chained ccws. the problem with CCWs is that the channel architecture precluded prefetching ccws ... they had to be processed serially. there were potential issues where a single contiguous i/o operation ran into overrun/timing failures if broken up into two (or more) data-chained ccws. idals allowed prefetching ... a single ccw could point at an idal list (of non-contiguous real storage locations) ... and since subsequent idal entries could be prefetched ... it avoided potential timing problems with non-contiguous i/o transfers.
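
a minimal sketch (illustration only, not the real ccw/idaw formats; the virtual-to-real map is hypothetical) of turning a contiguous virtual buffer into a list of prefetchable idal entries, one per possibly-scattered real page:

# instead of breaking one transfer into several data-chained CCWs (which
# can't be prefetched), a single CCW points at an IDAL -- a list of real
# addresses, one per page piece, which the channel can prefetch ahead of
# the data transfer
PAGE = 4096

def build_idal(virt_addr, length, v2r):
    """v2r: hypothetical virtual-page -> real-page mapping."""
    idal, addr, remaining = [], virt_addr, length
    while remaining > 0:
        page, offset = addr // PAGE, addr % PAGE
        chunk = min(PAGE - offset, remaining)
        idal.append(v2r[page] * PAGE + offset)   # real address of this piece
        addr += chunk
        remaining -= chunk
    return idal

v2r = {16: 900, 17: 123}                         # virtual pages 16,17 -> scattered real frames
print([hex(a) for a in build_idal(16 * PAGE + 0x800, 0x1000, v2r)])   # ['0x384800', '0x7b000']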

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

CKD Disks?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CKD Disks?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 02 Nov 2004 07:24:53 -0700
"Capomaestro" writes:
Back in STC's (aka STK) salad days (early '80s) when 8350DD were shipping (and failing) the lads in the LSATG (large systems attachment testing group in building 5) coined the term 'maximum surprise technology' to describe the CKD architecture. At the EXCP level the programming would (should) always allocate the maximum track size buffers because the size of the data returned from the channel was unknown when issuing the EXCP. Of course the code could issue a DIAG and perform a lookup on the device but track geometries were changing and CCHHR definitions were soon outdated. This forced ZAPs and code upgrades when new devices were announced. Uggh! Enter VSAM...

... extended sense ... aka E4 to ask the device what it was. note that this was one area where FBA significantly simplified disk driver evolution over the years ... by abstracting away that part of the disk geometry stuff.

so in the 60s (on 360/67) cp/67 had this routine called CCWTRANS that copied and translated (virtual machine) CCWS (and fixed their associated virtual pages at real locations). the issue is that I/O storage references are all real addresses ... and the virtual machine, while thinking it was dealing with real addresses ... was actually dealing with virtual addresses.

the migration of MVT to the VS2(SVS) initial prototype (AOS) involved doing a little stub code alongside MVT to utilize virtual memory to make MVT think it was running in a 16mbyte real machine ... and embedding a hacked copy of cp67's CCWTRANS. The issue for the EXCP handler was the same as the issue for cp67 dealing with virtual machines ... in this case, the application still thought it was running in a real address space, was generating CCWs with (what it believed were real) addresses and invoking EXCP. The EXCP handler then called the hacked copy of cp67's CCWTRANS to copy the channel program CCWs to "shadow" channel program CCWs, fix all the related virtual pages in real storage, and translate all the virtual addresses to real addresses. The "shadow" channel program (with real addresses) was what was issued on the real channel (not the actual application CCWs ... except in a few special cases involving v=r regions).

The issue persisted in the transition from VS2/SVS to VS2/MVS. the switch from SVS (single virtual storage ... effectively letting MVT think it was running on a 16mbyte real machine) to MVS (multiple virtual storage) gave each application its own virtual address space ... with the kernel mapped into 8mbytes of each address space. However, the issue of the EXCP handler having to call CCWTRANS to copy and translate the CCWs from virtual to real remained.

Now one of the issues ... is that if the CCW assumed the worst-case dasd record length ... then CCWTRANS had to assume the maximum possible transfer and (fetch and) fix all the related virtual pages in real memory.
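
a minimal sketch (not the real EXCP/CCWTRANS code; the ccw tuples and page-pinning function are hypothetical) of the shadow channel program idea ... copy each virtual ccw, fix the pages it could touch, substitute real addresses ... and note how a worst-case count forces pinning pages that may never be transferred:

PAGE = 4096

def shadow_program(ccws, pin_page):
    """ccws: list of (opcode, virt_addr, count); pin_page(virt_page) -> real_page."""
    shadow, pinned = [], set()
    for op, vaddr, count in ccws:
        first, last = vaddr // PAGE, (vaddr + count - 1) // PAGE
        frames = {vp: pin_page(vp) for vp in range(first, last + 1)}   # fix every page the count *could* touch
        pinned.update(frames)
        # simplification: use the first frame's real address; a real translator
        # would build an IDAL (or data chain) for the non-contiguous frames
        shadow.append((op, frames[first] * PAGE + vaddr % PAGE, count))
    return shadow, pinned

fake_map = {vp: vp + 0x100 for vp in range(0, 64)}             # hypothetical pinning function
prog = [(0x06, 0x2000, 0x3440)]                                # READ with a worst-case (~13k) count
sh, pinned = shadow_program(prog, lambda vp: fake_map[vp])
print(sh, sorted(pinned))                                      # 4 pages pinned for one CCW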

past postings on ckd dasd
https://www.garlic.com/~lynn/submain.html#dasd

recent posting in another thread on the VS2 use of cp67's CCWTRANS
https://www.garlic.com/~lynn/2004n.html#24 Shipwrecks

The other issue in the transition from SVS to MVS was that the MVT pointer-passing paradigm persisted and some number of subsystems were also moved into their own address spaces. In SVS (and MVT), applications could pass pointers to data areas ... that were eventually used by subsystems accessing the address. Having applications and subsystems in their own address spaces ... resulted in the application program address pointers (for the application program virtual address space) not having any meaning in the subsystem virtual address space.

To address this issue, the common segment was invented ... effectively some parameter-passing virtual storage that was mapped into every virtual address space (in much the same way the 8mbyte kernel was mapped into every virtual address space). By the time of the 3033, there were a number of installations where the common segment was threatening to go over 4mbytes. The result was that out of each 16mbyte virtual address space ... 8mbytes was occupied by the kernel and 4mbytes (or more) was occupied by the common segment ... leaving 4mbytes or less for actual application execution.
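
the squeeze, in numbers:

# the virtual address space budget described above
address_space_mb = 16        # 24-bit virtual address space
kernel_mb        = 8         # mapped into every address space
common_mb        = 4         # common segment area, and growing
print(address_space_mb - kernel_mb - common_mb)   # 4 mbytes (or less) left for the application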

some prior postings discussing common segment
https://www.garlic.com/~lynn/2002l.html#57 Handling variable page sizes?
https://www.garlic.com/~lynn/2002m.html#0 Handling variable page sizes?
https://www.garlic.com/~lynn/2003g.html#13 Page Table - per OS/Process
https://www.garlic.com/~lynn/2004n.html#26 PCIe as a chip-to-chip interconnect

For the 3033, this was addressed with the introduction of dual-address space support. Basically the primary/secondary address space control register convention. At entry to the subsystem, the primary address space control register was set to the subsystem and the secondary address space control register was set to the "calling" application. So rather than needing an ever-increasing amount of common segment area ... subsystems could just use instructions to access the secondary address space with the passed application address pointers.

the dual-address space stuff was expanded in 370-xa to access registers (although with 31-bit virtual memory there was less of an issue with common segment area expanding to all available virtual memory leaving none left for application execution).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 02 Nov 2004 08:56:21 -0700
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
That was precisely the claim made for paged virtual memory when it was new and all, and it was rapidly shown to be complete tripe. People fairly soon learnt to live with the programming constraints, but both application programmers and system managers STILL spend a lot of time dealing with unnecessary problems caused by this myth.

A huge sparse address space is the same problem, writ large. I still manage one obscene system where the compiler will not run without memory limits 3 times larger than the physical memory on the machine. So how do I allow users to use the compiler while reducing the frequency of denials of service and crashes because they have got their program's size wrong?


in the 60s ... the tss/360 paradigm was this ... part of the issue was that the one-level store mapping of standard objects was much larger than real storage ... and there were no hints to the operating system. poorly organized applications could have huge page-thrashing characteristics. even a partially organized application that tried to maintain a reasonable working set and a graceful transition from one phase to the next ... was provided no API to give hints to the kernel ... so all transitions were done as a series of individual page faults.

it was possible to take the os/360 applications (like compilers) that had been done for real-memory, phased organization and map them to CMS ... and I provided page-mapped filesystem support for cms
https://www.garlic.com/~lynn/submain.html#mmap

and the indication of moving from one phase to another could be viewed as running a "window" across the virtual store filesystem. This was an api paradigm that allowed for the efficient transition from one organized phase to the next as a single unit (as opposed to doing it as a large number of single page faults).
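
a rough modern analogue (linux, python 3.8+ mmap/madvise ... not the cms paged-mapped filesystem api described above) of hinting a whole window of a mapped file as a single unit instead of taking individual page faults as the program moves into its next phase:

import mmap, os

PAGE = mmap.PAGESIZE
WINDOW = 256 * PAGE                    # assumed phase working set: 1mbyte

def advise_window(mm, phase_number):
    start = phase_number * WINDOW
    length = min(WINDOW, len(mm) - start)
    if length > 0:
        mm.madvise(mmap.MADV_WILLNEED, start, length)   # batch the page-ins for the next phase

# usage sketch:
# fd = os.open("bigfile", os.O_RDONLY)
# mm = mmap.mmap(fd, 0, prot=mmap.PROT_READ)
# advise_window(mm, 0)    # before entering phase 0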

a side-by-side comparison of tss/360 and cp67/cms on the same exact hardware running the same workload mix ... showed multi-second trivial response for tss/360 with four concurrent users and sub-second trivial response for cp67/cms with 30+ concurrent users.

the tss/360 analogy is a 4-way multi-threaded cpu core with enormous cache miss rates ... and a high level of contention and queueing delays between threads for the (saturated) memory bus handling the misses.

by comparison, the cp67/cms analogy is the same 4-way multi-threaded cpu core ... but the threads run applications organized with much more compact working sets ... and are provided with a highly efficient method for transition between phases that effectively batch-swaps a large number of cache lines.

the truly simplified one-level store paradigm provides for ease of development for users doing relatively trivial one-off execution and demos (and/or workloads reasonably smaller than available real memory). However, if that is extended to the bread & butter applications that are being constantly run ... it can result in extremely poor (and possibly unusable) system thruput.

the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

had developed a cluster analysis package that took input from execution reference traces and the application program map ... and attempted to do semi-automated application re-organization to consolidate working set characteristics and improve virtual memory operation. it was released as a product in the mid-70s as vs/repack. It was especially useful in helping applications that had originally been developed for real-memory environments make the transition to a virtual memory environment ... for instance it helped in analyzing apl\360 memory use characteristics and redoing various critical components in the port to cms\apl
https://www.garlic.com/~lynn/subtopic.html#hone

It was also used internally by some number of other product groups like IMS.
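
a toy sketch (not the actual vs/repack algorithm ... the trace, module names and sizes are made up) of the general idea: count which modules are referenced close together and pack them into the same pages to shrink the working set:

from collections import Counter

PAGE = 4096

def affinity(trace, window=32):
    """count how often two modules are referenced within `window` of each other."""
    pairs = Counter()
    for i, a in enumerate(trace):
        for b in trace[i + 1:i + window]:
            if a != b:
                pairs[frozenset((a, b))] += 1
    return pairs

def pack(modules, sizes, pairs):
    """greedy: seed each page with the hottest remaining pair, then fill it up."""
    remaining, pages = set(modules), []
    for pair, _ in pairs.most_common():
        a, b = tuple(pair)
        if a in remaining and b in remaining and sizes[a] + sizes[b] <= PAGE:
            page, room = [a, b], PAGE - sizes[a] - sizes[b]
            remaining -= {a, b}
            for m in sorted(remaining, key=sizes.get):
                if sizes[m] <= room:
                    page.append(m); room -= sizes[m]; remaining.discard(m)
            pages.append(page)
    pages.extend([m] for m in remaining)          # leftovers, one per page
    return pages

trace = ["init", "parse", "scan", "parse", "scan", "gen", "emit", "gen", "emit"]
sizes = {"init": 3000, "parse": 1800, "scan": 1500, "gen": 2000, "emit": 1900}
print(pack(sizes.keys(), sizes, affinity(trace)))   # co-referenced modules share a page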

for some topic drift ... the folklore is that when FS was killed
https://www.garlic.com/~lynn/submain.html#futuresys
some number of the people went off to Rochester to do the s/38 which morphed into cisc as/400 and then morphed again into risc as/400.

random past vs/repack posts:
https://www.garlic.com/~lynn/93.html#4 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/93.html#5 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
https://www.garlic.com/~lynn/99.html#68 The Melissa Virus or War on Microsoft?
https://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#10 Memory management - Page replacement
https://www.garlic.com/~lynn/2001c.html#31 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001c.html#33 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#46 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#50 IBM going after Strobe?
https://www.garlic.com/~lynn/2002f.html#50 Blade architectures
https://www.garlic.com/~lynn/2003f.html#15 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#21 "Super-Cheap" Supercomputing
https://www.garlic.com/~lynn/2003f.html#53 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#15 Disk capacity and backup solutions
https://www.garlic.com/~lynn/2003h.html#8 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003j.html#32 Language semantics wrt exploits
https://www.garlic.com/~lynn/2004.html#14 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004c.html#21 PSW Sampling
https://www.garlic.com/~lynn/2004m.html#22 Lock-free algorithms

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

previous, next, index - home