List of Archived Posts

2002 Newsgroup Postings (01/12 - 02/17)

Microcode?
Microcode? (& index searching)
Microcode? (& index searching)
Microcode? (& index searching)
Microcode? (& index searching)
Microcode? (& index searching)
Microcode?
Microcode? (& index searching)
Microcode? (& index searching)
hollow files in unix filesystems?
Microcode? (& index searching)
Infiniband's impact was Re: Intel's 64-bit strategy
Infiniband's impact was Re: Intel's 64-bit strategy
Infiniband's impact was Re: Intel's 64-bit strategy
Infiniband's impact was Re: Intel's 64-bit strategy
hollow files in unix filesystems?
index searching
Infiniband's impact was Re: Intel's 64-bit strategy
hollow files in unix filesystems?
index searching
AOL buys Redhat and ... (link to article on eweek)
Infiniband's impact was Re: Intel's 64-bit strategy
Infiniband's impact was Re: Intel's 64-bit strategy
Infiniband's impact was Re: Intel's 64-bit strategy
Question about root CA authorities
IBM SHRINKS by 10 percent
IBM SHRINKS by 10 percent
First DESKTOP Unix Box?
windows XP and HAL: The CP/M way still works in 2002
windows XP and HAL: The CP/M way still works in 2002
bzip2 vs gzip (was Re: PDP-10 Archive migration plan)
First DESKTOP Unix Box?
Does it support "Journaling"?
Does it support "Journaling"?
bzip2 vs gzip (was Re: PDP-10 Archive migration plan)
windows XP and HAL: The CP/M way still works in 2002
Poor Man's clustering idea
"war-dialing" etymology?
IBM 5100 [Was: First DESKTOP Unix Box?]
Poor Man's clustering idea
"war-dialing" etymology?
Infiniband's impact was Re: Intel's 64-bit strategy
IBM 5100 [Was: First DESKTOP Unix Box?]
PDP-10 Archive migration plan
IBM 5100 [Was: First DESKTOP Unix Box?]
... the need for a Museum of Computer Software
IBM 5100 [Was: First DESKTOP Unix Box?]
... the need for a Museum of Computer Software
Grieving and loss was Re: Infiniband's impact was Re: Intel's 64-bit strategy
Wylbur?
"Have to make your bones" mentality
... the need for a Museum of Computer Software
Computer Naming Conventions
Computer Naming Conventions
"Fair Share" scheduling
Computer Naming Conventions
Computer Naming Conventions
ibm vnet : Computer Naming Conventions
Computer Naming Conventions
Filesystem namespaces (was Re: Serving non-MS-word .doc files (was Re: PDP-10 Archive migration plan))
Filesystem namespaces (was Re: Serving non-MS-word .doc files (was Re: PDP-10 Archive migration plan))
TOPS-10 logins (Was Re: HP-2000F - want to know more about it)
Filesystem namespaces (was Re: Serving non-MS-word .doc files (was Re: PDP-10 Archive migration plan))
... the need for a Museum of Computer Software

Microcode?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microcode?
Newsgroups: comp.lang.asm370,alt.folklore.computers
Date: Sun, 13 Jan 2002 06:53:12 GMT
Anne & Lynn Wheeler writes:
chip technology. A 3031 was faster than a 158 in large part because the engine wasn't being shared between 370 and channel functions (there were two dedicated engines, one for each function)

a little comparison

                      158             3031              4341

Rain              45.64 secs       37.03 secs         36.21 secs
Rain4             43.90 secs       36.61 secs         36.13 secs

also, approximate times on other machines:

                  145             168-3             91
                  145 secs.       9.1 secs          6.77 secs

rain/rain4 was from Lawrence Radiation lab ... and ran on cdc6600 in 35.77 secs.

random refs:
https://www.garlic.com/~lynn/2000d.html#0 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001l.html#32 mainframe question
https://www.garlic.com/~lynn/2001m.html#15 departmental servers
https://www.garlic.com/~lynn/2002.html#14 index searching
https://www.garlic.com/~lynn/2002.html#48 Microcode?
https://www.garlic.com/~lynn/2002.html#50 Microcode?
https://www.garlic.com/~lynn/2002.html#51 Microcode?
https://www.garlic.com/~lynn/2002.html#52 Microcode?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Microcode? (& index searching)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microcode? (& index searching)
Newsgroups: alt.folklore.computers,comp.arch,comp.lang.asm370
Date: Sun, 13 Jan 2002 15:30:21 GMT
ab528@FreeNet.Carleton.CA (Heinz W. Wiggeshoff) writes:
And Rotational Position Sensing (RPS) for 3330 disks. This was written up in a Systems Journal (which is lost or buried in my library) as a major step to reducing the I/O bottleneck in days of yore, when 2314's still ruled. An -85 with 2 Mb core running MVT in a service bureau would have been severely constipated without that feature.

under heavy load, RPS wasn't totally "free" ... there was some degradation because of RPS-miss ... i.e. with multiple concurrent operations, some other disk could be transferring data (connected) at the moment an RPS reconnection was attempted ... and the disk would then have to make an extra complete revolution and try again.

in a slightly earlier thread (index searching) here in a.f.c., the memory/IO trade-off of CKD (count-key-data) architecture was discussed. Basically, the I/O subsystem could be instructed to search for a record with particular characteristics. This operation saved a lot of memory ... at the extreme expense of I/O subsystem resources.

RPS was a feature introduced with 3330 disks (and block multiplexor channel architecture) that tried to alleviate a little of the problem. Basically a disk drive was configured with an extra platter and head that had rotational positioning information. Effectively a new channel command was introduced that outboarded an operation in the disk drive which would suspend the I/O channel connection until a specific rotational position arrived. The 3330 had 20 surfaces and 20 heads, 19 for data, and a 20th that contained the positioning information. Typically a "set sector" operation (suspend channel connection and search for position) was positioned just prior to a "search" command, i.e. the channel program sequence:

seek, set-sector, search, tic -8, read/write

ref:
https://www.garlic.com/~lynn/2002.html#10 index searching

If the operating system "knew" the approximate rotational position ("sector") of the desired record ... it could reduce the amount of I/O hardware busy time spent in "searching" unnecessary records.

The problem was that if the I/O hardware was busy with other operations at the moment the desired sector came around ... there would be an "RPS-miss" and the device would have to make a complete revolution and try again.
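
a rough way to see the cost (a sketch with assumed numbers ... one revolution is 16.7ms for a 3600rpm drive): if the path back to the channel is busy with probability u at the moment the device tries to reconnect, misses repeat geometrically and the expected extra delay is u/(1-u) revolutions:

# expected extra rotational delay per I/O due to RPS-miss
# (illustrative model; assumes each miss costs one full revolution)
def expected_rps_miss_delay(path_busy_prob, rev_ms=16.7):
    u = path_busy_prob
    return rev_ms * u / (1.0 - u)

for u in (0.1, 0.35, 0.5):
    print(f"path busy {u:.0%}: ~{expected_rps_miss_delay(u):.1f} ms extra per i/o")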

random refs:
https://www.garlic.com/~lynn/2002.html#0 index searching
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002.html#6 index searching
https://www.garlic.com/~lynn/2002.html#10 index searching
https://www.garlic.com/~lynn/2002.html#14 index searching
https://www.garlic.com/~lynn/2002.html#15 index searching
https://www.garlic.com/~lynn/2002.html#16 index searching
https://www.garlic.com/~lynn/2002.html#17 index searching
https://www.garlic.com/~lynn/2002.html#22 index searching
https://www.garlic.com/~lynn/2002.html#31 index searching

The IBM channel cables were a half-duplex (parallel copper) I/O configuration that had a synchronous end-to-end hand-shake on every byte. The standard "high-speed" selector channel was rated at max 1.5mbytes/sec with a length restriction of about 200' (aggregate length; typically a number of hardware boxes were "daisy-chained" on the same "channel"). The "block multiplexor" channel introduced support for "set-sector" (allowing a higher number of concurrent operations using the channel), max 3mbyte/sec transfer, and increased the aggregate length to about 400' (the increase in max data-rate and aggregate length was due to a new internal channel cable hand-shaking protocol that would transfer 8 bytes of data in a single hand-shake ... rather than a single byte), although there wasn't actually a 3mbyte transfer device until the 3380 ... misc. ref:
https://www.garlic.com/~lynn/95.html#8 3330 Disk Drives
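
a toy model of why fewer hand-shakes buy both rate and length (my own sketch; assumes ~1.5 ns/ft propagation in the copper and ignores the controller's per-hand-shake overhead, so these are ceilings, not the rated figures):

# handshake-limited channel data rate: every transfer unit waits out a
# round trip over the aggregate cable length before the next can start
def max_rate_mb(bytes_per_handshake, cable_ft, ns_per_ft=1.5):
    round_trip_ns = 2 * cable_ft * ns_per_ft
    return bytes_per_handshake / (round_trip_ns / 1000.0)  # bytes/usec = mbyte/sec

print("1 byte/handshake, 200ft:", round(max_rate_mb(1, 200), 2), "mbyte/sec ceiling")
print("8 bytes/handshake, 400ft:", round(max_rate_mb(8, 400), 2), "mbyte/sec ceiling")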

The following posting discusses a little run-in that I had with some disk division "architects" ... over my rough swag that disk relative system performance had degraded by a factor of ten over a 15-year period.
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)

after a detailed study, they finally concluded that I had underestimated the degradation by not taking into account RPS-miss in the rough swag.

misc RPS-miss refs:
https://www.garlic.com/~lynn/96.html#5 360 "channels" and "multiplexers"?
https://www.garlic.com/~lynn/2001c.html#17 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#46 MVS History (all parts)

random other disk & channel refs:
https://www.garlic.com/~lynn/subtopic.html#disk
https://www.garlic.com/~lynn/submain.html#360pcm

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Microcode? (& index searching)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microcode? (& index searching)
Newsgroups: alt.folklore.computers,comp.lang.asm370
Date: Sun, 13 Jan 2002 17:57:45 GMT
Anne & Lynn Wheeler writes:
transfer 8bytes of data in a single hand-shake ... rather than a single byte), although there wasn't actually a 3mbyte transfer device until the 3380 ... misc. ref:
https://www.garlic.com/~lynn/95.html#8 3330 Disk Drives


to go along with the new 3380 disk drive & 3mbyte/sec transfer was the 3880 disk controller, replacing the 3830 disk controller (used with 3330s).

while the 3880 disk controller handled the 3mbyte/sec transfer, it was actually significantly slower than the 3830. The 3830 was a horizontal m'code engine. The 3880 was a jib-prime, a vertical m'code engine (similar to a standard minicomputer) with hardware bypass/assist for data transfer (but control functions and commands were handled by the m'code).

The initial product acceptance criteria for the 3880 was that it had to be no worse than 5 percent slower than the 3830 disk controller. The initial acceptance test was performed using a two-disk VS1 configuration. The problem with this was that there was little or no concurrent disk activity.

I had worked on a special bulletproof I/O supervisor for the disk engineering and product test labs (a standard MVS system had a 15-minute MTBF when operating with a single test cell).

One Monday morning, I got a call from the product test lab (bldg. 15) saying that operating system performance had just gone totally south (and there were no hardware changes). After some investigation, it turned out that somebody had replaced a 3830 controller (with 16 3330 drives) over the weekend with a 3880 controller (with the availability of doing work under an operating system ... the product test lab was running an internal time-sharing service on a 3033 test machine in addition to all the product test work).

After a lot more investigation, it turned out that the 3880 m'coders, in order to make the performance criteria, had fudged things a little. At the end of an operation, there was some amount of controller work that needed to be done. With the 3830 controller, this work was totally completed before the controller signaled the channel (and processor) end of operation. The 3880 was sufficiently slower that waiting until all the command-termination work was completed before signaling the channel resulted in the 3880 missing the performance acceptance criteria. To compensate, they fudged things a little and signaled the channel/processor that the operation had ended before all of the control-unit business was complete.

Now, on a lightly loaded system, the processor/kernel would take the interrupt and go off and do a bunch of work before initiating another I/O operation. On a system with a heavier load, there are typically queued requests for disk drives ... so that when one operation completes, the kernel will immediately redrive the device with a queued request. What was happening with the 3880 under heavy load was that the redrives were being rejected with "control unit busy" status; the kernel then had to requeue the operation and wait until the control unit signaled control unit available. This not only significantly reduced disk I/O thruput (compared to the 3830) but also significantly increased kernel processor busy/pathlengths (effectively doubling the number of I/O initiations and I/O interrupts).
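
a sketch of the event pattern being described (hypothetical accounting, not actual kernel code): with the 3880 signaling end-of-operation early, the immediate redrive bounces off the still-busy control unit, roughly doubling the start-I/Os and interrupts per disk operation:

# count of kernel events per disk operation under the two behaviors
def events_per_operation(early_end):
    sios = interrupts = 0
    sios += 1                # redrive the device with the queued request
    if early_end:
        # SIO rejected with control-unit-busy status; request requeued
        interrupts += 1      # later: control-unit-available interrupt
        sios += 1            # redrive again, this time accepted
    interrupts += 1          # normal ending interrupt for the operation
    return sios, interrupts

print("3830-style (sios, interrupts):", events_per_operation(False))   # (1, 1)
print("3880-style (sios, interrupts):", events_per_operation(True))    # (2, 2)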

Fortunately this was six months before 3880s were shipped to customers, so they had time to work out other mechanisms to compensate for the jib-prime being so slow.

random refs:
https://www.garlic.com/~lynn/96.html#18 IBM 4381 (finger-check)
https://www.garlic.com/~lynn/96.html#19 IBM 4381 (finger-check)
https://www.garlic.com/~lynn/2000.html#16 Computer of the century
https://www.garlic.com/~lynn/2000b.html#38 How to learn assembler language for OS/390 ?
https://www.garlic.com/~lynn/2000c.html#75 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#13 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000e.html#54 VLIW at IBM Research
https://www.garlic.com/~lynn/2000f.html#42 IBM 3340 help
https://www.garlic.com/~lynn/2001.html#12 Small IBM shops
https://www.garlic.com/~lynn/2001.html#18 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001b.html#61 Disks size growing while disk count shrinking = bad performance
https://www.garlic.com/~lynn/2001b.html#69 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#17 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001d.html#49 VTOC position
https://www.garlic.com/~lynn/2001d.html#68 I/O contention
https://www.garlic.com/~lynn/2001g.html#46 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001h.html#28 checking some myths.
https://www.garlic.com/~lynn/2001l.html#53 mainframe question
https://www.garlic.com/~lynn/2001l.html#54 mainframe question
https://www.garlic.com/~lynn/2001l.html#55 mainframe question
https://www.garlic.com/~lynn/2001l.html#63 MVS History (all parts)
https://www.garlic.com/~lynn/subtopic.html#disk

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Microcode? (& index searching)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microcode? (& index searching)
Newsgroups: alt.folklore.computers,comp.lang.asm370
Date: Sun, 13 Jan 2002 18:30:51 GMT
Anne & Lynn Wheeler writes:
Fortunately this was six months before 3880s were shipped to customers so they had time to work out other mechanisms to compensate for the jib-prime being so slow

The other side-effect (going from 3830 to 3880) was that channel busy was noticeably increased (even after fixes) for channel handshaking operations.

The channel busy increase was so significant that the 3090 was revised to ship with more total channels (than originally planned) in order to meet various aggregate I/O system thru-put criteria. The increase in the number of 3090 channels then resulted in needing an additional 3090 TCM ... which noticeably increased the 3090 manufacturing costs.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Microcode? (& index searching)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microcode? (& index searching)
Newsgroups: alt.folklore.computers,comp.arch,comp.lang.asm370
Date: Sun, 13 Jan 2002 19:42:54 GMT
Bill B writes:
The knee of the curve was at 35% channel utilization. A classic mu=rho/(1-rho) plot. This is where degradation started to escalate and any subsequent I/O request was highly likely to be queued and redriven. It is similar in concept to ethernet's ability to rarely get over 50% utilization because of the collisions. (Doing FDR dumps doesn't count, they typically drove channels to 90%, but there was usually no contention for the drive.)
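
a quick sketch of that rho/(1-rho) curve (mean wait in units of service time, standard single-server queueing) showing how gentle it is below ~35% utilization and how fast it escalates after:

# mean queueing wait (in service times) vs. utilization
for rho in (0.1, 0.2, 0.35, 0.5, 0.7, 0.9):
    print(f"utilization {rho:.0%}: mean wait ~{rho / (1 - rho):.2f} service times")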

slight aside with respect to e'net ... we had to fight this (with both the 16mbit t/r as well as the saa people) when we were doing 3-tier architecture stuff ... vis-a-vis claims regarding 16mbit t/r. At least some of the "models" used by the 16mbit t/r people were based on the original 3mbit/sec thick-net that didn't do listen before transmit.

typical 10mbit enet (with standard listen before transmit) using thin-net in a similar star-wiring configuration (even over t/r CAT4 plant wiring) did quite well in local lan configurations. Part of the reason was that the worst case in a star configuration was the propagation delay of the two longest "arms" ... as opposed to the aggregate length of a thicknet daisy chain.

In any case, a 30-node network with all machines in a solid low-level device driver loop transmitting minimum sized enet packets would see 85 percent effective thruput of media bandwidth (aka 8.5mbit). The initial results I saw were in the 88? or 89? sigcomm proceedings ... the same issue that had the paper showing slow-start was non-stable.

turned out that we found that typical 10mbit/sec enet configurations were getting higher effective thruput than typical 16mbit/sec t/r configurations.

random refs:
https://www.garlic.com/~lynn/96.html#5 360 "channels" and "multiplexers"?
https://www.garlic.com/~lynn/96.html#16 middle layer
https://www.garlic.com/~lynn/96.html#17 middle layer
https://www.garlic.com/~lynn/98.html#34 ... cics ... from posting from another list
https://www.garlic.com/~lynn/98.html#50 Edsger Dijkstra: the blackest week of his professional life
https://www.garlic.com/~lynn/99.html#123 Speaking of USB ( was Re: ASR 33 Typing Element)
https://www.garlic.com/~lynn/99.html#124 Speaking of USB ( was Re: ASR 33 Typing Element)
https://www.garlic.com/~lynn/99.html#201 Middleware - where did that come from?
https://www.garlic.com/~lynn/99.html#202 Middleware - where did that come from?
https://www.garlic.com/~lynn/2000b.html#45 OSA-Express Gigabit Ethernet card planning
https://www.garlic.com/~lynn/2000b.html#59 7 layers to a program
https://www.garlic.com/~lynn/2000b.html#70 Maximum Length of an URL
https://www.garlic.com/~lynn/2000c.html#59 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#84 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000d.html#2 IBM's "ASCI White" and "Big Blue" architecture?
https://www.garlic.com/~lynn/2000e.html#42 IBM's Workplace OS (Was: .. Pink)
https://www.garlic.com/~lynn/2000e.html#45 IBM's Workplace OS (Was: .. Pink)
https://www.garlic.com/~lynn/2000f.html#38 Ethernet efficiency (was Re: Ms employees begging for food)
https://www.garlic.com/~lynn/2000f.html#44 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#45 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2001.html#3 First video terminal?
https://www.garlic.com/~lynn/2001.html#4 Sv: First video terminal?
https://www.garlic.com/~lynn/2001.html#16 Sv: IBM 1142 reader/punch (Re: First video terminal?)
https://www.garlic.com/~lynn/2001.html#49 Options for Delivering Mainframe Reports to Outside Organizat ions
https://www.garlic.com/~lynn/2001b.html#71 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#33 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001d.html#69 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001h.html#28 checking some myths.
https://www.garlic.com/~lynn/2001j.html#4 I hate Compaq
https://www.garlic.com/~lynn/2001j.html#16 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#20 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#22 Title Inflation
https://www.garlic.com/~lynn/2001k.html#18 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001k.html#19 HP Compaq merger, here we go again.
https://www.garlic.com/~lynn/2001l.html#17 mainframe question
https://www.garlic.com/~lynn/2001l.html#19 mainframe question
https://www.garlic.com/~lynn/2001l.html#32 mainframe question
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#46 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#62 ASR33/35 Controls
https://www.garlic.com/~lynn/2001m.html#4 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2001m.html#5 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2001m.html#9 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2001m.html#10 mainframe question
https://www.garlic.com/~lynn/2001m.html#15 departmental servers
https://www.garlic.com/~lynn/2001n.html#23 Alpha vs. Itanic: facts vs. FUD
https://www.garlic.com/~lynn/2001n.html#34 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2001n.html#55 9-track tapes (by the armful)
https://www.garlic.com/~lynn/2002.html#2 The demise of compaq
https://www.garlic.com/~lynn/2002.html#7 The demise of compaq
https://www.garlic.com/~lynn/2002.html#11 The demise of compaq

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Microcode? (& index searching)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microcode? (& index searching)
Newsgroups: alt.folklore.computers
Date: Sun, 13 Jan 2002 19:52:57 GMT
ab528@FreeNet.Carleton.CA (Heinz W. Wiggeshoff) writes:
TCM = Thermal Conduction Module, right? (They were written up in the <mumble> edition of Scientific American.)

Although the details of construction are dim in my memory, they had complexity approaching LSI to VLSI, hence the increased cost comment.


basically (lots of) chips inside a custom heat-transfer housing.

some past refs:
https://www.garlic.com/~lynn/2001k.html#7 hot chips and nuclear reactors
https://www.garlic.com/~lynn/2000b.html#36 How to learn assembler language for OS/390 ?
https://www.garlic.com/~lynn/2000b.html#37 How to learn assembler language for OS/390 ?
https://www.garlic.com/~lynn/2000b.html#38 How to learn assembler language for OS/390 ?
https://www.garlic.com/~lynn/2000d.html#61 "all-out" vs less aggressive designs (was: Re: 36 to 32 bit transition)
https://www.garlic.com/~lynn/2000d.html#64 "all-out" vs less aggressive designs

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Microcode?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microcode?
Newsgroups: alt.folklore.computers
Date: Mon, 14 Jan 2002 03:42:13 GMT
"Phil Weldon" writes:
Virtual memory? I worked at an IBM Field Systems Service Center which had one or more of each of the common business machines of the late sixties, all in one computer room, along with most of the possible peripherals: the System 360/15, 20, 25, 30, 40, and 50. I never saw a System 360/44, but my guess of virtual memory is based on the RCA model names for their System 360-compatible computers, the Spectra 70 line. RCA picked a number between each System 360 number as a marketing ploy. They were not very successful, and I only know of three models: the Spectra 70/35, 45, and 55 (plus a 46 or 47, I'm not sure which). The Spectra 70/46 or 47 was a virtual memory machine with 256 Kbytes main memory and an 8 Mbyte, head-per-track drum for paging. So on that thin thread I suggest the System 360/44 was a virtual memory machine.

Phil Weldon, pweldon@mindspring.com


from:
http://www.beagle-ears.com/lars/engineer/comphist/model360.htm

360/44 - the oddest model. It could be described as a 40 with a hardware floating point processor and faster memory. Had a variable precision floating point unit that could operate on 4, 5, 6, 7, and 8 byte operands. A rotary switch on the front panel could select between 2 different floating point formats. It had only 1/2 word and 1 word instructions and could therefore use a one word memory width without any speed penalty. Due to the odd instruction set, it had its own operating system, PS/44.

=================================

cambridge modified a standard 360/40, adding virtual memory to it, and developed CP/40 on it. Later, when the "official" virtual memory 360/67 became available, CP/40 was ported to the 67 as CP/67.

from melinda's paper at
https://www.leeandmelindavarian.com/Melinda/
https://www.leeandmelindavarian.com/Melinda#VMHist

In the Fall of 1964, the folks in Cambridge suddenly found themselves in the position of having to cast about for something to do next. A few months earlier, before Project MAC was lost to GE, they had been expecting to be in the center of IBM's time-sharing activities. Now, inside IBM, ''time-sharing'' meant TSS, and that was being developed in New York State. However, Rasmussen was very dubious about the prospects for TSS and knew that IBM must have a credible time-sharing system for the S/360. He decided to go ahead with his plan to build a time-sharing system, with Bob Creasy leading what became known as the CP-40 Project. The official objectives of the CP-40 Project were the following:

1. The development of means for obtaining data on the operational characteristics of both systems and application programs;
2. The analysis of this data with a view toward more efficient machine structures and programming techniques, particularly for use in interactive systems;
3. The provision of a multiple-console computer system for the Center's computing requirements; and
4. The investigation of the use of associative memories in the control of multi-user systems.

The project's real purpose was to build a time-sharing system, but the other objectives were genuine, too, and they were always emphasized in order to disguise the project's ''counter-strategic'' aspects. Rasmussen consistently portrayed CP-40 as a research project to ''help the troops in Poughkeepsie'' by studying the behavior of programs and systems in a virtual memory environment. In fact, for some members of the CP-40 team, this was the most interesting part of the project, because they were concerned about the unknowns in the path IBM was taking. TSS was to be a virtual memory system, but not much was really known about virtual memory systems. Les Comeau has written: Since the early time-sharing experiments used base and limit registers for relocation, they had to roll in and roll out entire programs when switching users....Virtual memory, with its paging technique, was expected to reduce significantly the time spent waiting for an exchange of user programs.

Virtual memory on the 360/40 was achieved by placing a 64-word associative array between the CPU address generation circuits and the memory addressing logic. The array was activated via mode-switch logic in the PSW and was turned off whenever a hardware interrupt occurred. The 64 words were designed to give us a relocate mechanism for each 4K bytes of our 256K-byte memory. Relocation was achieved by loading a user number into the search argument register of the associative array, turning on relocate mode, and presenting a CPU address. The match with user number and address would result in a word selected in the associative array. The position of the word (0-63) would yield the high-order 6 bits of a memory address. Because of a rather loose cycle time, this was accomplished on the 360/40 with no degradation of the overall memory cycle. The modifications to the 360/40 would prove to be quite successful, but it would be more than a year before they were complete.

The Center actually wanted a 360/50, but all the Model 50s that IBM was producing were needed for the Federal Aviation Administration's new air traffic control system.

One of the fun memories of the CP-40 Project was getting involved in debugging the 360/40 microcode, which had been modified not only to add special codes to handle the associative memory, but also had additional microcode steps added in each instruction decoding to ensure that the page(s) required for the operation's successful completion were in memory (otherwise generating a page fault). The microcode of the 360/40 comprised stacks of IBM punch card-sized Mylar sheets with embedded wiring. Selected wires were ''punched'' to indicate 1's or 0's. Midnight corrections were made by removing the appropriate stack, finding the sheet corresponding to the word that needed modification, and ''patching'' it by punching a new hole or by ''duping'' it on a modified keypunch with the corrections.


==============================
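
the relocation scheme Comeau describes ... a 64-entry associative array keyed on (user number, virtual page), with the matching entry's position supplying the high-order 6 bits of the real address ... can be sketched as a toy model (illustrative names and structure, not the actual hardware):

# toy model of the CP-40 360/40 relocation: entries[frame] = (user, vpage);
# an associative match yields the frame number for the high-order bits
PAGE = 4096

class AssocArray:
    def __init__(self):
        self.entries = [None] * 64          # one entry per 4K of 256K memory

    def load(self, frame, user, vpage):
        self.entries[frame] = (user, vpage)

    def translate(self, user, vaddr):
        vpage, offset = divmod(vaddr, PAGE)
        for frame, entry in enumerate(self.entries):   # associative search
            if entry == (user, vpage):
                return frame * PAGE + offset
        raise LookupError("page fault")     # no match -> page fault

arr = AssocArray()
arr.load(5, user=3, vpage=0x12)             # user 3's page 0x12 in frame 5
print(hex(arr.translate(3, 0x12345)))       # -> 0x5345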

random cp/40 references:
https://www.garlic.com/~lynn/93.html#0 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/93.html#23 MTS & LLMPS?
https://www.garlic.com/~lynn/93.html#25 MTS & LLMPS?
https://www.garlic.com/~lynn/94.html#37 SIE instruction (S/390)
https://www.garlic.com/~lynn/94.html#46 Rethinking Virtual Memory
https://www.garlic.com/~lynn/94.html#53 How Do the Old Mainframes
https://www.garlic.com/~lynn/94.html#54 How Do the Old Mainframes
https://www.garlic.com/~lynn/97.html#22 Pre S/360 IBM Operating Systems?
https://www.garlic.com/~lynn/98.html#28 Drive letters
https://www.garlic.com/~lynn/98.html#45 Why can't more CPUs virtualize themselves?
https://www.garlic.com/~lynn/99.html#139 OS/360 (and descendants) VM system?
https://www.garlic.com/~lynn/99.html#142 OS/360 (and descendants) VM system?
https://www.garlic.com/~lynn/99.html#174 S/360 history
https://www.garlic.com/~lynn/99.html#237 I can't believe this newsgroup still exists
https://www.garlic.com/~lynn/2000.html#52 Correct usage of "Image" ???
https://www.garlic.com/~lynn/2000.html#81 Ux's good points.
https://www.garlic.com/~lynn/2000.html#82 Ux's good points.
https://www.garlic.com/~lynn/2000c.html#42 Domainatrix - the final word
https://www.garlic.com/~lynn/2000c.html#79 Unisys vs IBM mainframe comparisons
https://www.garlic.com/~lynn/2000e.html#16 First OS with 'User' concept?
https://www.garlic.com/~lynn/2000f.html#30 OT?
https://www.garlic.com/~lynn/2000f.html#63 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001b.html#29 z900 and Virtual Machine Theory
https://www.garlic.com/~lynn/2001h.html#9 VM: checking some myths.
https://www.garlic.com/~lynn/2001h.html#46 Whom Do Programmers Admire Now???
https://www.garlic.com/~lynn/2001i.html#34 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#39 IBM OS Timeline?
https://www.garlic.com/~lynn/2001m.html#47 TSS/360
https://www.garlic.com/~lynn/2001m.html#49 TSS/360

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Microcode? (& index searching)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microcode? (& index searching)
Newsgroups: alt.folklore.computers,comp.arch,comp.lang.asm370
Date: Mon, 14 Jan 2002 12:36:49 GMT
gah@ugcs.caltech.edu (glen herrmannsfeldt) writes:
I thought the 2305 was 3MB/s. Mostly used for paging, transfer rate was pretty important.

2305s were limited capacity ... and were 3mb/sec (I took a short cut and was referring to the majority of disk activity ... and lots of configurations didn't have 2305s). the big issue for paging was more latency ... fixed head per track ... than paging rate. The 2301s did pretty good at 1.5mbyte/sec. table of device comparisons
https://www.garlic.com/~lynn/95.html#8 3330 Disk Drives
https://www.garlic.com/~lynn/99.html#6 3330 Disk Drives
https://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?

table of system comparisons
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
https://www.garlic.com/~lynn/98.html#46 The god old days(???)
https://www.garlic.com/~lynn/99.html#4 IBM S/360
https://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
https://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
https://www.garlic.com/~lynn/2002.html#5 index searching

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Microcode? (& index searching)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microcode? (& index searching)
Newsgroups: alt.folklore.computers,comp.arch,comp.lang.asm370
Date: Mon, 14 Jan 2002 12:41:40 GMT
gah@ugcs.caltech.edu (glen herrmannsfeldt) writes:
A commonly discussed case is a TCP stream between two fast machines. (Fast relative to when 10Mb ethernet came out.) Such machines can get the ACK packets out fast enough to signal a collision with the next data packet coming in. Numbers like 90% or 95% are seen.

turned out that we found that typical 10mbit/sec enet configurations were getting higher efective thruput than typical 16mbit/sec t/r configurations.

That is the ethernet newsgroup consensus, too. Though the TR people then like to argue that TR is more deterministic.


the sigcomm report table showed 95% for nominal conditions ... but the graph/table showed effective thruput degrading to a worst case of 85% of media bandwidth ... with 30 machines all in a tight loop constantly transmitting minimum sized packets (which would apply to ACKs also). The worst case was the scenario with the highest probability of multiple stations attempting to transmit simultaneously when the current packet transmission ended (two or more stations listening for the end of the current transmission and immediately starting to transmit, resulting in collisions).
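
a back-of-the-envelope model of why a small star-wired enet holds up even with 30 stations blasting minimum-size packets (my own sketch; the timings are assumptions): with short arms, a collision or idle round only costs about one round-trip, small compared to even a minimum frame time:

import random

def efficiency(n_stations, frame_us=51.2, round_trip_us=5.0, trials=10_000):
    busy = total = 0.0
    for _ in range(trials):
        # each backlogged station attempts with probability 1/n per round;
        # a round with exactly one transmitter succeeds, otherwise (idle
        # or collision) roughly one round-trip is wasted
        while sum(random.random() < 1 / n_stations
                  for _ in range(n_stations)) != 1:
            total += round_trip_us
        busy += frame_us
        total += frame_us
    return busy / total

print(f"30 stations, min-size frames: ~{efficiency(30):.0%} of media bandwidth")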

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

hollow files in unix filesystems?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: hollow files in unix filesystems?
Newsgroups: alt.folklore.computers
Date: Fri, 18 Jan 2002 00:56:26 GMT
Steve O'Hara-Smith writes:
As you see this will not work in UNIX, in fact delaying action as long as possible is a common UNIX technique. On many modern UNIX systems even malloc calls (which allocate memory from the system to a process dynamically) are rigged to always succeed and not actually allocate the memory at all until an attempt to access it causes a fault. They call this technique overcommit, I call it a pain in the arse. It is possible to allocate more memory to a process than the system has virtual memory and nothing will go wrong unless you actually use it. AIUI this technique was introduced to protect the system against applications that requested vast chunks of memory 'just in case', to me it makes debugging harder than it need be.

there are two possible ways of treating this ... which is starting to make more & more difference with large real storage machines. I've referred to the algorithm in the past as "dup" & "no-dup" (i.e. dup .... for duplicate).

basically, backing disk page slots can be allocated for every possible virtual memory page. In a "dup" (aka duplicate) implementation ... if you have 1gbyte real storage and 2gbyte swap/page file .... then it typically allows up to 2gbyte of virtual memory.

In a no-dup (no duplicate) implementation .... any time a page is in real memory, there is no corresponding slot reserved on disk. Anytime a page is brought into real memory from disk ... the corresponding disk slot is made available (aka a virtual memory page only exists in one unique place at a time ... either on disk or in real storage ... aka no-duplicate). With 1gbyte real storage plus a 2gbyte swap/page file, the total amount of virtual memory pages can approach 3gbytes.

Some unix systems have implemented a part-way fudge on this .... the first time a page is created in real storage ... there is no immediate disk slot allocation ... until the page actually gets written out (aka lazy allocation). This looks a little like a no-dup implementation on the initial (lazy) allocation ... but frequently then switches to a "duplicate" implementation once the initial allocation has been made (i.e. any subsequent time the page is brought in from disk to real storage, the corresponding disk slot isn't automatically released and made available).

It is possible to also build an implementation that dynamically switches between "dup" & "no-dup" based on virtual memory exceeding disk space. A "dup" algorithm has the advantage that if a virtual memory page has been selected for replacement and has not been changed during its current stay in real storage ... then the real storage copy and the disk duplicate are still identical and it isn't necessary to perform the write to disk. A "no-dup" implementation never has a duplicate saved on disk, so all pages selected for replacement have to be written to disk.
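
a minimal sketch of the dup/no-dup bookkeeping (invented names; the only differences are whether the disk slot is freed at page-in and whether an unchanged page can skip the write at replacement):

class Pager:
    def __init__(self, dup=True):
        self.dup = dup
        self.slot = {}          # vpage -> disk slot holding a backing copy
        self.dirty = {}         # vpage -> changed since last page-in?

    def page_in(self, vpage):
        self.dirty[vpage] = False
        if not self.dup:
            self.slot.pop(vpage, None)   # no-dup: free the slot on page-in

    def touch(self, vpage):              # page modified while in memory
        self.dirty[vpage] = True

    def replace(self, vpage):
        """page selected for replacement; returns True if a write is needed"""
        if self.dup and vpage in self.slot and not self.dirty[vpage]:
            return False        # disk duplicate still valid: skip the write
        self.slot[vpage] = object()      # allocate a slot and write the page
        return True

p = Pager(dup=True)
p.page_in(7)           # first touch: no backing copy yet (lazy allocate)
print(p.replace(7))    # True: initial page-out allocates the slot
p.page_in(7)           # brought back in; dup keeps the disk copy
print(p.replace(7))    # False: unchanged, duplicate on disk still good

p = Pager(dup=False)
p.page_in(7)           # no-dup: slot freed as the page comes in
print(p.replace(7))    # True: always has to write on replacement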

misc. page replacement postings:
https://www.garlic.com/~lynn/subtopic.html#wsclock

random dup/no-dup postings:
https://www.garlic.com/~lynn/93.html#12 managing large amounts of vm
https://www.garlic.com/~lynn/93.html#13 managing large amounts of vm
https://www.garlic.com/~lynn/94.html#9 talk to your I/O cache
https://www.garlic.com/~lynn/2000d.html#13 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001i.html#42 Question re: Size of Swap File
https://www.garlic.com/~lynn/2001l.html#55 mainframe question
https://www.garlic.com/~lynn/2001n.html#78 Swap partition no bigger than 128MB?????

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Microcode? (& index searching)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microcode? (& index searching)
Newsgroups: alt.folklore.computers
Date: Fri, 18 Jan 2002 01:17:51 GMT
nospam@nowhere.com (Steve Myers) writes:
Actually, I think there were 2 2305s: one model that transferred at 3 meg / second, and another model with twice the storage that transferred at 1 1/2 meg / second.

there were the 2301 and 2303 on the 360 ... fixed-head drums.

the 2301 was a 2303 that read/wrote data from 4 heads in parallel ... and got four times the transfer rate of the 2303 (with 1/4th the number of "tracks", each track four times the capacity) at a little under 1.5mbytes/sec transfer.

there were two 2305s ... both at 3mbyte/sec.

one had half the capacity and half the rotational delay .... aka half the disk heads were off-set 180 degrees so that either head could start read/write.

in the following comparison from
https://www.garlic.com/~lynn/95.html#10


machine         360/67  3081K   change
mips            .3      14      47*
pageable pages  105     7000    66*
users           80      320     4*
channels        6       24      4*
drums           12meg   72meg   6*
page I/O        150     600     4*
user I/O        100     300     3*
disk arms       45      32      4*?perform.
bytes/arm       29meg   630meg  23*
avg. arm access 60mill  16mill  3.7*
transfer rate   .3meg   3meg    10*
total data      1.2gig  20.1gig 18*

Comparison of 3.1L 67 and HPO 3081k

====================================

the '67 configuration had three 2301s (4mbyte each) on one channel. Peak capacity was 300 pages/sec ... 1.2mbyte/sec (avg. load was 50 percent of peak).

the 3081k configuration had six 12mbyte 2305s split 3 & 3 on two different channels. Peak thruput per channel was about 600 pages/sec (2.4mbyte/sec) for a total of 1200 pages/sec (4.8mbyte/sec) across the two channels. Peak thruput was less than 3mbyte/sec per channel in part because of record layout on the track ... there were gaps in the spacing (& therefore transfer) as the track rotated. avg. loading tended to be 50 percent of peak (paging was frequently very bursty).

the other 2305 had about 6mbyte capacity (1/2 the capacity) but it also had 1/2 the latency because of the track offset (the two devices revolved at the same speed ... but rather than avg. 1/2 revolution to come under a head, it only needed 1/4 revolution on avg. to come under a head). I never saw or heard of anybody that had one of the "low latency" 2305s.
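
the arithmetic (using the ~10ms revolution implied by the 5ms avg. rotational delay in the device comparison table):

# average rotational delay with and without the 180-degree offset heads
# (assumes both 2305 models revolve at the same speed)
rev_ms = 10.0
print("standard:     avg", rev_ms / 2, "ms (half a revolution)")
print("offset heads: avg", rev_ms / 4, "ms (quarter revolution)")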

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Infiniband's impact was Re: Intel's 64-bit strategy

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband's impact was Re: Intel's 64-bit strategy
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 19 Jan 2002 00:36:47 GMT
"Bill Todd" writes:
I don't think they're at all the same. In Brook's case, the typical issues are difficulty and/or added overhead in parallelizing the problem, neither of which applies to the overtime case.

Indeed, if you can solve 50% of the problem in 8 hours you can solve 100% in 16 hours - as long as they're comparable in quality. The problem with overtime involves the decrease in per-hour work quality typically associated with increasing the number of hours worked in a given time period (above some lower limit where set-up overheads can be ignored) - and this can vary a great deal among individuals (and with their motivation).


i always thot the overtime hrs were the most productive .... the standard 8 hr day was consumed with phone calls, meetings, discussions, etc. It wasn't until everybody left and went home that any real work got done. The case could be made that 10% (or less) of the problem is solved in 8hrs ... and after that is when the real work and the other 90+ percent gets done.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Infiniband's impact was Re: Intel's 64-bit strategy

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband's impact was Re: Intel's 64-bit strategy
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 19 Jan 2002 01:04:54 GMT
Anne & Lynn Wheeler writes:
i always thot the overtime hrs were the most productive .... the standard 8 hr day was consumed with phone calls, meetings, discussions, etc. It wasn't until everybody left and went home that any real work got done. The case could be made that 10% (or less) of the problem is solved in 8hrs ... and after that is when the real work and the other 90+ percent gets done.

when I was an undergraduate, I got a key to the computing center machine room and would have it all to myself from 8am sat. until 8am monday. Initially when I started they had a 709 and a 360/30 (that spent a lot of time in 1401-mode running MPIO, ur<->tape front-end for the 709) and later was replaced with 360/67. I always thot I was extremely productive in those 48 hrs ... nobody to bother or distract me. It was the monday classes that were a little problem ... not having slept since friday night.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Infiniband's impact was Re: Intel's 64-bit strategy

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband's impact was Re: Intel's 64-bit strategy
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 19 Jan 2002 02:33:41 GMT
Bruce Hoult writes:
No, that makes the case that the eight hours that you work should have only minimal overlap with everyone else's eight hours.

problem is that many believe that is the "work" ... going to meetings, taking phone calls, etc. .... in which case, they insist on the overlap.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Infiniband's impact was Re: Intel's 64-bit strategy

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband's impact was Re: Intel's 64-bit strategy
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 19 Jan 2002 02:40:27 GMT
Anne & Lynn Wheeler writes:
when I was an undergraduate, I got a key to the computing center machine room and would have it all to myself from 8am sat. until 8am monday. Initially when I started they had a 709 and a 360/30 (that spent a lot of time in 1401-mode running MPIO, ur<->tape front-end for the 709) and later was replaced with 360/67. I always thot I was extremely productive in those 48 hrs ... nobody to bother or distract me. It was the monday classes that were a little problem ... not having slept since friday night.

old postings re: a three-shift work day (1st shift in bldg. 28, 2nd shift in bldgs. 14&15, and 3rd shift in bldg. 90):
https://www.garlic.com/~lynn/2001e.html#64 Design (Was Re: Server found behind drywall)
https://www.garlic.com/~lynn/2001h.html#29 checking some myths.
https://www.garlic.com/~lynn/2002.html#10 index searching

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

hollow files in unix filesystems?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: hollow files in unix filesystems?
Newsgroups: alt.folklore.computers
Date: Sat, 19 Jan 2002 09:44:26 GMT
Juha Laiho writes:
Hmm.. so, in this case, the disk space acts as a kind of cache? If there's room on the paging space, also pages that are active in the memory will reside on disk, and if space becomes short, the on-disk copies of active pages can be reused -- and also, if the on-disk versions are not stale, the active memory page can simply be freed for use?

in the duplicate case ... the disk "cache" contains copies of everything that is also in real storage (modulo some of the lazy allocation algorithms that don't allocate disk space until the first page-out operation after initial creation). If the most recent copy of a page in memory has not been modified ... then the memory and disk copies are the same (i.e. the disk copy is not stale) and the memory copy does not need to be written when selected for replacement. Lots of program executables would regularly fall into this category.

in the no-duplicate case ... say because of wanting to conserve disk space, the implementation immediately releases/frees the disk slot anytime a page is brought into memory. There are no longer duplicate pages in memory and on disk; so anytime a page in memory has been selected for replacement, it always has to be written (to a new disk slot). From the implementation standpoint, the no-duplicate case is a more general version of some of the lazy allocation implementations (which don't actually allocate a disk slot until a page is selected for replacement and must be written out; which is exactly what the no-duplicate implementation has to do ... but the no-duplicate implementation also releases disk slots when pages are read into memory).

A specific implementation could also dynamically switch back & forth between the "no-dup" and the "dup" operation ... say because of reaching some disk space constraint (dynamically deciding about releasing disk slots when pages are brought into memory).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

index searching

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: index searching
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 19 Jan 2002 18:35:22 GMT
Brian Inglis writes:
VM/HPO recommended that only the first 9 of 10 4K pages per track on 3380 systems be used for multiple page reads/writes, to eliminate that problem. Another classic time/space tradeoff most people made happily.

there was a lot of issue with regard to being able to switch heads between platters mid-track ... not just at end-of-track. The controller and channel had latency issues being able to fetch the necessary commands and parameters from memory and execute them while the platter was spinning.

On 3330s there was enuf room to format a short dummy block of 110 bytes between each 4k record. For some machines and some controllers and some disks ... the 110-byte filler block was sufficient for all the switch operations to be performed before the next data block was under the head (but on numerous configurations the latency was too long, the start of the next record had already rotated past the head, and that resulted in having to make a complete revolution).

On 3380 there wasn't enuf room to put a dummy block between every one of the 9 4k pages to provide enough delay for the head switch to finish before the next record came under the head. So 3380 had a more complex layout ... it had a dummy block large enuf for the head switch operation inserted between every three 4k records, i.e.

4kblk, 4kblk, 4kblk, dummy blk, 4kblk, 4kblk, 4kblk, dummy blk, 4kblk, 4kblk, 4kblk

was formatted on each track ... and HPO could perform a head switch either where it wasn't attempting i/o to adjacent blocks, or where a dummy block separated the adjacent blocks.
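
illustrative arithmetic for why the fillers work: the time a filler block takes to pass under the head is the window available to finish the head switch (the 3330 number uses the .8mb/sec rate from the device comparisons; the 3380 filler size below is a made-up example, the actual size isn't given here):

# window provided by a filler block; 1 mbyte/sec is 1 byte/usec, so
# bytes / (mbyte/sec) gives microseconds
def gap_time_us(filler_bytes, rate_mb_per_sec):
    return filler_bytes / rate_mb_per_sec

print("3330: 110-byte filler at .8mb/sec ->", gap_time_us(110, 0.8), "usec")
print("3380: e.g. 480-byte filler at 3mb/sec ->", gap_time_us(480, 3.0), "usec")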

random refs;
https://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#11 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#12 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001b.html#69 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001j.html#3 YKYGOW...

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Infiniband's impact was Re: Intel's 64-bit strategy

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband's impact was Re: Intel's 64-bit strategy
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 19 Jan 2002 19:57:49 GMT
CBFalconer writes:
Never could keep that resolve. Sooner or later I just had to point out some of the imbecilities that came up, especially if they would impact me.

i got blamed for doing it on a much grander scale.
https://www.garlic.com/~lynn/2001g.html#5 New IBM history book out
https://www.garlic.com/~lynn/2001g.html#6 New IBM history book out
https://www.garlic.com/~lynn/2001g.html#7 New IBM history book out
https://www.garlic.com/~lynn/2001j.html#31 Title Inflation

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

hollow files in unix filesystems?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: hollow files in unix filesystems?
Newsgroups: alt.folklore.computers
Date: Sat, 19 Jan 2002 21:16:47 GMT
Anne & Lynn Wheeler writes:
slot). From the implementation stand-point, the no-duplicate case is a much more general implementation of some of the lazy allocation implementations (which don't actually allocate a disk slot until a page is selected for replacement and must be written out; which is exactly what the no-duplicate implementation has to do ... but also releases disk slots when pages are read into memory).

i also noticed a number of unixes in the early '90s having an adverse operational interaction where they had implemented lazy allocate but not no-dup; basically a daemon or something would get started and run completely in real memory for some period of time (with no disk slots yet allocated). At some undetermined point in the future (possibly an hour) something would happen to select the daemon's pages for (initial) page-out ... requiring disk slot allocation. At this time, it would be discovered that there were no more available disk slots and the daemon would be aborted.

If the lazy allocation had been implemented as part of a full no-dup policy, then (modulo a couple reserved memory pages held for doing worst case scenario for page exchange between memory and disk) if the page could be created ... then there would be a disk slot.

The page out was being required because

1) a new page was being created, or
2) a page was being brought in from disk.

If it is #2 and a "no-dup" implementation, then as soon as a page is in from disk ... the slot can be made available for the page to go out.

If it is #1 and there are no disk slots, abort the creation of the new page (and the corresponding task).

There is a slight advantage to aborting tasks that are in the process of being created, as opposed to aborting tasks that have been around for a long time (at least at the micro-level). That doesn't say that there might not be a more global policy that ranks tasks by importance when selecting which to abort in order to make space available.
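
a sketch contrasting where the failure lands under the two policies (invented names, not any particular unix):

class Backing:
    def __init__(self, slots):
        self.free = slots

    def create_page_nodup(self, task):
        # full no-dup accounting: a page is only created if a slot could
        # back it ... the *new* task is refused, not some old one
        if self.free == 0:
            raise MemoryError(f"refuse page creation for new task {task}")
        self.free -= 1

    def first_pageout_lazy(self, task):
        # lazy-allocate only: the slot is demanded at first page-out,
        # possibly hours after creation ... the victim is whatever
        # long-running task happens to get paged out
        if self.free == 0:
            raise MemoryError(f"abort long-running task {task}")
        self.free -= 1

store = Backing(slots=0)
try:
    store.first_pageout_lazy("daemon-up-for-an-hour")
except MemoryError as e:
    print(e)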

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

index searching

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: index searching
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 19 Jan 2002 21:46:06 GMT
Brian Inglis writes:
VM/HPO recommended that only the first 9 of 10 4K pages per track on 3380 systems be used for multiple page reads/writes, to eliminate that problem. Another classic time/space tradeoff most people made happily.

hpo had a later special implementation, called "big pages", that did use all 10 4k pages per track. As an attempt to address the relative disk "system" performance degradation (i.e. the rest of system resources increasing much faster than disks were getting faster), an attempt was made to do larger transfers per disk I/O, i.e.
https://www.garlic.com/~lynn/95.html#10

system          3.1L            HPO     change
machine         360/67          3081K

mips            .3              14      47*
pageable pages  105             7000    66*
users           80              320     4*
channels        6               24      4*
drums           12meg           72meg   6*
page I/O        150             600     4*
user I/O        100             300     3*
disk arms       45              32      4*?perform.
bytes/arm       29meg           630meg  23*
avg. arm access 60mill          16mill  3.7*
transfer rate   .3meg           3meg    10*
total data      1.2gig          20.1gig 18*

aka disk transfer speed had increased much faster than disk access times. "Big pages" guaranteed that page operations were done ten (i.e. a full track) at a time.

On page-out, ten virtual pages for the same address space were found and scheduled as a single full-track write to a full-track area. Then if any virtual page member of a "big page" later had a fault and needed to be brought in, all ten pages on the track were fetched in a full-track i/o operation. Members of a specific "big page" could change over time, depending on whether they were all viewed as having been referenced during the same stay in memory.

Since the members of big pages would change over time ... by definition a full-track big page on disk became stale (it could be different every time it was written out) and so effectively a "no-dup" implementation was used ... recent dup/no-dup implementation discussion
https://www.garlic.com/~lynn/2002b.html#10 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002b.html#16 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002b.html#19 hollow files in unix filesystems?

The big page implementation also used a moving cursor write algorithm (similar to some of the log structured filesystems) ... it was desirable to have contiguous space on disk that was five to ten times larger than needed. As the arm moved across the space, multiple big page writes tended to be queued up and they would all try to write to the same cylinder w/o having to move the arm. If the contiguous space became too full, then the probability increased that at any specific arm position there were already allocated big pages on disk. The big page cursor sweep write algorithm liked to have lots of available space for doing multiple write operations w/o having to move the arm.

There is one final optimization that I don't believe the big-page implementation used ... which is akin to the immediate full-track transfer of some of the disk caches. Since you know that you are going to perform a full-track operation ... it would be desirable to start data transfer as soon as the head settled ... w/o having to wait for rotation to a specific starting record. Some of the full-track implementations would key each record not with a simple CCHHR ... but with an "id" as to what virtual page was located there. The starting CKD search operation would accept any key value (as opposed to searching for a specific key value) ... and then it would do a chained read of count, key & data. The keys would go to one set of buffers and the (ten) virtual pages would go into normal memory slots. After the operation was complete, the keys would be examined to determine which record was read first and then all the tables would be updated accordingly.
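
the net effect (hedged sketch; the record layout here is made up) is that the read can start at whatever record happens to rotate under the head first, with the keys used afterward to sort out which buffer holds which virtual page:

def read_full_track(track_records, start):
    # track_records: list of (key, data) where the key identifies the
    # virtual page; start: whichever record is under the head when it settles
    n = len(track_records)
    page_map = {}
    for i in range(n):                   # one revolution, no rotational wait
        key, data = track_records[(start + i) % n]
        page_map[key] = data             # key -> buffer; order doesn't matter
    return page_map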

some past big page discussions:
https://www.garlic.com/~lynn/94.html#9 talk to your I/O cache
https://www.garlic.com/~lynn/2000d.html#9 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#11 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#13 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001.html#18 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001.html#19 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001c.html#17 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001d.html#49 VTOC position
https://www.garlic.com/~lynn/2001d.html#68 I/O contention
https://www.garlic.com/~lynn/2001k.html#60 Defrag in linux? - Newbie question
https://www.garlic.com/~lynn/2001l.html#55 mainframe question

some disk comparison discussions
https://www.garlic.com/~lynn/95.html#8 3330 Disk Drives
https://www.garlic.com/~lynn/99.html#6 3330 Disk Drives
https://www.garlic.com/~lynn/2001b.html#61 Disks size growing while disk count shrinking = bad performance
https://www.garlic.com/~lynn/2001l.html#41 mainframe question
https://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?


                  2305    2314    3310    3330    3350    3370    3380
data cap, mb      11.2    29      64      200     317     285     630
avg arm acc, ms   0       60      27      30      25      20      16
avg rot del, ms   5       12.5    9.6     8.4     8.4     10.1    8.3
data rate, mb     1.5     .3      1       .8      1.2     1.8     3
4k blk acc, ms    7.67    85.8    40.6    43.4    36.7    32.3    25.6
4k acc per sec    130     11.6    24.6    23      27      31      39
40k acc per sec   31.6    4.9     13.     11.3    15.     19.1    26.6
4k acc/sec/meg    11.6    .4      .38     .11     .08     .11     .06
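
the derived rows follow from the three raw figures: 4k block access time = avg arm access + avg rotational delay + transfer time for 4096 bytes. a quick check against the 3380 column (the table's 25.6 vs the 25.7 here is presumably just rounding):

arm_ms, rot_ms, rate_mb = 16.0, 8.3, 3.0    # 3380 raw figures above
xfer_ms = 4096 / (rate_mb * 1e6) * 1000     # ~1.37 ms to move one 4k block
acc_ms = arm_ms + rot_ms + xfer_ms          # ~25.7 ms per 4k access
per_sec = 1000 / acc_ms                     # ~39 4k accesses per second
per_sec_per_meg = per_sec / 630             # ~.06 (630mb per arm)
print(round(acc_ms, 1), round(per_sec), round(per_sec_per_meg, 2))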

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

AOL buys Redhat and ... (link to article on eweek)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: AOL buys Redhat and ... (link to article on eweek)
Newsgroups: linux.redhat
Date: Tue, 22 Jan 2002 23:06:17 GMT
"Kevin Morenski" writes:
We cannot forget the Coca Cola or coffee. In my case, Coca Cola AND Coffee

i was at a conference ... where they brought in a truckload of jolt and used the conference as a beta test for the product .... possibly the same year that 60 minutes did a (real) number on the conference (my wife calls it greybeards up in the santa cruz mountains).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Infiniband's impact was Re: Intel's 64-bit strategy

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband's impact was Re: Intel's 64-bit strategy
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 22 Jan 2002 23:20:17 GMT
"del cecchi" writes:
Some of us are less willing to put in massive amounts of free overtime to remedy poor planning by management. "A lack of planning on your part does not constitute an emergency on my part."

i'm not sure i ever put in any free overtime because of direct poor planning by management. I put in a lot of free overtime because of possibly poor planning by the infrastructure ... but it was on projects that I wanted to do ... not ones management wanted. It did complicate matters if I had con'ed random other organizations into letting me work on their problems ... things that my management had nothing at all to do with (you typically don't get raises or promotions for solving other peoples' problems).

If direct management got too frequently into "poor planning" situations ... it typically became time to revise the paradigm of the problem they were trying to address and apply a lot of KISS.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Infiniband's impact was Re: Intel's 64-bit strategy

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband's impact was Re: Intel's 64-bit strategy
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 22 Jan 2002 23:29:29 GMT
Anne & Lynn Wheeler writes:
i'm not sure i ever put in any free overtime because of direct poor planning by management. I put in a lot of free overtime because of possibly poor planning by the infrastructure ... but it was on projects that I wanted to do ... not ones management wanted. It did complicate matters if I had con'ed random other organizations into letting me work on their problems ... things that my management had nothing at all to do with (you typically don't get raises or promotions for solving other peoples' problems).

ok, i did it once. I was an undergraduate and the computer center director and the ibm branch manager were playing politics. One afternoon as part of politics, the director told the branch office that they had to remove the 2301 the next day. I had to spend all of 2nd and 3rd shift rebuilding the os/360 system to get sys1.svclib off the device.

I then really made myself a pain: i called a meeting in the director's office with the branch manager and told them that whatever their differences ... i would never do that again ... I needed at least two weeks' notice for any future system rebuilds.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Infiniband's impact was Re: Intel's 64-bit strategy

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband's impact was Re: Intel's 64-bit strategy
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 23 Jan 2002 00:10:51 GMT
Anne & Lynn Wheeler writes:
ok, i did it once. I was an undergraduate and the computer center director and the ibm branch manager were playing politics. One afternoon as part of politics, the director told the branch office that they had to remove the 2301 the next day. I had to spend all of 2nd and 3rd shift rebuilding the os/360 system to get sys1.svclib off the device.

it was a standard release 9.5 build using the starter system. I was already somewhat miffed because the system build had been done by an ibm advisory se and one of the senior comp center staff using standard dedicated time over part of the weekend. Normally, I had the machine room all to myself for 48 hours straight from sat. 8am until mon. 8am .... but their system rebuild cut into one of my weekends.

When it came time to do the release 11 system build ... I tore apart the stage I & stage II sysgen process and rebuilt it so that i could do it during the day on the standard operational production system rather than having to do it using dedicated time off-shift (with the starter system; leaving the weekends free to do stuff I wanted to do). This also gave me a chance to re-organize the order of the build so that I could have optimal placement of files & members to reduce the avg. arm seek time.

random refs:
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
https://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14
https://www.garlic.com/~lynn/97.html#22 Pre S/360 IBM Operating Systems?
https://www.garlic.com/~lynn/97.html#28 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/98.html#21 Reviving the OS/360 thread (Questions about OS/360)
https://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2000c.html#10 IBM 1460
https://www.garlic.com/~lynn/2000d.html#50 Navy orders supercomputer
https://www.garlic.com/~lynn/2001.html#26 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001d.html#48 VTOC position
https://www.garlic.com/~lynn/2001f.html#2 Mysterious Prefixes
https://www.garlic.com/~lynn/2001f.html#26 Price of core memory
https://www.garlic.com/~lynn/2001g.html#22 Golden Era of Compilers
https://www.garlic.com/~lynn/2001g.html#29 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001h.html#12 checking some myths.
https://www.garlic.com/~lynn/2001h.html#60 Whom Do Programmers Admire Now???
https://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
https://www.garlic.com/~lynn/2001k.html#37 Is anybody out there still writting BAL 370.
https://www.garlic.com/~lynn/2001l.html#5 mainframe question
https://www.garlic.com/~lynn/2001l.html#36 History
https://www.garlic.com/~lynn/2001l.html#39 is this correct ? OS/360 became MVS and MVS >> OS/390
https://www.garlic.com/~lynn/2002.html#5 index searching

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Question about root CA authorities

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Question about root CA authorities
Newsgroups: sci.crypt
Date: Wed, 23 Jan 2002 05:02:27 GMT
Erwann ABALEA writes:
Once you got to this point, why not going further? Since the CA is compromised, why should the CRL be valid? After all, the CRL is only valid until the CA gets compromised...

Since the trust attached to the CA doesn't come from a digital signature, then there's no mean to remove this trust with another digital signature.

That's precisely what David wrote (if I understood it correctly). The trust attached to the root key has to come from an off-band way.


note that the credit card industry has coped with this problem before.

basically the design point for the whole certificate infrastructure is an "offline" environment where the relying parties have no online/neartime access to the certification authority and therefore must trust the "credentials". The analogous solution in the '60s credit card world was the monthly booklets of invalid "credentials" (the identifying numbers on those little pieces of plastic).

Note that in the '70s, the credit card business recognized that the offline solution didn't translate to the online world and went with online/neartime transactions .... and stopped trying to force fit an offline paradigm into an online infrastructure ... aka rather than trying to distribute tens of millions of electronic invalid account number booklets every day ... perpetuating the offline paradigm analogy ... they instead went to online transactions.

The (credential) pieces of plastic still looked the same ... but the embossed number (as a credential) was augmented with the magnetic stripe for doing real online transactions in an online paradigm ... rather than trying to force fit an offline paradigm into an online world (which has been the CA, CRL scenario ... attempting to make a basically offline design point contorted into a poor semblance of an online operation).

random refs:
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM SHRINKS by 10 percent

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM SHRINKS by 10 percent
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 23 Jan 2002 17:35:01 GMT
bblack@FDRINNOVATION.COM (Bruce Black) writes:
Lets be fair, it is hard to find a company today that has the kind of loyality to its employees that used to be common 30 years ago (I happen to work for one, but Innovation is exceptional). I understand that even a lot of Japanese companies, who used to be famous for "jobs for life", no longer have that attitude.

i believe ibm world-wide employment had hit a peak approaching 500k.

I had done a simple spreadsheet circa 1981 that showed that, given the trends commoditizing various aspects of the computing industry, IBM would have to shrink to at least half that size. I believe that the year after IBM went into the red ... they were down around 200k (there have since been some ups & downs in the number with various acquisitions, spin-offs and other programs).

In the mid '80s there was enormous investment in plant expansions ... apparently anticipating that the company's growth could continue doubling almost indefinitely (aka there was no possibility that the computing industry was anywhere close to market saturation).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM SHRINKS by 10 percent

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM SHRINKS by 10 percent
Newsgroups: alt.folklore.computers
Date: Thu, 24 Jan 2002 20:12:11 GMT
lwinson@bbs.cpcn.com (lwin) writes:
IBM's strength from day one has been in effective utilization of machines to solve problems. While its army of support personnel were called "salesmen", they were in reality highly trained and competent systems experts.

Prior to june 23, 1969 ... there tended to be large arrays of "system engineers" at customer accounts helping with all sorts of things (and also helping the sales people keep on top of the customer requirements and needs). june 23, 1969 was unbundling, where services, people, and hardware all had to be separately priced.

As a result, a large amount of the ubiquitous presence of the system engineers started to evaporate, since customers got billed for the time system engineers spent doing stuff for them. this lowered not only the number of system engineers in the ranks (lower demand) but also the quality of the system engineers. A significant percentage of system engineer training had been based on being part of a (large) team at customer sites ... effectively learning the real world "on the job". I remember as an undergraduate that for a couple years, IBM would rotate a crop of new system engineers through our shop every six months and I got to teach them some of the things that I had developed.

This had significant long term downside effects on skills and solutions. For instance, in the '60s a large number of application solutions were developed in customer shops in response to real live customer needs. After June 23, 1969, that tight coupling ... a large body of skilled people looking to satisfy customer requirements while in daily close proximity to real world customer situations ... dried up.

Many of these "in situ" solutions evolved into significant products and are still around today. At one time it seemed like nearly every major application solution had evolved out of some customer (or internal) data processing shop. The joke became that the customer shops were the development environment and the corporate organizations that were called "development" groups were in fact maintenance organizations (a little name inflation). These organizations were frequently "catchers" for applications developed in customer environments and became responsible for maintaining those applications (but never participated in the original development).

The skill downgrading that started on june 23, 1969 continued until most system engineers were little more than a glorified telephone number lookup service ... i.e. the system engineer listened to the customer's question and then could find the number to call to get the answer.

i helped support HONE ... which all the (mainframe related and various other) "field" personnel used (sales people, system engineers, field engineers, etc) ... basically online system access for every branch office in the US and eventually the world. Starting with the 370 115/125, ordering mainframes was so complex that the sales people were required to use a HONE application to generate the order.

misc. past comments on june 23rd
https://www.garlic.com/~lynn/98.html#42 early (1950s & 1960s) IBM mainframe software
https://www.garlic.com/~lynn/99.html#29 You count as an old-timer if (was Re: origin of the phrase
https://www.garlic.com/~lynn/99.html#30 You count as an old-timer if (was Re: origin of the phrase
https://www.garlic.com/~lynn/99.html#58 When did IBM go object only
https://www.garlic.com/~lynn/2001c.html#18 On RC4 in C
https://www.garlic.com/~lynn/2001e.html#6 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001l.html#30 mainframe question

misc. hone (& apl) references:
https://www.garlic.com/~lynn/subtopic.html#hone

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

First DESKTOP Unix Box?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: First DESKTOP Unix Box?
Newsgroups: alt.folklore.computers
Date: Sat, 26 Jan 2002 20:23:41 GMT
Pete Fenelon writes:
(I think the SUSE distro we bought to install a couple of servers at work was six CDs or a fairly full DVD. Shocking. My first Linux system (0.12 with Poe's IGL and pretty much everything that had been ported) sat comfortably in 30MBytes at the end of an old 85 meg IDE disc. and had room to work in. Oh, and it ran in 4 meg!) -- which was of course twice (or was it four times) the RAM of the VAX-11/750 that could support 30 of us at once a few years before :/

or CP/67 running in 768k real storage (104 4k pages for paging after fixed kernel requirements), 75 users, mixed-mode operation (interactive, program development, test, apl modeling ... the sort of stuff frequently done w/spreadsheets today ... and various kinds of batch), with 95th percentile subsecond response time for trivial interactive operations (and, I believe, a possibly somewhat slower processor than the 750?).

random refs:
https://www.garlic.com/~lynn/94.html#43 Bloat, elegance, simplicity and other irrelevant concepts
https://www.garlic.com/~lynn/94.html#55 How Do the Old Mainframes Compare to Today's Micros?
https://www.garlic.com/~lynn/98.html#12 S/360 operating systems geneaology
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager
https://www.garlic.com/~lynn/2001f.html#47 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001h.html#26 TECO Critique
https://www.garlic.com/~lynn/2001l.html#6 mainframe question

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

windows XP and HAL: The CP/M way still works in 2002

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: windows XP and HAL: The CP/M way still works in 2002
Newsgroups: alt.folklore.computers
Date: Sun, 27 Jan 2002 11:53:53 GMT
Brian Inglis writes:
IBM uses HAL in AIX and OS/2, from which MS copied it into WIndows NT, which ran on x86, Alpha, PPC -- you should know better than to credit MS with ideas -- MS has never had an original idea in its entire existence.

AFAIK the HAL abstracts the low level kernel layers for CPU startup, memory management, interrupt and task handling, whereas the BIOS abstracts the handling of I/O devices. As CP/M was written in Z80 assembler, there was not much point in a HAL, but with all HLL OSes, there are always a few machine dependent functions that need to be redone for each architecture in assembler.


PC/RT was originally going to be an office products offering, as a displaywriter follow-on. It was going to run the CPr operating system and everything was written in PL.8. The 801 used for this target was designed with a number of hardware/language/system trade-offs. The operating system ran with no protection domains, all program protection checking being done at compile and bind time. One of the trade-offs was an extremely limited number of different virtual shared objects concurrently in the same virtual address space. The justification was that inline application code could switch the "segment" registers as trivially as any general register contents could be switched (no need for protection domains and authorization checking when performing virtual address pointer operations).

To some extent that resulted in the claim that the PC/RT (and later RS/6000) were 40-bit (and later 56-bit) virtual address machines. Application addressing is typically address-register+displacement. The PC/RT sort-of prefixed the 12-bit segment register id to the virtual address for a "real virtual address" ... and since the application could change a segment register as trivially as it could change an address register ... the total virtual address space directly available to the application was a combination of the segment register ID (bits) and the bits from address-reg+displacement.
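
a toy illustration of that arithmetic (assuming the 12-bit segment ids and 16 segment registers usually described for the PC/RT's ROMP chip; the function name is made up): the top 4 bits of a 32-bit effective address pick a segment register, whose 12-bit id replaces them, giving 12+28 = 40 bits:

def virtual_address(segregs, eff_addr):
    reg = (eff_addr >> 28) & 0xF       # which of 16 segment registers
    offset = eff_addr & 0x0FFFFFFF     # 28-bit offset within the segment
    return (segregs[reg] << 28) | offset

segregs = [0] * 16
segregs[3] = 0xABC                     # application loads a 12-bit segment id
print(hex(virtual_address(segregs, 0x30001234)))   # -> 0xabc0001234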

I believe the analysis was that while the maximum number of displaywriter stations that could be connected to the box resulted in a fairly attractive price/seat, the smallest entry level price for the box was larger than the maximum customer displaywriter configuration. As a result the project was canceled and the organization was potentially going to be disbanded.

Some analysis was done that there was this emerging(?) unix technology that could be adapted quickly to any platform with a minimum of time and cost to ship a product to customers. IBM had already contracted for one such port for the PC (PC/IX).

It was decided to contract with Interactive to do a similar port to this displaywriter follow-on. However, there was still the question of what to do with the organization and all the PL.8 programmers. The solution was to state that all of these PL.8 programmers already knew the hardware specifics and that they could implement in PL.8 a virtual machine hardware abstraction layer (the VRM) for Interactive to port to, and that this would significantly reduce the elapsed time it would take to do the port (compared with Interactive having to learn the low-level hardware interface for doing the port). The other, sort-of justification was the CP/67 & VM/370 virtual machine hypervisor example (originating in the mid '60s) ... however CP/67 & VM/370 conformed to the "real" machine architecture ... while the VRM was a higher level abstraction.

Note that while the hardware had been designed for a closed operating system, with virtual address segment register manipulations done directly inline in application code (as simply as general registers were manipulated), the unix port required that such things be moved into the kernel because of the requirement to do security, access & authorization validation. And of course, since virtual segment management was now in the kernel and not directly controllable by the application ... the 40-bit (and later 56-bit) virtual address claim was even more contrived.

It turns out that the port to the VRM (ignoring the significant increase in total resources) significantly increased the elapsed port time as well as creating a whole new non-standard device driver culture. Special, non-standard unix device drivers had to be written to the VRM abstraction layer, and then separate device drivers had to be written for the VRM itself.

A later port of BSD4.3 to the bare PC/RT hardware (called AOS) was done in substantially less time than it took to create the original AIX offering (supporting the statement that the Interactive port to the VRM abstraction layer took much longer than if they had learned the low-level hardware and done the port directly).

A port of PICK to the PC/RT VRM abstraction layer did show that PICK and AIX could run simultaneously on the same machine, using the VRM as an (abstract) virtual machine hypervisor.

random refs:
https://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party
https://www.garlic.com/~lynn/98.html#25 Merced & compilers (was Re: Effect of speed ... )
https://www.garlic.com/~lynn/98.html#26 Merced & compilers (was Re: Effect of speed ... )
https://www.garlic.com/~lynn/99.html#2 IBM S/360
https://www.garlic.com/~lynn/99.html#36 why is there an "@" key?
https://www.garlic.com/~lynn/99.html#64 Old naked woman ASCII art
https://www.garlic.com/~lynn/99.html#65 Old naked woman ASCII art
https://www.garlic.com/~lynn/99.html#129 High Performance PowerPC
https://www.garlic.com/~lynn/99.html#146 Dispute about Internet's origins
https://www.garlic.com/~lynn/2000.html#49 IBM RT PC (was Re: What does AT stand for ?)
https://www.garlic.com/~lynn/2000e.html#27 OCF, PC/SC and GOP
https://www.garlic.com/~lynn/2000f.html#74 Metric System (was: case sensitivity in file names)
https://www.garlic.com/~lynn/2001.html#4 Sv: First video terminal?
https://www.garlic.com/~lynn/2001e.html#76 Stoopidest Hardware Repair Call?
https://www.garlic.com/~lynn/2001f.html#0 Anybody remember the wonderful PC/IX operating system?
https://www.garlic.com/~lynn/2001f.html#1 Anybody remember the wonderful PC/IX operating system?
https://www.garlic.com/~lynn/2001f.html#20 VM-CMS emulator
https://www.garlic.com/~lynn/2001f.html#22 Early AIX including AIX/370
https://www.garlic.com/~lynn/2001f.html#43 Golden Era of Compilers
https://www.garlic.com/~lynn/2001f.html#45 Golden Era of Compilers
https://www.garlic.com/~lynn/2001h.html#74 YKYGOW...
https://www.garlic.com/~lynn/2001i.html#5 YKYGOW...
https://www.garlic.com/~lynn/2001j.html#20 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001l.html#50 What makes a mainframe?
https://www.garlic.com/~lynn/2001n.html#55 9-track tapes (by the armful)
https://www.garlic.com/~lynn/2002.html#17 index searching

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

windows XP and HAL: The CP/M way still works in 2002

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: windows XP and HAL: The CP/M way still works in 2002
Newsgroups: alt.folklore.computers
Date: Mon, 28 Jan 2002 03:21:29 GMT
"Rupert Pigott" writes:
Looking back on cool ideas which I think were pretty original and non-obvious, I'd say that Virtual Memory is the best example I can find. I'm fairly sure you guys have a ton of examples you can think of. :P

virtual machines?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

bzip2 vs gzip (was Re: PDP-10 Archive migration plan)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: bzip2 vs gzip (was Re: PDP-10 Archive migration plan)
Newsgroups: alt.sys.pdp10,alt.folklore.computers
Date: Mon, 28 Jan 2002 15:26:54 GMT
robert writes:
MIME does NOT rely on the 'extension' part of a file-name to determine processing. the file name, and the 'content type' are explicitly two entirely separate entities, per the RFC standard. MIME is also concerned, primarily, with the _current_ use of a file, and how to display it. When editing, _as_source_, a web-page, the appropriate content-type is "text/plain", since one does -not- want it "rendered".

misc mime rfcs

goto
https://www.garlic.com/~lynn/rfcietff.htm

and click on Term (term->RFC#)

& scroll down to "MIME" acronym

multipurpose internet mail extensions (MIME )
see also mail
3218 3217 3204 3185 3183 3156 3126 3125 3073 3058 3047 3030 3023 3016 3009 3003 2987 2984 2978 2958 2957 2938 2936 2927 2913 2912 2910 2876 2854 2797 2785 2781 2738 2703 2652 2646 2634 2633 2632 2631 2630 2586 2565 2557 2534 2533 2530 2518 2506 2503 2480 2442 2426 2425 2424 2423 2422 2392 2388 2387 2376 2346 2318 2312 2311 2305 2302 2301 2298 2294 2293 2278 2231 2220 2184 2183 2164 2163 2162 2161 2160 2159 2158 2157 2156 2152 2130 2112 2111 2110 2083 2077 2049 2048 2047 2046 2045 2017 2015 1927 1896 1895 1894 1892 1874 1873 1872 1848 1847 1844 1837 1836 1830 1820 1767 1741 1740 1641 1590 1563 1556 1523 1522 1521 1496 1437 1428 1344 1341 1049

clicking on any of the RFCs puts up a summary of that RFC in the lower frame.

clicking on the ".txt" field in the RFC summary, retrieves the actual RFC

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

First DESKTOP Unix Box?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: First DESKTOP Unix Box?
Newsgroups: alt.folklore.computers
Date: Mon, 28 Jan 2002 15:45:18 GMT
ab528@FreeNet.Carleton.CA (Heinz W. Wiggeshoff) writes:
VM/370r6 is, too:

The VM shop I last worked closed in May 93. Was r6 pre or post that? Those version numbers get jumbled as time goes by. VM/CMS had just been given enhanced file structure beyond F-name, F-type, F-mode but I never got to use it.


VM/370 R6 was from the late '70s. It lived on a lot longer: it was chosen as the basis for the 3090 service processor ... first it was going to be a 4331 and then it was upgraded to a pair of 4361s (for redundancy). The development group doing the enhancements, applications and modifications to support the service processor functions was larger than the original vm/370 cp&cms development group.

field service required a boot-strappable diagnostic process done in the field starting with a "scope". The 3090 (& the 3081 before it with the uc.5 service processor) wasn't directly "field scope'able" ... while the 4361 was. The idea was field service could bootstrap diagnostics by scoping any problems with the 4361 (if necessary) ... and then use the 4361 service processor functions (which had lots of probes into all parts of the 3090) to diagnose the 3090. The original 4331 was upgraded to a pair of 4361s as service processors to make it even less likely that diagnosing the 4361 was required.

random refs:
https://www.garlic.com/~lynn/95.html#3 What is an IBM 137/148 ???
https://www.garlic.com/~lynn/96.html#41 IBM 4361 CPU technology
https://www.garlic.com/~lynn/99.html#7 IBM S/360
https://www.garlic.com/~lynn/99.html#61 Living legends
https://www.garlic.com/~lynn/99.html#62 Living legends
https://www.garlic.com/~lynn/99.html#63 System/1 ?
https://www.garlic.com/~lynn/99.html#106 IBM Mainframe Model Numbers--then and now?
https://www.garlic.com/~lynn/99.html#108 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
https://www.garlic.com/~lynn/99.html#110 OS/360 names and error codes (was: Humorous and/or Interesting Opcodes)
https://www.garlic.com/~lynn/99.html#112 OS/360 names and error codes (was: Humorous and/or Interesting Opcodes)
https://www.garlic.com/~lynn/99.html#239 IBM UC info
https://www.garlic.com/~lynn/2000b.html#50 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#51 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#66 oddly portable machines
https://www.garlic.com/~lynn/2000b.html#69 oddly portable machines
https://www.garlic.com/~lynn/2000b.html#79 "Database" term ok for plain files?
https://www.garlic.com/~lynn/2000c.html#76 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000c.html#83 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2001.html#54 FBA History Question (was: RE: What's the meaning of track overfl ow?)
https://www.garlic.com/~lynn/2001b.html#75 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001f.html#44 Golden Era of Compilers
https://www.garlic.com/~lynn/2001h.html#2 Alpha: an invitation to communicate
https://www.garlic.com/~lynn/2001j.html#13 Parity - why even or odd (was Re: Load Locked (was: IA64 running out of steam))
https://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
https://www.garlic.com/~lynn/2001m.html#17 3270 protocol
https://www.garlic.com/~lynn/2001n.html#9 NCP
https://www.garlic.com/~lynn/2002.html#45 VM and/or Linux under OS/390?????
https://www.garlic.com/~lynn/2002.html#48 Microcode?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Does it support "Journaling"?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Does it support "Journaling"?
Newsgroups: comp.arch.storage
Date: Mon, 28 Jan 2002 16:32:39 GMT
lindahl@pbm.com (Greg Lindahl) writes:
BTW, the biggest Linux distributions have shipped journaling filesystems for a short while (ext3 in RedHat), and most commercial Unixes have shipped journaling filesystems for many years.

I'll avoid speculating about your attitudes; is assuming simple stupidity better than malice?


the first (control or metadata) journaling unix filesystem (that i know of) was rs/6000 aix in the late '80s. it was implemented using rios database memory, where all the filesystem metadata was mapped into a region of memory that tracked memory changes at the 128(?)-byte line level. The filesystem then had "commit" operations inserted at appropriate places. The commit call involved a scan of the filesystem metadata memory region searching for "lines" of storage marked as changed. All the changed lines were gathered up and written to the log. Recovery consisted of reading the log and "rolling-forward" the actual filesystem metadata (and little things were fixed up, like making sure non-logged/committed metadata couldn't be written to disk).

A portable version of this implementation was done where explicit calls to a logging routine were inserted into all the places that modified metadata. There was a claim that, even on the same rs/6000, the explicit calls to the logging routine were a more efficient implementation than the scan of filesystem metadata memory at commit time.
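
a highly simplified sketch of the two styles (hypothetical names; the real rios change tracking was in hardware): the change-tracking version funnels metadata updates through one routine that marks dirty 128-byte lines for the commit scan; the portable version would log at each update instead:

LINE = 128

class MetaRegion:
    def __init__(self, size):
        self.mem = bytearray(size)
        self.dirty = set()        # line numbers changed since last commit

    def write(self, off, data):   # every metadata update goes through here
        self.mem[off:off+len(data)] = data
        for ln in range(off // LINE, (off + len(data) - 1) // LINE + 1):
            self.dirty.add(ln)    # the portable version would log here instead

    def commit(self, log):        # scan for changed lines, gather into the log
        for ln in sorted(self.dirty):
            log.append((ln, bytes(self.mem[ln*LINE:(ln+1)*LINE])))
        self.dirty.clear()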

in this case, "journaling filesystem" is somewhat of a misnomer.

the process is log & commit of filesystem metadata.

log records cycle ... when all cached data involved with the log records is known to be written to disk, the log records are made available for re-use. a log "file" tends to be relatively small ... continually re-using the same log records.

journals normally are long-term recordings of all changes. a database journal may be able to go back several days and determine who caused what changes. a log typically just fulfills the need to write to disk, in an efficient manner, the information necessary to maintain consistency.

A single logical change might involve changing several records on disk and it is impossible to reflect the change as a single atomic disk write.

A roll-forward log will write all the pending changes to the log before starting the writes of the actual records to disk. After all records have been successfully written to disk, some kind of marker goes into the log indicating that the operation was successful. A recovery operation is done by reading the log, finding all uncompleted changes, re-updating the corresponding records & writing them back to disk.

A roll-back log will write the unmodified version of the data being changed before beginning the actual writes of records to disk. Recovery then consists of updating the corresponding records involved in pending operations to return them to their unmodified state.
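
a minimal roll-forward sketch (invented record/log layout, just to show the ordering): log the new images first, update in place, then mark done; recovery re-applies any change set without a "done" marker:

def apply_change(disk, log, txid, updates):
    # updates: {record number: new contents}; one logical change, many records
    log.append(("intent", txid, dict(updates)))   # log first ...
    for rec, data in updates.items():             # ... then update in place
        disk[rec] = data                          # (a crash can land anywhere here)
    log.append(("done", txid))                    # marker: all writes completed

def recover(disk, log):
    done = {e[1] for e in log if e[0] == "done"}
    for e in log:
        if e[0] == "intent" and e[1] not in done:
            for rec, data in e[2].items():        # roll forward: re-apply
                disk[rec] = data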

A filesystem metadata log doesn't guarantee consistency of file or database information. A filesystem metadata log can guarantee the consistency of the filesystem metadata.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Does it support "Journaling"?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Does it support "Journaling"?
Newsgroups: comp.arch.storage
Date: Mon, 28 Jan 2002 16:39:09 GMT
Anne & Lynn Wheeler writes:
the first (control or metadata) journaling unix filesystem (that i know of) was rs/6000 aix in the late '80s. it was implemented using rios database memory where all the filesystem metadata was mapped into region of memory that tracked memory change at the 128(?)byte line level. Filesystem then had "commit" operations inserted at appropriate places. The commit call then involved scan of the

it is also what my wife and I depended on for HA/CMP (High Availability Cluster Multi-Processing) ... as part of fast recovery/take-over in various cluster configurations.

misc. ref:
https://www.garlic.com/~lynn/95.html#13
https://www.garlic.com/~lynn/subtopic.html#hacmp

you would be surprised at the people who are violent supporters now ... who at the time were in violent opposition to the idea that you could do such clustering and availability with non-mainframe components.

random refs:
https://www.garlic.com/~lynn/93.html#28 Log Structured filesystems -- think twice
https://www.garlic.com/~lynn/95.html#13 SSA
https://www.garlic.com/~lynn/96.html#15 tcp/ip
https://www.garlic.com/~lynn/96.html#31 Mainframes & Unix
https://www.garlic.com/~lynn/96.html#33 Mainframes & Unix
https://www.garlic.com/~lynn/96.html#34 Mainframes & Unix
https://www.garlic.com/~lynn/96.html#36 Mainframes & Unix (and TPF)
https://www.garlic.com/~lynn/97.html#5 360/44 (was Re: IBM 1130 (was Re: IBM 7090--used for business or
https://www.garlic.com/~lynn/98.html#35a Drive letters
https://www.garlic.com/~lynn/98.html#37 What is MVS/ESA?
https://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP?
https://www.garlic.com/~lynn/98.html#49 Edsger Dijkstra: the blackest week of his professional life
https://www.garlic.com/~lynn/98.html#57 Reliability and SMPs
https://www.garlic.com/~lynn/99.html#54 Fault Tolerance
https://www.garlic.com/~lynn/99.html#219 Study says buffer overflow is most common security bug
https://www.garlic.com/~lynn/2000b.html#45 OSA-Express Gigabit Ethernet card planning
https://www.garlic.com/~lynn/2000b.html#80 write rings
https://www.garlic.com/~lynn/2000b.html#85 Mainframe power failure (somehow morphed from Re: write rings)
https://www.garlic.com/~lynn/2000e.html#22 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000f.html#30 OT?
https://www.garlic.com/~lynn/2000f.html#31 OT?
https://www.garlic.com/~lynn/2000g.html#43 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
https://www.garlic.com/~lynn/2000g.html#47 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
https://www.garlic.com/~lynn/2001c.html#66 KI-10 vs. IBM at Rutgers
https://www.garlic.com/~lynn/2001c.html#69 Wheeler and Wheeler
https://www.garlic.com/~lynn/2001d.html#71 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001e.html#2 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001e.html#4 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001e.html#41 Where are IBM z390 SPECint2000 results?
https://www.garlic.com/~lynn/2001f.html#58 JFSes: are they really needed?
https://www.garlic.com/~lynn/2001f.html#59 JFSes: are they really needed?
https://www.garlic.com/~lynn/2001g.html#23 IA64 Rocks My World
https://www.garlic.com/~lynn/2001g.html#44 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001i.html#41 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001i.html#48 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001j.html#47 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#10 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001k.html#13 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001n.html#30 FreeBSD more secure than Linux
https://www.garlic.com/~lynn/2001n.html#71 Q: Buffer overflow
https://www.garlic.com/~lynn/2001n.html#83 CM-5 Thinking Machines, Supercomputers
https://www.garlic.com/~lynn/2002.html#20 Younger recruits versus experienced veterans ( was Re: The demise of compa
https://www.garlic.com/~lynn/2002.html#23 Buffer overflow
https://www.garlic.com/~lynn/2002.html#44 Calculating a Gigalapse

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

bzip2 vs gzip (was Re: PDP-10 Archive migration plan)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: bzip2 vs gzip (was Re: PDP-10 Archive migration plan)
Newsgroups: alt.sys.pdp10,alt.folklore.computers
Date: Mon, 28 Jan 2002 20:31:44 GMT
robert writes:
For MIME 'magic' to work, one must carry around some _meta-data_ describing the file content. In almost every environment I'm familiar with, said meta-data is external to the file. The exception being Apple's MacOS, where the meta-data is stored in the 'resource fork' of each file. And, quite frankly, "damnifiknow" _how_ a Mac-based ftp appliation (client _or_ server) handles the resource-fork info, particularly when talking to a _non-MacOS_ application.

one of the early things that CSC did was to continuously gather performance and load data, where the first record written to the file was the "metadata" description (names and field formats of the subsequent records). These files were archived to tape (or written directly to tape), and years later it was possible to do multiple-year analyses of various performance issues.
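
the idea in miniature (csv is just for illustration; the actual CSC files were obviously not csv): the first record describes the fields, so the data remains decodable years later without external metadata:

import csv, io

def write_perf(fobj, fields, rows):
    w = csv.writer(fobj)
    w.writerow(fields)            # record 1: the metadata description
    w.writerows(rows)             # then the load/performance samples

def read_perf(fobj):
    r = csv.reader(fobj)
    fields = next(r)              # decode using the embedded description
    return [dict(zip(fields, row)) for row in r]

buf = io.StringIO()
write_perf(buf, ["time", "cpu", "pagein"], [["08:00", "87", "150"]])
buf.seek(0)
print(read_perf(buf))   # [{'time': '08:00', 'cpu': '87', 'pagein': '150'}]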

I would claim that philosophy at least partially contributed to GML (tagged markup language) being invented subsequently at CSC in the late '60s (spawning the MLs, SGML, HTML, XML, etc).

misc. csc:
https://www.garlic.com/~lynn/subtopic.html#545tech
https://www.garlic.com/~lynn/subtopic.html#fairshare

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

windows XP and HAL: The CP/M way still works in 2002

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: windows XP and HAL: The CP/M way still works in 2002
Newsgroups: alt.folklore.computers
Date: Mon, 28 Jan 2002 23:56:24 GMT
mwojcik@newsguy.com (Michael Wojcik) writes:
There isn't any HAL-like layer for AIX on the RISC System/6000. AIX for the RT (AIX 2) ran atop a hardware virtualization layer called the VRM, as Lynn described in another message in this thread. AIX/370 and AIX/ESA ran under VM (could they run directly on the hardware, without being hosted by VM?). AIX 1 (for the PS/2) and AIX 3-5 (for the RS/6000) ran directly on the hardware.

aix/370 & aix/ps2 were locus ports to both platforms (supposedly being sort of the "SAA" for unix, aka network file system, with both partial and full local file caching, process migration, dissimilar architectures, etc).

aix/370 & aix/esa running primarily under VM was, I believe, a field service issue .... vm provided a lot of various kinds of error recovery, recording, and reports in the specific format required by field service people before they would support the hardware in the field. the issue being the cost/benefit of putting all that stuff into a unix platform vis-a-vis running under vm.

there were vestiges of the vrm lying around in various places in "AIX V3" (but in general, it was supposed to be all gone).

they are now touting 40,000-plus copies of linux running under vm on a single mainframe.

and the random references:
https://www.garlic.com/~lynn/99.html#2 IBM S/360
https://www.garlic.com/~lynn/99.html#63 System/1 ?
https://www.garlic.com/~lynn/99.html#64 Old naked woman ASCII art
https://www.garlic.com/~lynn/99.html#66 System/1 ?
https://www.garlic.com/~lynn/2000.html#64 distributed locking patents
https://www.garlic.com/~lynn/2000c.html#8 IBM Linux
https://www.garlic.com/~lynn/2000e.html#20 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#27 OCF, PC/SC and GOP
https://www.garlic.com/~lynn/2001.html#44 Options for Delivering Mainframe Reports to Outside Organizat ions
https://www.garlic.com/~lynn/2001.html#49 Options for Delivering Mainframe Reports to Outside Organizat ions
https://www.garlic.com/~lynn/2001f.html#20 VM-CMS emulator
https://www.garlic.com/~lynn/2001f.html#22 Early AIX including AIX/370
https://www.garlic.com/~lynn/2001i.html#21 3745 and SNI
https://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
https://www.garlic.com/~lynn/2001j.html#20 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001l.html#5 mainframe question
https://www.garlic.com/~lynn/2001l.html#8 mainframe question
https://www.garlic.com/~lynn/2001l.html#17 mainframe question
https://www.garlic.com/~lynn/2001l.html#50 What makes a mainframe?
https://www.garlic.com/~lynn/2001n.html#23 Alpha vs. Itanic: facts vs. FUD
https://www.garlic.com/~lynn/2002b.html#29 windows XP and HAL: The CP/M way still works in 2002

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Poor Man's clustering idea

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Poor Man's clustering idea
Newsgroups: comp.arch.storage
Date: Tue, 29 Jan 2002 03:26:42 GMT
danco writes:
When are Windows and the various UNIX OS'en out there going to implement real clustering where all hosts can simultaniously mount and transparently share all devices with no conflicts? VMS has had that since, what, 1983?

360s have had clusters since, what, the mid '60s (referred to as "loosely-coupled", to distinguish from "tightly-coupled" ... typically shared memory ... or "closely-coupled" ... various other kinds of specialized coupling hardware).

when my wife and I did ha/cmp we got lots of input from a couple of the database systems that also ran in vms clusters as to what not to do (she also did a stint in POK responsible for "loosely-coupled" ... cluster by any other name). about the same time as she was in pok, I worked on what was considered the largest "single-system-image" system in the world (a type of large cluster for its time, in the '70s).

a couple specific vax & ha/cmp cluster related posts
https://www.garlic.com/~lynn/2001e.html#2 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001e.html#4 Block oriented I/O over IP

random refs;
https://www.garlic.com/~lynn/93.html#4 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/93.html#5 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/93.html#21 Too much data on an actuator (was: 3.5 inch 9GB )
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/94.html#4 Schedulers
https://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
https://www.garlic.com/~lynn/94.html#15 cp disk story
https://www.garlic.com/~lynn/94.html#19 Dual-ported disks?
https://www.garlic.com/~lynn/94.html#23 CP spooling & programming technology
https://www.garlic.com/~lynn/94.html#31 High Speed Data Transport (HSDT)
https://www.garlic.com/~lynn/95.html#11a Crashproof Architecture
https://www.garlic.com/~lynn/95.html#13 SSA
https://www.garlic.com/~lynn/96.html#15 tcp/ip
https://www.garlic.com/~lynn/96.html#17 middle layer
https://www.garlic.com/~lynn/96.html#27 Mainframes & Unix
https://www.garlic.com/~lynn/96.html#29 Mainframes & Unix
https://www.garlic.com/~lynn/96.html#31 Mainframes & Unix
https://www.garlic.com/~lynn/96.html#33 Mainframes & Unix
https://www.garlic.com/~lynn/96.html#36 Mainframes & Unix (and TPF)
https://www.garlic.com/~lynn/97.html#4 Mythical beasts (was IBM... mainframe)
https://www.garlic.com/~lynn/97.html#14 Galaxies
https://www.garlic.com/~lynn/97.html#29 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/98.html#23 Fear of Multiprocessing?
https://www.garlic.com/~lynn/98.html#30 Drive letters
https://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP?
https://www.garlic.com/~lynn/98.html#49 Edsger Dijkstra: the blackest week of his professional life
https://www.garlic.com/~lynn/98.html#57 Reliability and SMPs
https://www.garlic.com/~lynn/98.html#58 Reliability and SMPs
https://www.garlic.com/~lynn/99.html#54 Fault Tolerance
https://www.garlic.com/~lynn/99.html#68 The Melissa Virus or War on Microsoft?
https://www.garlic.com/~lynn/99.html#71 High Availabilty on S/390
https://www.garlic.com/~lynn/99.html#128 Examples of non-relational databases
https://www.garlic.com/~lynn/99.html#152 Uptime (was Re: Q: S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#182 Clustering systems
https://www.garlic.com/~lynn/99.html#183 Clustering systems
https://www.garlic.com/~lynn/99.html#184 Clustering systems
https://www.garlic.com/~lynn/99.html#185 Clustering systems
https://www.garlic.com/~lynn/99.html#186 Clustering systems
https://www.garlic.com/~lynn/2000.html#31 Computer of the century
https://www.garlic.com/~lynn/2000.html#64 distributed locking patents
https://www.garlic.com/~lynn/2000.html#78 Mainframe operating systems
https://www.garlic.com/~lynn/2000c.html#9 Cache coherence [was Re: TF-1]
https://www.garlic.com/~lynn/2000c.html#14 thread scheduling and cache coherence traffic
https://www.garlic.com/~lynn/2000c.html#19 Hard disks, one year ago today
https://www.garlic.com/~lynn/2000c.html#56 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#68 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000e.html#7 Ridiculous
https://www.garlic.com/~lynn/2000e.html#8 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000e.html#9 Checkpointing (was spice on clusters)
https://www.garlic.com/~lynn/2000e.html#20 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#22 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000e.html#49 How did Oracle get started?
https://www.garlic.com/~lynn/2000g.html#27 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2000g.html#38 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
https://www.garlic.com/~lynn/2000g.html#43 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
https://www.garlic.com/~lynn/2001.html#26 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001.html#33 Where do the filesystem and RAID system belong?
https://www.garlic.com/~lynn/2001.html#36 Where do the filesystem and RAID system belong?
https://www.garlic.com/~lynn/2001.html#38 Competitors to SABRE?
https://www.garlic.com/~lynn/2001.html#40 Disk drive behavior
https://www.garlic.com/~lynn/2001b.html#11 Review of the Intel C/C++ compiler for Windows
https://www.garlic.com/~lynn/2001b.html#14 IBM's announcement on RVAs
https://www.garlic.com/~lynn/2001b.html#15 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001b.html#35 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001b.html#60 monterey's place in computing was: Kildall "flying" (was Re: First OS?)
https://www.garlic.com/~lynn/2001c.html#66 KI-10 vs. IBM at Rutgers
https://www.garlic.com/~lynn/2001c.html#69 Wheeler and Wheeler
https://www.garlic.com/~lynn/2001d.html#26 why the machine word size is in radix 8??
https://www.garlic.com/~lynn/2001d.html#70 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001d.html#71 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001e.html#2 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001e.html#4 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001e.html#5 SIMTICS
https://www.garlic.com/~lynn/2001e.html#14 Climate, US, Japan & supers query
https://www.garlic.com/~lynn/2001e.html#41 Where are IBM z390 SPECint2000 results?
https://www.garlic.com/~lynn/2001f.html#21 Theo Alkema
https://www.garlic.com/~lynn/2001f.html#39 Ancient computer humor - DEC WARS
https://www.garlic.com/~lynn/2001f.html#60 JFSes: are they really needed?
https://www.garlic.com/~lynn/2001g.html#43 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001g.html#44 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001g.html#46 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001g.html#49 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001g.html#52 Compaq kills Alpha
https://www.garlic.com/~lynn/2001h.html#26 TECO Critique
https://www.garlic.com/~lynn/2001h.html#59 Blinkenlights
https://www.garlic.com/~lynn/2001h.html#76 Other oddball IBM System 360's ?
https://www.garlic.com/~lynn/2001i.html#41 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001i.html#43 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001i.html#46 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001i.html#48 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001i.html#52 misc loosely-coupled, sysplex, cluster, supercomputer, & electronic commerce
https://www.garlic.com/~lynn/2001j.html#3 YKYGOW...
https://www.garlic.com/~lynn/2001j.html#4 I hate Compaq
https://www.garlic.com/~lynn/2001j.html#12 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
https://www.garlic.com/~lynn/2001j.html#20 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#43 Disaster Stories Needed
https://www.garlic.com/~lynn/2001j.html#47 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#5 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#18 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001k.html#43 Why is UNIX semi-immune to viral infection?
https://www.garlic.com/~lynn/2001k.html#54 DEC midnight requisition system
https://www.garlic.com/~lynn/2001k.html#60 Defrag in linux? - Newbie question
https://www.garlic.com/~lynn/2001k.html#65 SMP idea for the future
https://www.garlic.com/~lynn/2001m.html#15 departmental servers
https://www.garlic.com/~lynn/2001m.html#27 Internet like city w/o traffic rules, traffic signs, traffic lights and traffic enforcement
https://www.garlic.com/~lynn/2001m.html#56 Contiguous file system
https://www.garlic.com/~lynn/2001n.html#3 News IBM loses supercomputer crown
https://www.garlic.com/~lynn/2001n.html#17 CM-5 Thinking Machines, Supercomputers
https://www.garlic.com/~lynn/2001n.html#30 FreeBSD more secure than Linux
https://www.garlic.com/~lynn/2001n.html#50 The demise of compaq
https://www.garlic.com/~lynn/2001n.html#62 The demise of compaq
https://www.garlic.com/~lynn/2001n.html#70 CM-5 Thinking Machines, Supercomputers
https://www.garlic.com/~lynn/2001n.html#71 Q: Buffer overflow
https://www.garlic.com/~lynn/2001n.html#83 CM-5 Thinking Machines, Supercomputers
https://www.garlic.com/~lynn/2001n.html#85 The demise of compaq
https://www.garlic.com/~lynn/2001n.html#90 Buffer overflow
https://www.garlic.com/~lynn/2002.html#0 index searching
https://www.garlic.com/~lynn/2002.html#20 Younger recruits versus experienced veterans ( was Re: The demise of compa
https://www.garlic.com/~lynn/2002.html#24 Buffer overflow
https://www.garlic.com/~lynn/2002.html#25 ICMP Time Exceeded
https://www.garlic.com/~lynn/2002.html#26 Buffer overflow
https://www.garlic.com/~lynn/2002.html#32 Buffer overflow
https://www.garlic.com/~lynn/2002.html#44 Calculating a Gigalapse
https://www.garlic.com/~lynn/2002b.html#34 Does it support "Journaling"?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

"war-dialing" etymology?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "war-dialing" etymology?
Newsgroups: alt.folklore.computers,alt.2600
Date: Tue, 29 Jan 2002 16:22:16 GMT
Jim writes:
Not to mention that "War Games" brought us the auto-dial acoustic modem! :-)

and the scene in war games on the ferry was actually from steilacoom to anderson island (in puget sound) .... and that ferry has since been retrofitted as a tourist boat on lake washington out of kirkland (one of the "high" spots is by bill's place)
https://www.garlic.com/~lynn/2000d.html#39

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM 5100 [Was: First DESKTOP Unix Box?]

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 5100 [Was: First DESKTOP Unix Box?]
Newsgroups: alt.folklore.computers
Date: Tue, 29 Jan 2002 16:26:35 GMT
aw288@osfn.org (William Donzelli) writes:
Maybe I am a little more confused here, but I think you are starting to "put words in my mouth". All I said was that the 5100 was not emulating the S/360 architecture. It may, however, be used to communicate with a machine with an S/360 architecture. You can not expect to run S/360 code on a 5100. Maybe the 5100 APL is based on the S/360 APL, but that is about it.

an earlier version of the 5100 ran 1130/apl; the "palm" processor in the 5100 ran 360/apl.

random refs:
https://www.garlic.com/~lynn/2000.html#69 APL on PalmOS ???
https://www.garlic.com/~lynn/2000.html#70 APL on PalmOS ???
https://www.garlic.com/~lynn/2000g.html#24 A question for you old guys -- IBM 1130 information
https://www.garlic.com/~lynn/2000g.html#46 A new "Remember when?" period happening right now
https://www.garlic.com/~lynn/2001b.html#45 First OS?
https://www.garlic.com/~lynn/2001b.html#56 Why SMP at all anymore?
https://www.garlic.com/~lynn/2001b.html#71 Z/90, S/390, 370/ESA (slightly off topic)

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Poor Man's clustering idea

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Poor Man's clustering idea
Newsgroups: comp.arch.storage
Date: Tue, 29 Jan 2002 17:02:07 GMT
young_r@encompasserve.org (Rob Young) writes:
Just out of curiosity (as you sometimes cut and paste this list of random refs, and they make for good background) . . . but from the one above... I'm a bit puzzled. Did Al Gore also invent clustering? True clustering or fallover?

normal thread drift :-)

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

"war-dialing" etymology?

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "war-dialing" etymology?
Newsgroups: alt.folklore.computers,alt.2600
Date: Tue, 29 Jan 2002 22:25:14 GMT
cbh@ieya.co.REMOVE_THIS.uk (Chris Hedley) writes:
So Bill's place is next to the infamous Steilacoom nuthouse, then? Figures, I suppose...

steilacoom is on puget sound ... sort of west of ft. lewis & south of tacoma.

the ferry is now refurbished & running as a tourist boat on lake washington out of kirkland ... east of seattle

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Infiniband's impact was Re: Intel's 64-bit strategy

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband's impact was Re: Intel's 64-bit strategy
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 30 Jan 2002 17:52:12 GMT
Bernd Paysan writes:
Aaahh! Meetings are for people who can't handle their communication outside a meeting, where it could be handled much better. Meetings are for people who can't read, can't understand what other people tell them (so that they have to ask again and again), and for people who don't know how to proceed (so in a meeting, they could let someone else do that). Unlike a presentation (with a 1:n communication), meetings are mostly 1:1 communications on a shared bus. Since people who contribute least produce the highest traffic (questions+answers), meetings are inherently inefficient. The only way to top their inefficiency is if you promote the most efficient engineer in a group to supervisor, and lock him away from the actual development. Ideally, you make sure that his only communication mean to actual development is by long meeting hours. That dooms your project, and makes sure that a lot of money is sunk (and the more money you sink, the more likely you are promoted).

isn't there a dilbert thing on meetings; that they are for justifying jobs for people that otherwise don't have anything to do.

i remember hearing some time ago from a manager who observed ... that if they spent 90 percent of their time supporting the most productive member of the group ... then the productivity of the group doubled. however, the observation was that typical manager time is consumed with things having little to do with productivity, and with supporting the least productive members of the group (somewhat tending to bring group productivity down to the lowest common denominator).

I remember seeing some sort of calculation of group IQ ... somewhat related to productivity (including meetings):


objective:          sum(IQi), i=1,n            ... group IQ is equivalent to
                                                   the sum of the individual
                                                   member IQs

theoretically true  max(IQ1, IQ2, ..., IQn)    ... group activity is
                                                   proportional to the
                                                   brightest member

possible            sum(IQi)/n, i=1,n          ... group IQ is the avg. of
                                                   the individual IQs

more likely         max(IQ1, IQ2, ..., IQn)/n  ... group IQ is the max. of
                                                   the brightest, divided by
                                                   the number of members

observed            min(IQ1, IQ2, ..., IQn)/n  ... group IQ is the min. of
                                                   the least bright, divided
                                                   by the number of members

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM 5100 [Was: First DESKTOP Unix Box?]

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 5100 [Was: First DESKTOP Unix Box?]
Newsgroups: alt.folklore.computers
Date: Thu, 31 Jan 2002 16:11:32 GMT
aw288@osfn.org (William Donzelli) writes:
This is interesting. How much gets emulated? Obviously, not the whole thing, otherwise you could run S/360 code on the machine.

I don't know the details for the 5100/PALM ... but for 360 there is non-privileged application code, privileged code, and then all the I/O infrastructure (and implementing all the i/o infrastructure could be quite a large effort). Various "360" machines didn't even implement all non-privileged instructions ... like not implementing decimal instructions (and emulating them instead).

The
http://www.brouhaha.com/~eric/retrocomputing/ibm/5100/
page doesn't go into details of the 360 subset.

However, note that apl/360 included the apl environment support and interpreter as well as a small multi-tasker and "swapper" (to move apl workspaces into & out of memory). CSC did the work to create cms/apl (single user), just the environment & interpreter running under cms w/o need for the multi-tasker & swapper stuff (provided by cp). I don't have any of the details ... but the 5100 would have been much closer to the cms/apl subset than the original apl/360.

By comparison, the XT/370 "processor" implemented all the non-privileged instructions ... but only a subset of the privileged instructions (and none of the I/O infrastructure). It required a custom modified version of VM/370 ... which would do I/O via message passing to CP/88 running on the 8088.

SLAC had done a subset 360 ... a bit-slice computer with enuf non-privileged 360 instructions to execute a fortran application ... placing one at each of the data collection points for doing initial data reduction. I recollect it was referred to as something like the 168E, aka it would run the fortran application at 370/168 thruput.

random refs:
https://www.garlic.com/~lynn/94.html#42 bloat
https://www.garlic.com/~lynn/96.html#23 Old IBM's
https://www.garlic.com/~lynn/2000.html#5 IBM XT/370 and AT/370 (was Re: Computer of the century)
https://www.garlic.com/~lynn/2000.html#29 Operating systems, guest and actual
https://www.garlic.com/~lynn/2000.html#69 APL on PalmOS ???
https://www.garlic.com/~lynn/2000.html#70 APL on PalmOS ???
https://www.garlic.com/~lynn/2000.html#75 Mainframe operating systems
https://www.garlic.com/~lynn/2000e.html#52 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2000e.html#55 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2001b.html#69 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#89 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001d.html#54 VM & VSE news
https://www.garlic.com/~lynn/2001f.html#28 IBM's "VM for the PC" c.1984??
https://www.garlic.com/~lynn/2001g.html#53 S/370 PC board
https://www.garlic.com/~lynn/2001i.html#19 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001k.html#24 HP Compaq merger, here we go again.
https://www.garlic.com/~lynn/2001n.html#92 "blocking factors" (Was: Tapes)
https://www.garlic.com/~lynn/2002.html#4 Buffer overflow
https://www.garlic.com/~lynn/2002.html#11 The demise of compaq
https://www.garlic.com/~lynn/2002.html#52 Microcode?
https://www.garlic.com/~lynn/2002b.html#39 IBM 5100 [Was: First DESKTOP Unix Box?]

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PDP-10 Archive migration plan

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PDP-10 Archive migration plan
Newsgroups: alt.sys.pdp10,alt.folklore.computers
Date: Sat, 02 Feb 2002 19:55:05 GMT
Iain Hardcastle - ih writes:
You should check the Hercules guys, they've been talking of a similar thing for the IBM Mainframe emulator.

there's a yahoo group called hercules-390 or similar.


the whole virtual machine stuff has gone thru this cycle several times since the mid-60s, starting out with the CP/40 kernel. That was a minimum-function but tightly coded kernel with minimal real storage requirements. The port to CP/67 started bloat in the kernel, and the minimum real storage requirements started to balloon. I made parts of the CP/67 kernel pageable (while still an undergraduate, the summer I was doing the stuff for BCS) to cut down on its real storage requirements (it didn't actually eliminate the bloat, but it minimized the real storage requirements if those functions weren't being used).

In the port of the CP portion to 370 (as part of VM/370), the pageable kernel became part of the standard product. As part of the resource manager in the mid-70s, I did get out the kernel's ability to "page" a lot of the per-virtual-machine control datablocks (which were starting to consume large amounts of real storage in large multi-user configurations).

The VM/370 CP kernel went thru pretty good bloat all thru the 70s, 80s, and 90s. I had done one project trying to move significant function out of the base kernel into virtual address spaces (for instance the SFS activity, moving the majority of the unit record/spool emulation support out of the fixed kernel and into a virtual address space).

The other minimalist effort that sort of forked off ... was basically moving a stripped-down version of the CP function into the "base hardware" (and the service processor), called LPARs (logical partitions ... aka a simplified virtual machine subset). This is now found on all those mainframe machines ... and probably the vast majority of customers are now running their operation configured with at least some set of LPAR functions. LPARs, in effect, put a large subset of virtual machine emulation into the base machine microcode, with the configuration management for the LPAR (virtual machine) options in the machine's service processor.

Doing micro-kernels well is almost the opposite of current development efforts ... which are focused on constantly adding feature/function. A well done micro-kernel is a very strong KISS effort with strong conformance to architecture and development ground rules.

However, a well done micro-kernel is also frequently self-defeating. Individuals that subsequently work on adding feature/function find that the absolutely easiest way of adding something is some hack on a minimalist, KISS kernel (in part because it is frequently easier to understand and therefore modify). However, Q&D hack modifications to a minimalist, KISS kernel turn it into just another bloated operation.

While it may be counter-intuitive, it is frequently more difficult to do something in a simple (consistent, KISS) manner than it is to do something in an extremely complex manner.

random refs:
https://www.garlic.com/~lynn/93.html#0 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/93.html#1 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/93.html#2 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/93.html#18 location 50
https://www.garlic.com/~lynn/93.html#23 MTS & LLMPS?
https://www.garlic.com/~lynn/93.html#25 MTS & LLMPS?
https://www.garlic.com/~lynn/93.html#26 MTS & LLMPS?
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/94.html#0 Multitasking question
https://www.garlic.com/~lynn/94.html#1 Multitasking question
https://www.garlic.com/~lynn/94.html#2 Schedulers
https://www.garlic.com/~lynn/94.html#4 Schedulers
https://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
https://www.garlic.com/~lynn/94.html#12 360 "OS" & "TSS" assemblers
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
https://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14
https://www.garlic.com/~lynn/94.html#28 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#33a High Speed Data Transport (HSDT)
https://www.garlic.com/~lynn/94.html#37 SIE instruction (S/390)
https://www.garlic.com/~lynn/94.html#46 Rethinking Virtual Memory
https://www.garlic.com/~lynn/94.html#48 Rethinking Virtual Memory
https://www.garlic.com/~lynn/94.html#53 How Do the Old Mainframes
https://www.garlic.com/~lynn/94.html#54 How Do the Old Mainframes
https://www.garlic.com/~lynn/95.html#1 pathlengths
https://www.garlic.com/~lynn/95.html#3 What is an IBM 137/148 ???
https://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
https://www.garlic.com/~lynn/95.html#12 slot chaining
https://www.garlic.com/~lynn/96.html#9 cics
https://www.garlic.com/~lynn/96.html#12 IBM song
https://www.garlic.com/~lynn/96.html#41 IBM 4361 CPU technology
https://www.garlic.com/~lynn/97.html#10 HELP! Chronology of word-processing
https://www.garlic.com/~lynn/97.html#22 Pre S/360 IBM Operating Systems?
https://www.garlic.com/~lynn/97.html#28 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/98.html#2 CP-67 (was IBM 360 DOS (was Is Win95 without DOS...))
https://www.garlic.com/~lynn/98.html#3 CP-67 (was IBM 360 DOS (was Is Win95 without DOS...))
https://www.garlic.com/~lynn/98.html#11 S/360 operating systems geneaology
https://www.garlic.com/~lynn/98.html#12 S/360 operating systems geneaology
https://www.garlic.com/~lynn/98.html#14 S/360 operating systems geneaology
https://www.garlic.com/~lynn/98.html#17 S/360 operating systems geneaology
https://www.garlic.com/~lynn/98.html#19 S/360 operating systems geneaology
https://www.garlic.com/~lynn/98.html#21 Reviving the OS/360 thread (Questions about OS/360)
https://www.garlic.com/~lynn/98.html#28 Drive letters
https://www.garlic.com/~lynn/98.html#29 Drive letters
https://www.garlic.com/~lynn/98.html#32 Drive letters
https://www.garlic.com/~lynn/98.html#33 ... cics ... from posting from another list
https://www.garlic.com/~lynn/98.html#45 Why can't more CPUs virtualize themselves?
https://www.garlic.com/~lynn/98.html#46 The god old days(???)
https://www.garlic.com/~lynn/98.html#47 Multics and the PC
https://www.garlic.com/~lynn/98.html#52 Multics
https://www.garlic.com/~lynn/98.html#57 Reliability and SMPs
https://www.garlic.com/~lynn/99.html#4 IBM S/360
https://www.garlic.com/~lynn/99.html#7 IBM S/360
https://www.garlic.com/~lynn/99.html#9 IBM S/360
https://www.garlic.com/~lynn/99.html#10 IBM S/360
https://www.garlic.com/~lynn/99.html#18 Old Computers
https://www.garlic.com/~lynn/99.html#33 why is there an "@" key?
https://www.garlic.com/~lynn/99.html#34 why is there an "@" key?
https://www.garlic.com/~lynn/99.html#38 1968 release of APL\360 wanted
https://www.garlic.com/~lynn/99.html#44 Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#53 Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#61 Living legends
https://www.garlic.com/~lynn/99.html#62 Living legends
https://www.garlic.com/~lynn/99.html#85 Perfect Code
https://www.garlic.com/~lynn/99.html#86 1401 Wordmark?
https://www.garlic.com/~lynn/99.html#87 1401 Wordmark?
https://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/99.html#95 Early interupts on mainframes
https://www.garlic.com/~lynn/99.html#108 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
https://www.garlic.com/~lynn/99.html#122 Computer supersitions [was Re: Speaking of USB ( was Re: ASR 33 Typing Element)]
https://www.garlic.com/~lynn/99.html#135 sysprog shortage - what questions would you ask?
https://www.garlic.com/~lynn/99.html#139 OS/360 (and descendants) VM system?
https://www.garlic.com/~lynn/99.html#142 OS/360 (and descendants) VM system?
https://www.garlic.com/~lynn/99.html#149 OS/360 (and descendants) VM system?
https://www.garlic.com/~lynn/99.html#174 S/360 history
https://www.garlic.com/~lynn/99.html#209 Core (word usage) was anti-equipment etc
https://www.garlic.com/~lynn/99.html#237 I can't believe this newsgroup still exists
https://www.garlic.com/~lynn/2000.html#1 Computer of the century
https://www.garlic.com/~lynn/2000.html#8 Computer of the century
https://www.garlic.com/~lynn/2000.html#52 Correct usage of "Image" ???
https://www.garlic.com/~lynn/2000.html#63 Mainframe operating systems
https://www.garlic.com/~lynn/2000.html#68 Mainframe operating systems
https://www.garlic.com/~lynn/2000.html#75 Mainframe operating systems
https://www.garlic.com/~lynn/2000.html#81 Ux's good points.
https://www.garlic.com/~lynn/2000.html#82 Ux's good points.
https://www.garlic.com/~lynn/2000.html#86 Ux's good points.
https://www.garlic.com/~lynn/2000b.html#49 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#50 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#51 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#52 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#54 Multics dual-page-size scheme
https://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#77 write rings
https://www.garlic.com/~lynn/2000c.html#34 What level of computer is needed for a computer to Love?
https://www.garlic.com/~lynn/2000c.html#42 Domainatrix - the final word
https://www.garlic.com/~lynn/2000c.html#49 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#50 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#76 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000c.html#79 Unisys vs IBM mainframe comparisons
https://www.garlic.com/~lynn/2000e.html#0 What good and old text formatter are there ?
https://www.garlic.com/~lynn/2000e.html#15 internet preceeds Gore in office.
https://www.garlic.com/~lynn/2000e.html#16 First OS with 'User' concept?
https://www.garlic.com/~lynn/2000e.html#25 Test and Set: Which architectures have indivisible instructions?
https://www.garlic.com/~lynn/2000f.html#6 History of ASCII (was Re: Why Not! Why not???)
https://www.garlic.com/~lynn/2000f.html#30 OT?
https://www.garlic.com/~lynn/2000f.html#52 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#54 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000f.html#56 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#60 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000f.html#61 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000f.html#62 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000f.html#63 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#66 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#0 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#3 virtualizable 360, was TSS ancient history
https://www.garlic.com/~lynn/2000g.html#4 virtualizable 360, was TSS ancient history
https://www.garlic.com/~lynn/2000g.html#11 360/370 instruction cycle time
https://www.garlic.com/~lynn/2000g.html#16 360/370 instruction cycle time
https://www.garlic.com/~lynn/2000g.html#51 > 512 byte disk blocks (was: 4M pages are a bad idea)
https://www.garlic.com/~lynn/2001.html#3 First video terminal?
https://www.garlic.com/~lynn/2001.html#17 IBM 1142 reader/punch (Re: First video terminal?)
https://www.garlic.com/~lynn/2001b.html#18 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001b.html#21 First OS?
https://www.garlic.com/~lynn/2001b.html#23 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001b.html#29 z900 and Virtual Machine Theory
https://www.garlic.com/~lynn/2001b.html#32 z900 and Virtual Machine Theory
https://www.garlic.com/~lynn/2001b.html#35 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001b.html#41 First OS?
https://www.garlic.com/~lynn/2001b.html#45 First OS?
https://www.garlic.com/~lynn/2001b.html#50 IBM 705 computer manual
https://www.garlic.com/~lynn/2001b.html#55 IBM 705 computer manual
https://www.garlic.com/~lynn/2001b.html#72 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#2 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#49 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001c.html#53 Varian (was Re: UNIVAC - Help ??)
https://www.garlic.com/~lynn/2001c.html#88 Unix hard links
https://www.garlic.com/~lynn/2001e.html#5 SIMTICS
https://www.garlic.com/~lynn/2001e.html#6 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001e.html#7 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001e.html#15 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001e.html#20 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager
https://www.garlic.com/~lynn/2001e.html#53 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001e.html#57 line length (was Re: Babble from "JD" <dyson@jdyson.com>)
https://www.garlic.com/~lynn/2001e.html#71 line length (was Re: Babble from "JD" <dyson@jdyson.com>)
https://www.garlic.com/~lynn/2001f.html#2 Mysterious Prefixes
https://www.garlic.com/~lynn/2001f.html#9 Theo Alkema
https://www.garlic.com/~lynn/2001f.html#17 Accounting systems ... still in use? (Do we still share?)
https://www.garlic.com/~lynn/2001f.html#23 MERT Operating System & Microkernels
https://www.garlic.com/~lynn/2001f.html#26 Price of core memory
https://www.garlic.com/~lynn/2001f.html#47 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#48 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#56 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#57 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#78 HMC . . . does anyone out there like it ?
https://www.garlic.com/~lynn/2001g.html#22 Golden Era of Compilers
https://www.garlic.com/~lynn/2001g.html#29 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001g.html#52 Compaq kills Alpha
https://www.garlic.com/~lynn/2001h.html#2 Alpha: an invitation to communicate
https://www.garlic.com/~lynn/2001h.html#9 VM: checking some myths.
https://www.garlic.com/~lynn/2001h.html#12 checking some myths.
https://www.garlic.com/~lynn/2001h.html#17 IBM 9020 FAA/ATC Systems from 1960's
https://www.garlic.com/~lynn/2001h.html#19 checking some myths.
https://www.garlic.com/~lynn/2001h.html#26 TECO Critique
https://www.garlic.com/~lynn/2001h.html#33 D
https://www.garlic.com/~lynn/2001h.html#34 D
https://www.garlic.com/~lynn/2001h.html#46 Whom Do Programmers Admire Now???
https://www.garlic.com/~lynn/2001h.html#57 Whom Do Programmers Admire Now???
https://www.garlic.com/~lynn/2001h.html#60 Whom Do Programmers Admire Now???
https://www.garlic.com/~lynn/2001h.html#71 IBM 9020 FAA/ATC Systems from 1960's
https://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#34 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#37 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#38 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#39 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#43 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001i.html#44 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001i.html#55 Computer security: The Future
https://www.garlic.com/~lynn/2001j.html#4 I hate Compaq
https://www.garlic.com/~lynn/2001j.html#13 Parity - why even or odd (was Re: Load Locked (was: IA64 running out of steam))
https://www.garlic.com/~lynn/2001k.html#3 Minimalist design (was Re: Parity - why even or odd)
https://www.garlic.com/~lynn/2001k.html#29 HP Compaq merger, here we go again.
https://www.garlic.com/~lynn/2001k.html#37 Is anybody out there still writting BAL 370.
https://www.garlic.com/~lynn/2001l.html#5 mainframe question
https://www.garlic.com/~lynn/2001l.html#39 is this correct ? OS/360 became MVS and MVS >> OS/390
https://www.garlic.com/~lynn/2001l.html#43 QTAM (was: MVS History)
https://www.garlic.com/~lynn/2001m.html#15 departmental servers
https://www.garlic.com/~lynn/2001m.html#38 CMS under MVS
https://www.garlic.com/~lynn/2001m.html#47 TSS/360
https://www.garlic.com/~lynn/2001m.html#49 TSS/360
https://www.garlic.com/~lynn/2001m.html#53 TSS/360
https://www.garlic.com/~lynn/2001m.html#55 TSS/360
https://www.garlic.com/~lynn/2001n.html#11 OCO
https://www.garlic.com/~lynn/2001n.html#26 Open Architectures ?
https://www.garlic.com/~lynn/2001n.html#31 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2001n.html#32 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2001n.html#45 Valid reference on lunar mission data being unreadable?
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002.html#14 index searching
https://www.garlic.com/~lynn/2002.html#45 VM and/or Linux under OS/390?????
https://www.garlic.com/~lynn/2002.html#50 Microcode?
https://www.garlic.com/~lynn/2002b.html#6 Microcode?
https://www.garlic.com/~lynn/2002b.html#24 Infiniband's impact was Re: Intel's 64-bit strategy
https://www.garlic.com/~lynn/2002b.html#28 First DESKTOP Unix Box?
https://www.garlic.com/~lynn/2002b.html#29 windows XP and HAL: The CP/M way still works in 2002
https://www.garlic.com/~lynn/2002b.html#32 First DESKTOP Unix Box?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM 5100 [Was: First DESKTOP Unix Box?]

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 5100 [Was: First DESKTOP Unix Box?]
Newsgroups: alt.folklore.computers
Date: Sun, 03 Feb 2002 15:55:15 GMT
Tom Kinch writes:
There was a plug-in board that ran a "370 processor". i don't remember all that was involved but it supposedly ran BAL. i tried to find one to look at but never SAW one.

original was xt/370 ... see some references posted to this thread
https://www.garlic.com/~lynn/2002b.html#43

for an image, scroll down melinda's page to personal/370 & personal/390
https://www.leeandmelindavarian.com/Melinda/
https://www.leeandmelindavarian.com/Melinda#VMHist

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

... the need for a Museum of Computer Software

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ... the need for a Museum of Computer Software
Newsgroups: alt.folklore.computers
Date: Sun, 03 Feb 2002 16:35:06 GMT
jmfbahciv writes:
HTML is a PC user's answer to RUNOFF. It's only function is to output text in pictures.[1] Now, one of the nice things about this text organizer is the way it cross-references. I could have used that when "designing" how we organized the bits that produced DEC's documentation.

stu madnick did script for cms using "dot" commands (ala ctss runoff) in the mid-60s at csc. It was used for all the cp/67 & cms documentation ... and later the 360 principles of operation used it. "G", "M", & "L" (primarily "G") at CSC did the GML addition to script in late '69 or '70 (script supporting both "dot" command mode and gml tag mode). GML (aka generalized markup language) was actually a play on the initials of the three people that originated & developed it.

By the mid-'70s a large percentage of company documentation was being done in gml (somebody in a bit.listserv.ibm-main posting observed that during the '70s & '80s the company was the second largest publisher after the US gov.).

gml eventually begat sgml (the iso standard, standard generalized markup language). original gml tags were of the form ":p." which became "<p>" in sgml.
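
for illustration only (just the tag shapes, not a complete document in either language):

:p.The quick brown fox.          <- gml tag form
<p>The quick brown fox.</p>      <- sgml form, with an explicit end tag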

large vm/cms (& script) installations starting in the '70s were both SLAC and their "sister" location CERN (origin of HTML, another markup language).

There was also a port of SCRIPT to the PC (script/pc?) done in the very early '80s, I believe by an LA-area "SE", that shipped on at least Tandy computers.

sometime in the '70s univ. of waterloo (another large vm/cms installation) came out with "waterloo script".

sample "dot" script (actually "easy" script, note ".ez on"; the "ampersand" commands) ... memmo from Ted Johnston at SLAC to members of VM Baybunch


.ez on
.ce VM Baybunch User Survey (draft)
.sk 2
&p.VM Baybunch is an informal organization, whose purpose is
to act as a technical information exchange and
information distribution forum for VM related material. Baybunch
holds monthly meetings, scheduling technical presentations,
technical discussions, and various talks that are of interest
to the San Francisco Bay area VM systems programmers.
&p.In order to better meet the needs of the bay area VM
technical community, we are interested in obtaining:
&b.member profiles
&b.organization profiles that members belong to
&b.Feedback on past presentations and meeting formats
&b.Suggestions on ways to improve Baybunch
&b.Specific topics and/or presentations that
members may be interested in seeing (or, especially, giving)
in the future.
&P.Please return an attachment with as much of the
above information as possible to Ted Johnston.
.sk 4
.fo off

from Melinda's paper VM and the VM Community: Past, Present, and Future
https://www.leeandmelindavarian.com/Melinda/
https://www.leeandmelindavarian.com/Melinda#VMHist
Another key participant was a 21-year-old MIT student named Stu Madnick, who began working on CMS in June of 1966. His first project was to continue where Brennan had left off with the file system. Drawing upon his own knowledge of the CTSS and Multics file systems, Stu extended the design of the file system and got it up and running. He continued working part-time during the following school year and added several other important functions to CMS, including the first EXEC processor, which was originally called the COMMAND command. He had written a SNOBOL compiler for S/360, so he got that working under CMS, too. He needed a word processor to use to prepare papers for his courses, so he wrote Script, which was inspired by the CTSS Runoff program. Stu had been told that Dick Bayles (whom everybody acknowledged to be a brilliant programmer) had written the CMS editor in a week, so Stu wrote Script in a week. In 1968, he designed a new file system for CMS that anticipated important features of the UNIX file system, but that was never implemented. Stu was to continue working on CMS until 1972, when he finished school and had to get a real job. He is now a professor at MIT.

madnick, brennan, etc. refs:
https://www.garlic.com/~lynn/99.html#91 Documentation query
https://www.garlic.com/~lynn/2000e.html#0 What good and old text formatter are there ?
https://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001b.html#50 IBM 705 computer manual
https://www.garlic.com/~lynn/2001g.html#54 DSRunoff; was Re: TECO Critique
https://www.garlic.com/~lynn/2001n.html#67 Hercules etc. IBM not just missing a great opportunity...

misc sgml refs:
https://www.garlic.com/~lynn/94.html#43 Bloat, elegance, simplicity and other irrelevant concepts
https://www.garlic.com/~lynn/94.html#55 How Do the Old Mainframes Compare to Today's Micros?
https://www.garlic.com/~lynn/96.html#24 old manuals
https://www.garlic.com/~lynn/97.html#9 HELP! Chronology of word-processing
https://www.garlic.com/~lynn/97.html#26 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/98.html#16 S/360 operating systems geneaology
https://www.garlic.com/~lynn/98.html#21 Reviving the OS/360 thread (Questions about OS/360)
https://www.garlic.com/~lynn/99.html#42 Enter fonts (was Re: Unix case-sensitivity: how did it originate?
https://www.garlic.com/~lynn/99.html#43 Enter fonts (was Re: Unix case-sensitivity: how did it originate?
https://www.garlic.com/~lynn/99.html#91 Documentation query
https://www.garlic.com/~lynn/99.html#197 Computing As She Really Is. Was: Re: Life-Advancing Work of Timothy Berners-Lee
https://www.garlic.com/~lynn/2000.html#8 Computer of the century
https://www.garlic.com/~lynn/2000.html#34 IBM 360 Manuals on line ?
https://www.garlic.com/~lynn/2000c.html#30 internal corporate network, misc.
https://www.garlic.com/~lynn/2000e.html#0 What good and old text formatter are there ?
https://www.garlic.com/~lynn/2000e.html#1 What good and old text formatter are there ?
https://www.garlic.com/~lynn/2000e.html#23 Is Tim Berners-Lee the inventor of the web?
https://www.garlic.com/~lynn/2001b.html#50 IBM 705 computer manual
https://www.garlic.com/~lynn/2001c.html#88 Unix hard links
https://www.garlic.com/~lynn/2001d.html#42 IBM was/is: Imitation...
https://www.garlic.com/~lynn/2001e.html#73 CS instruction, when introducted ?
https://www.garlic.com/~lynn/2001g.html#54 DSRunoff; was Re: TECO Critique
https://www.garlic.com/~lynn/2001h.html#9 VM: checking some myths.
https://www.garlic.com/~lynn/2001h.html#34 D
https://www.garlic.com/~lynn/2001h.html#46 Whom Do Programmers Admire Now???
https://www.garlic.com/~lynn/2001i.html#1 History of Microsoft Word (and wordprocessing in general)
https://www.garlic.com/~lynn/2001i.html#39 IBM OS Timeline?
https://www.garlic.com/~lynn/2001l.html#16 Disappointed
https://www.garlic.com/~lynn/2001l.html#20 mainframe question
https://www.garlic.com/~lynn/2001l.html#24 mainframe question
https://www.garlic.com/~lynn/2001m.html#43 FA: Early IBM Software and Reference Manuals
https://www.garlic.com/~lynn/2001n.html#36 Movies with source code (was Re: Movies with DEC minis)
https://www.garlic.com/~lynn/2001n.html#37 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2002.html#53 School Help
https://www.garlic.com/~lynn/2002b.html#35 bzip2 vs gzip (was Re: PDP-10 Archive migration plan)

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM 5100 [Was: First DESKTOP Unix Box?]

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 5100 [Was: First DESKTOP Unix Box?]
Newsgroups: alt.folklore.computers
Date: Sun, 03 Feb 2002 17:13:32 GMT
Tom Kinch writes:
There was a plug-in board that ran a "370 processor". i don't remember all that was involved but it supposedly ran BAL. i tried to find one to look at but never SAW one.

this was just forwarded to me ... it is a spoof with a large number of pictures from the past in the galleries
http://www.lindkvist.com/digitaldataporn/xxx.html

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

... the need for a Museum of Computer Software

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ... the need for a Museum of Computer Software
Newsgroups: alt.folklore.computers
Date: Mon, 04 Feb 2002 15:57:15 GMT
Anne & Lynn Wheeler writes:
... and later the 360 principles of operation used it. "G", "M", & "L" (primarily "G") at CSC did the GML addition to script in late '69 or '70 (script supporting both "dot" command mode and gml tag mode). GML (aka generalized markup language) was actually a play on the initials of the three people that originated & developed it.

one of the reasons that the mainframe "principles of operation" was one of the first pubs that went to script was the conditional feature. The POP was actually a subset of the larger "architecture" document (the "red book", distributed in red 3-ring binders). Setting a script parameter controlled whether the whole document or just the POP subset was printed.

The red book had a bunch of engineering discussions, some model-dependent issue discussions, and, for newer instructions, justifications for why they were included. Intermixed with all of this were the actual sections that appeared in the published POP.

It also included unannounced instructions. For instance, 370 hardware virtual address relocate had a half dozen or so instructions ... but when 370 relocate was announced, only a couple of the instructions actually made it. The original 370 hardware relocate had RRB, PTLB, IPTE, ISTE, & ISTO; only RRB & PTLB showed up in the product announcement.

RRB  ... reset reference bit
PTLB ... purge (hardware) look-aside table
IPTE ... (selective) invalidate page table entry
ISTE ... (selective) invalidate segment table entry
ISTO ... (selective) invalidate segment table origin

PTLB just cleared the hardware look-aside table. ISTO selectively cleared the look-aside of all entries for a specific virtual address space. IPTE & ISTE turned on the corresponding invalid bit in the table entry in real memory and also cleared the look-aside of the related entry(s). The selective invalidates were also defined to operate consistently across all look-aside tables in a multiprocessor configuration.
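
a rough sketch (mine; C standing in for hardware, with a purely illustrative look-aside layout) of the difference between the non-selective and selective forms:

#include <string.h>

#define TLB_SIZE 64

struct pte { unsigned frame; unsigned invalid; };

/* the look-aside buffer caches recent translations; here just
   pointers to the page table entries in "real memory" */
static struct pte *tlb[TLB_SIZE];

static void ptlb(void)              /* PTLB: purge the entire look-aside */
{
    memset(tlb, 0, sizeof tlb);
}

static void ipte(struct pte *p)     /* IPTE: selective invalidate */
{
    p->invalid = 1;                 /* set the invalid bit in real memory */
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i] == p)
            tlb[i] = 0;             /* drop only the related cached entry(s) */
}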

The "red book" was something of a dynamic, evolving document picking up instructions and justification discussions along with possible engineering impacts for specific announced and unannounced hardware.

When CSC was trying to get compare&swap into the 370 architecture (primarily charlie ... since compare&swap was the mnemonic chosen for charlie's initials, CAS), we were told that a better multiprocessor instruction was insufficient justification to get it added to the hardware instructions; a uniprocessor justification had to be made for the instruction. That was when CSC came up with the whole C&S methodology & programming notes section on how to manage lists & other data elements in multi-threaded, application-level code (enabled for interrupts). Specific threads could get interrupted and the processor(s) could pretty arbitrarily switch thread execution. This was the whole idea of application-level, thread-safe execution, whether running in a single-processor or multiprocessor configuration.
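
the same idea survives today; a minimal sketch (mine, using C11 atomics rather than 370 assembler) of the list-insert pattern from those programming notes ... read the shared anchor, prepare the update, compare&swap, and retry if some other thread got in between:

#include <stdatomic.h>
#include <stddef.h>

struct node {
    struct node *next;
    int payload;
};

/* anchor of a list shared by multiple threads */
static _Atomic(struct node *) head = NULL;

void push(struct node *n)
{
    struct node *old = atomic_load(&head);
    do {
        n->next = old;
        /* fails (and refreshes old) if head changed since we read it */
    } while (!atomic_compare_exchange_weak(&head, &old, n));
}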

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Grieving and loss was Re: Infiniband's impact was Re: Intel's 64-bit strategy

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Grieving and loss was Re: Infiniband's impact was Re: Intel's 64-bit  strategy
Newsgroups: alt.folklore.computers
Date: Mon, 04 Feb 2002 19:25:10 GMT
Charles Richmond writes:
A foreign friend once told me that in America, the important thing is not getting something for yourself, but keeping someone else from having it. Aparently many of your cow-orkers believe that this is the way to go...

sometimes seen in large enterprises ... somebody will execute a contract with an outside entity ... even when there are superior internal resources. The use of external entities minimizes the sharing of (internal) corporate power with regard to projects (an external resource isn't able to claim any credit within the internal corporate structure).

another attribute during some of the huge corporate growth in the 60s, 70s, & 80s ... was never making a mistake. There is some proverb that the only way to never make a mistake is to never do anything.

in several situations, the corporate growth explosion happened regardless (or in spite) of individual or even group effort. as a result it became difficult to correlate success with competency. Lacking any method to measure competency ... corporate culture would fall back to things like never having made a mistake & being a really good team player as the basis for rewarding and promoting people.

slightly related refs:
https://www.garlic.com/~lynn/94.html#44 bloat
https://www.garlic.com/~lynn/96.html#20 1401 series emulation still running?
https://www.garlic.com/~lynn/99.html#231 Why couldn't others compete against IBM?
https://www.garlic.com/~lynn/2001j.html#33 Big black helicopters

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Wylbur?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Wylbur?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 04 Feb 2002 19:29:17 GMT
cross-posted to alt.folklore.computers where there have been some Wylbur discussions.

Duane Weaver writes:
I am wanting to make contact with any sites, particularly other universities, that converted their Wylbur users back over to TSO in the past few yrs. I am interested in knowing what problems you encountered in converting and teaching your users about TSO?

I would appreciate you contacting me directly at weaver.15@osu.edu.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

"Have to make your bones" mentality

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "Have to make your bones" mentality
Newsgroups: sci.crypt
Date: Mon, 04 Feb 2002 20:58:18 GMT
"Tom St Denis" writes:
I know someone in linguistics at the university of Ottawa. According to her she has to study english and french [as well as other courses].

Linguistics is the study of language.

Its not limited to broad topics. Studying one specific language is as much Linguistics as studying one branch of Math or form of music in Art.


i know somebody that got a stanford PhD ... joint between linguistics and computer AI ... by spending 9 months in the back of my office observing how I communicated (face-to-face, telephone, going with me to meetings, etc) as well as studying copies of all of my email, instant messages, mailing lists, etc. during that 9-month period. Besides a thesis, there was quite a bit comparing/contrasting face-to-face with various non-face-to-face (verbal, written, etc) communication.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

... the need for a Museum of Computer Software

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ... the need for a Museum of Computer Software
Newsgroups: alt.folklore.computers
Date: Thu, 07 Feb 2002 13:23:19 GMT
jmfbahciv writes:
Those posts were after I objected. I don't know how to describe the problem I see. Have you ever installed an operating system from scratch on bare bones hardware?

from cards

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Computer Naming Conventions

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computer Naming Conventions
Newsgroups: alt.folklore.computers
Date: Mon, 11 Feb 2002 19:10:38 GMT
eugene@cse.ucsc.edu (Eugene Miya) writes:
But now with clusters (both tight like IBM Sps and loose like SETI@home) you will get even bigger address space problems. UUCP had the advantage of lacking really serious routing tables (you only had to know your neighbors): inelegant, but workable brute force. NCP was limited, I think to 64 nodes (was that right? that long ago). But Metcalfe at PARC was thinking of internets and 100s of machines.

not only NCP ... but also HASP/JES networking had node limitations ... which was one reason that the vnet internal networking implementation was so successful ... aka the internal network was larger than arpanet until after the big change-over to IP circa 1/1/83 (say, until sometime in '85).

vnet didn't have a network node limitation ... and it effectively had a flavor of "gateway" support from the beginning ... allowing HASP/JES nodes to be attached on the boundary thru a gateway.

The problem with the HASP/JES implementation ... slightly better than NCP ... was that it used the 255-entry table for virtual HASP devices. After the definition of 30-80 virtual UR devices ... there were maybe 180-200 positions left for defining network nodes.

The NJE support had other problems ... it discarded outgoing email when it didn't know the destination ... but it also discarded incoming email when it didn't recognize the origin. The header bit handling was also very fragile ... upgrading one NJE node to a new release could start spewing networking packets ... that would not only bring down networking software at other nodes ... but whole operating systems. One of the other reasons that straight NJE nodes were kept on the boundaries ... was that canonical NJE translators could be put in the intermediate VNET gateways ... so that packets going to specific NJE nodes would have their headers formatted in a manner those nodes could tolerate.

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Computer Naming Conventions

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computer Naming Conventions
Newsgroups: alt.folklore.computers
Date: Wed, 13 Feb 2002 03:43:09 GMT
eugene@cse.ucsc.edu (Eugene Miya) writes:
Yeah, but the BIG difference Lynn, was the chronology (the time). In 1969 your employer turned those guys down saying that the ARPAnet would not work.

Similarly, I was somewhat heartened to see Bitnet break away from the monolith view of SNA. On the other hand, their attempts at revisionism are laughable were it not that so many people believe them.


SNA was towards the mid-'70s ... it involved "NCP" (a different NCP ... PU4 running on the 3705) and VTAM (or SSCP, PU5 running on the mainframe). pre-NCP ... the 3705 had a straight line controller.

There is some contention that much of the PU4/PU5 was a reaction to the work I was involved in as an undergraduate, where we built our own IBM controller (and we get blamed for originating the PCM controller business). random refs:
https://www.garlic.com/~lynn/submain.html#360pcm

origins of vnet predate SNA ...

there is also the story of somebody in corporate hdqtrs who, when getting a presentation of the internal network in the mid-70s ... claimed that it was impossible (not that it wouldn't work, but that it was actually impossible). The story goes that the people in corporate hdqtrs "knew" that a fully distributed, asynchronously operated network (aka vnet, as opposed to a centrally controlled infrastructure ala SNA) would have taken 3-4 orders of magnitude more resources to develop than the company had in total assigned to telecommunication-related activities. Since these people were in corporate hdqtrs they had access to all significant project and budget activities, and there were no such unaccounted-for resources. The concept that effectively a single person could have pretty much pulled the whole thing off was incomprehensible to these people in corporate hdqtrs.

From the standpoint of these people ... the probability of arpanet working (given its significantly larger budget and other resources) was actually significantly higher than the probability of the internal network existing at all (much less being larger than arpanet) ... the architecture, design, code, & test primarily being the work of a single person.

my wife also created a turf war ... for a while she was part of the overall architecture group and had written an architecture for peer-to-peer, fully distributed operation ... which caused lots of ruffled feathers with the "SNA" group (aka really a centrally controlled telecommunication application for centrally managing large numbers of terminals). SNA didn't even include the concept of a network layer/function ... so the use of the "N" was quite a misnomer ... in fact the same could be said of the "S" and the "A".

She then got responsibility in POK for loosely-coupled architecture (aka cluster) ... where she originated Peer-Coupled Shared Data (effectively the basis for ims hot-standby and now sysplex, misc. other stuff). The turf war then turned into the "distance" over which her stuff could apply .... SNA got everything over "X" thousand feet ... and "local" peer-to-peer could only be applied to things under "X" thousand feet. The battle then was over the value of "X" (even attempts to reduce it to a fraction of X).

Another instance of the SNA turf war was the announcement of APPN ... which had an actual network layer. The SNA group non-concurred with APPN even being announced or shipped. After several weeks of executive positioning ... APPN was finally announced ... but with very careful wording so that there would be no implication that it was in any way associated with SNA (avoiding contamination of mainframe products with either peer-to-peer or actual networking).

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

"Fair Share" scheduling

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "Fair Share" scheduling
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 13 Feb 2002 20:07:57 GMT
JMckown@HEALTHAXIS.COM (McKown, John) writes:
Well, I know this is impossible. But we are beginning to have "fights" over running jobs. (luckily we get a 4th processor in 2 weeks, hope that helps). But what is wanted is a "Fair Share" scheduling by everybody. There are two parts. If a person has submitted 4 jobs (say all compiles), and one of them is running, and a second person submits a job (also a compile), the second person feels that it would be "fair" if the system would run his job before running the first person's second job. Another would be if person #1 has two job running while person #2 only has one, person #2 thinks it would be "fair" if his single job got as much "resource" as person #1's two jobs

when I did fairshare in the late '60s (and ibm re-released it in the resource manager in the mid-70s) ... the actual implementation was goal-oriented scheduling ... where fairshare was just one possible policy (it tended to be the default if no other policy was specified)

the big difference in the "fair share" work ... wasn't so much the specific policy but being able to implement a consistent goal across all environments and loads (most scheduling prior to this only looked at a small subset of resource issues ... and didn't methodically go thru all parts of the system fixing all the code decisions that had possible impact on the way resources were allocated ... making it consistent). The other aspect was to do it in what appeared to be almost zero pathlength.
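
a minimal sketch (mine, not the actual resource manager code) of fairshare as just one policy inside goal-oriented scheduling ... each runnable task is ordered by resource consumed relative to its policy target, so changing policy only changes the target, not the dispatcher:

#include <stddef.h>

struct task {
    double used;     /* resource consumed over the recent interval */
    double target;   /* policy goal; fairshare default: 1.0/nrunnable */
};

/* dispatch the task that is furthest behind its goal */
static struct task *pick_next(struct task *tasks, size_t n)
{
    struct task *best = NULL;
    for (size_t i = 0; i < n; i++)
        if (best == NULL || tasks[i].used / tasks[i].target
                          < best->used / best->target)
            best = &tasks[i];
    return best;
}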

Shortly after the (re)release of the resource manager in the mid-70s, I did a version expanding to groups & collections (with respect to the above) and tried to get some number of data centers interested. The resistance that I ran into was that the data processing manager would then have to get directly involved in resource policy arbitration between groups of their customers ... something they wanted to avoid. Having things slightly more open & free avoided putting the data processing manager on the spot with regard to strict resource policy arbitration between conflicting interests in their user communities (somewhat analogous to some of the insurance policies with regard to "acts of god" ... if i can't control it, i can't be blamed).

random ref:
https://www.garlic.com/~lynn/subtopic.html#fairshare

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Computer Naming Conventions

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computer Naming Conventions
Newsgroups: alt.folklore.computers
Date: Wed, 13 Feb 2002 20:29:22 GMT
eugene@cse.ucsc.edu (Eugene Miya) writes:
Yeah, but the BIG difference Lynn, was the chronology (the time). In 1969 your employer turned those guys down saying that the ARPAnet would not work.

Similarly, I was somewhat heartened to see Bitnet break away from the monolith view of SNA. On the other hand, their attempts at revisionism are laughable were it not that so many people believe them.

vnet didn't have a network node limitation ... and it effectively had a flavor of "gateway" support from the beginning ... allowing HASP/JES nodes to be attached on the boundary thru a gateway.

I think the most interesting email demo I had in San Jose (before Almaden) was watching the ACK receipts of an email to Zurich, believing it was being queued in NY before hopping the Atlantic.


possible dup ... a post done after this one has already shown up, but this one appears to have been lost. as an aside, in '69 I was still an undergraduate.

...

also in terms of resources, the arpanet/ncp/imp operation had significant implementation and deployment resources behind it

by comparison, vnet used standard (mostly bisync) line-scanners (not all the sna gorp), and the majority was done by a single person.

the ncp/imp used all 56kbit links ... almost all of the vnet internal network links were 9.6kbit (except for stuff that operated between machines within the same datacenter). I had been told by a number of people circa '78 that a significant portion (over half?) of the 56kbit ncp/imp traffic was starting to be the traffic & load balancing tables being exchanged among all the IMPs. I never saw any actual data ... but it wasn't uncommon to find resource management algorithms in the '60s & '70s scaling very poorly (aka a theoretical full mesh, with every node exchanging traffic & load balancing data with every other node, so management traffic quickly grew geometrically).
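
as a purely illustrative calculation (my numbers, not measured imp data): with n nodes each exchanging tables with every other node, that is n*(n-1) exchanges per update cycle ... 10 nodes is 90, 50 nodes is 2,450, 100 nodes is 9,900 ... i.e. management traffic grows roughly with the square of the node count while per-link capacity stays fixed.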

vnet had a couple issues with the 9.6kbit links. the standard mainframe i/o interface is purely half-duplex ... either writing or reading ... but not both. eventually T.G. in rochester came up with a "y-cable" and modified a line-driver to be dual-simplex ... aka a pair of channel addresses (I/O ports) were used for each link, one dedicated to reading & one dedicated to writing (to achieve full-duplex emulation).

another issue with the internal network was a corporate standard that required all transmissions leaving the boundaries of a facility/site to be encrypted. not only was the internal network larger than arpanet ... but I was told that it may have had well over half of the world's link encryptors installed.

random NCP/IMP & NCP/3705 references (NCP in 3705 terms was a misnomer since there wasn't an actual "network" layer ... just large-scale, centralized telecommunication management):
https://www.garlic.com/~lynn/94.html#8 scheduling & dynamic adaptive ... long posting warning
https://www.garlic.com/~lynn/94.html#33a High Speed Data Transport (HSDT)
https://www.garlic.com/~lynn/94.html#52 Measuring Virtual Memory
https://www.garlic.com/~lynn/97.html#15 OSes commerical, history
https://www.garlic.com/~lynn/99.html#37b Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#66 System/1 ?
https://www.garlic.com/~lynn/99.html#67 System/1 ?
https://www.garlic.com/~lynn/99.html#70 Series/1 as NCP (was: Re: System/1 ?)
https://www.garlic.com/~lynn/99.html#106 IBM Mainframe Model Numbers--then and now?
https://www.garlic.com/~lynn/99.html#189 Internet Credit Card Security
https://www.garlic.com/~lynn/99.html#209 Core (word usage) was anti-equipment etc
https://www.garlic.com/~lynn/99.html#239 IBM UC info
https://www.garlic.com/~lynn/2000.html#3 Computer of the century
https://www.garlic.com/~lynn/2000.html#50 APPC vs TCP/IP
https://www.garlic.com/~lynn/2000.html#51 APPC vs TCP/IP
https://www.garlic.com/~lynn/2000.html#53 APPC vs TCP/IP
https://www.garlic.com/~lynn/2000.html#67 Difference between NCP and TCP/IP protocols
https://www.garlic.com/~lynn/2000.html#72 Difference between NCP and TCP/IP protocols
https://www.garlic.com/~lynn/2000.html#74 Difference between NCP and TCP/IP protocols
https://www.garlic.com/~lynn/2000.html#90 Ux's good points.
https://www.garlic.com/~lynn/2000b.html#4 "Mainframe" Usage
https://www.garlic.com/~lynn/2000b.html#29 20th March 2000
https://www.garlic.com/~lynn/2000b.html#78 "Database" term ok for plain files?
https://www.garlic.com/~lynn/2000b.html#79 "Database" term ok for plain files?
https://www.garlic.com/~lynn/2000b.html#89 "Database" term ok for plain files?
https://www.garlic.com/~lynn/2000c.html#45 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#47 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#48 WHAT IS A MAINFRAME???
https://www.garlic.com/~lynn/2000c.html#51 WHAT IS A MAINFRAME???
https://www.garlic.com/~lynn/2000c.html#54 WHAT IS A MAINFRAME???
https://www.garlic.com/~lynn/2001.html#4 Sv: First video terminal?
https://www.garlic.com/~lynn/2001b.html#49 PC Keyboard Relics
https://www.garlic.com/~lynn/2001b.html#75 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001b.html#81 36-bit MIME types, PDP-10 FTP
https://www.garlic.com/~lynn/2001d.html#38 Flash and Content address memory
https://www.garlic.com/~lynn/2001e.html#8 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001e.html#53 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001f.html#49 any 70's era supercomputers that ran as slow as today's supercompu
https://www.garlic.com/~lynn/2001g.html#49 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001i.html#7 YKYGOW...
https://www.garlic.com/~lynn/2001i.html#21 3745 and SNI
https://www.garlic.com/~lynn/2001i.html#31 3745 and SNI
https://www.garlic.com/~lynn/2001j.html#13 Parity - why even or odd (was Re: Load Locked (was: IA64 running out of steam))
https://www.garlic.com/~lynn/2001j.html#20 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#45 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#21 OT: almost lost LBJ tapes; Dictabelt
https://www.garlic.com/~lynn/2001k.html#23 more old RFCs
https://www.garlic.com/~lynn/2001k.html#42 HP Compaq merger, here we go again.
https://www.garlic.com/~lynn/2001k.html#46 3270 protocol
https://www.garlic.com/~lynn/2001l.html#23 mainframe question
https://www.garlic.com/~lynn/2001l.html#34 Processor Modes
https://www.garlic.com/~lynn/2001m.html#19 3270 protocol
https://www.garlic.com/~lynn/2001m.html#48 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#2 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#15 Replace SNA communication to host with something else
https://www.garlic.com/~lynn/2001n.html#87 A new forum is up! Q: what means nntp
https://www.garlic.com/~lynn/2002.html#7 The demise of compaq
https://www.garlic.com/~lynn/2002.html#32 Buffer overflow
https://www.garlic.com/~lynn/2002.html#45 VM and/or Linux under OS/390?????
https://www.garlic.com/~lynn/2002.html#48 Microcode?

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Computer Naming Conventions

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computer Naming Conventions
Newsgroups: alt.folklore.computers
Date: Wed, 13 Feb 2002 20:30:28 GMT
eugene@cse.ucsc.edu (Eugene Miya) writes:
Yeah, but the BIG difference Lynn, was the chronology (the time). In 1969 your employer turned those guys down saying that the ARPAnet would not work.

Similarly, I was somewhat heartened to see Bitnet break away from the monolithic view of SNA. On the other hand, their attempts at revisionism would be laughable were it not that so many people believe them.

vnet didn't have the network node limitations ... and it effectively had a flavor of "gateway" support from the beginning ... allowing HASP/JES nodes to be attached on the boundary thru a gateway.

I think the most interesting email demo I had in San Jose (before Almaden) was watching the ACK receipts of an email to Zurich ... I believe it was being queued in NY before hopping the Atlantic.


also, the internal network tended towards much more of a usenet store&forward dial-up model. The arpanet could dictate a certain level of service and availability ... while the internal network was much more laissez-faire during the '70s, with a high percentage of dial-up links and per-minute charges.

somebody once described the arpanet of the mid-70s as a postal system where somebody wanted to send a piece of mail from nyc to some place like tokyo and it had to pass thru at least one postal center in every timezone between nyc and tokyo ... and before the envelope was accepted at the post office in nyc ... they had to check first to verify that the lobby of every post office in every time zone between nyc and tokyo was actually open and operating at that particular moment (otherwise the piece of mail couldn't be accepted ... aka end-to-end realtime operation).

much of the arpanet design point was that some end-to-end connection was up and operational at least part of the time ... the internal network design point was much more analogous to usenet, assuming there was a low probability of having actual total end-to-end operation a significant percentage of the time.
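
the difference in miniature ... a toy sketch, where links_up is a hypothetical predicate for "this link is operational right now":

def accept_end_to_end(path, links_up):
    # arpanet/ncp style: the whole path must be up right now or the
    # mail can't be accepted at all
    return all(links_up(a, b) for a, b in zip(path, path[1:]))

def accept_store_and_forward(path, links_up):
    # vnet/usenet style: only the next hop matters; if a later hop is
    # down the mail just waits in a queue somewhere along the way
    return links_up(path[0], path[1])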

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

ibm vnet : Computer Naming Conventions

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ibm vnet : Computer Naming Conventions
Newsgroups: alt.folklore.computers
Date: Thu, 14 Feb 2002 03:42:34 GMT
eugene@cse.ucsc.edu (Eugene Miya) writes:
A site had to be willing to commit $100K annually. That was and still is not a small amount of change.

i'm just talking about the number of people attending arpanet meetings, designing arpanet, writing arpanet code, designing IMPs, building IMPs, writing IMP code, designing the host/IMP interface, writing host/IMP code, etc (not the resources that an installation needed to install one, the cost of the 56kbit dedicated links, etc).

That was significantly more in total than the one person that did the design, code, etc for vnet (including effectively gateway support in each node from the beginning ... which arpanet didn't see until the cut-over to IP on 1/1/83) ... using standard installed hardware.

the corporate gurus claimed that a fully distributed network of the type of either the internal network (or arpanet) couldn't be done (architected, designed, coded, tested, etc) with the resources available (to say nothing of being essentially all done by a single person).

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Computer Naming Conventions

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computer Naming Conventions
Newsgroups: alt.folklore.computers
Date: Thu, 14 Feb 2002 17:42:11 GMT
jcmorris@mitre.org (Joe Morris) writes:
I wuzzn't there (I was a Big Blue customer despite a couple of attempts to persuade me to go to work for IBM) -- but it seemed from the outside that one of the major advantages enjoyed by VNET was that it did not have upper management buy-in...meaning that design decisions were made by people who knew what they were doing.

one of the most difficult types of people are the ones that think they know what they are doing.

the lack of executive buy-in meant that not only was all the development done on a shoestring (limiting the group to a very small number of people was a good thing) ... but so were the other budget items ... specifically line costs ... which tended then towards 9.6kbit and dial-up. The dial-up orientation then further biased the design & implementation towards more of a usenet-type operation ... because of the low probability that there would be actual end-to-end connectivity at any particular moment thru a multi-hop network.

also remember it was done at CSC ... responsible for CP/67, VM/370, GML, online computing, editors, etc ... which also had very low executive "buy-in".

it was also KISS ... looking at pu4/pu5 in the mid-70s ... that was at the opposite end from KISS (again some conjecture that it might have had something to do with the PCM work we did when I was an undergraduate) ... there was some analysis of all the interlocking interdependencies for a single customer that had a remote banking-like terminal on a remote shop floor and wanted to upgrade from leased-line SNA to dialup SNA:

1) it required new microcode in the terminal
2) that required a new NCP release/gen in the 3705
3) which required a new VTAM release/gen in the mainframe
4) which required a new MVS release/gen in the mainframe

... all of the above were interdependent and had to be coordinated as a single change (as they discovered over an extended period of time). The other problem was that because new releases were being introduced, there was an extensive change-control process at each point. New releases tended to introduce new bugs ... which had to be individually resolved ... frequently in a process that involved dropping back to the previous release as part of a lengthy resolution process.

The above coordinated release/gen transition could take 2-3 days to deploy and another 2-3 days to back out ... and the associated resolution process could take a number of weeks. The switch from leased line to dial-up took something like a year elapsed time ... and possibly thousands of person-hours.

The internal JES2 networking node management had similar problems. Changing the device/node table in JES2 required a "gen", which came under the change-control group's responsibility and was treated as part of a general new JES2 gen ... carefully controlled and scheduled possibly 4-10 times/year. Because of the lengthy interval between changes ... multiple accumulated changes were scheduled for each transition ... which aggravated the possibility of a transition problem ... further motivating the change control group to rigidly control the process and minimize the number of such events per year (a feedback loop: fewer change events per year, more changes done in a single batch, increased probability that something would fail).

during a period in the '70s when there might be 700-800 nodes in the internal network ... and a typical jes2 node could only define 170-200 such nodes ... the local jes2 people tended to create a node definition priority list for their user community ... a complex business process where users escalated requirements for replacing some existing node definition in the local JES2 table with something they felt was higher priority. This would percolate along for a couple months, all changes would be "frozen" for a testing period of a couple weeks, the big change would happen ... and then it would start all over again.

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Filesystem namespaces (was Re: Serving non-MS-word .doc files (wasRe: PDP-10 Archive migrationplan))

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Filesystem namespaces (was Re: Serving non-MS-word .doc files (wasRe: PDP-10 Archive migrationplan))
Newsgroups: alt.folklore.computers
Date: Fri, 15 Feb 2002 17:44:02 GMT
Eric Smith <eric-no-spam-for-me@brouhaha.com> writes:
SMP just means that the hardware and operating system are symmetric. It's entirely reasonable to want to control the distribution of tasks. For instance, in an SMP system you might want to reserve more processing power for engineering jobs over accounting jobs, or vice versa, so you could control which queues the jobs were allowed to enter based on account. Historically this was often done on SMP systems (and on AMP systems). Also, if you added a new CPU to the system, you might want to only run diagnostic processes on it until you were satisfied that it was reliable.

we had some number of situations in the '70s with 370 158s & 168s where system thruput was much higher in asymmetric mode than in symmetric mode. the problem was that asynchronous interrupts did terrible things to cache hit ratios. a 158 with both processors symmetrically doing all work had each processor running at about .9mips. a two-processor configuration with one doing no i/o and the other processor doing i/o plus normal work ... had the "i/o plus normal work" processor running at .9mips ... and the other "no i/o, no interrupts" processor running at around 1.4mips.
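
doing the aggregate arithmetic on those numbers: symmetric was .9 + .9 = 1.8mips total, while asymmetric was .9 + 1.4 = 2.3mips total ... roughly 28% more thruput from the same hardware, purely from keeping asynchronous interrupts (and the cache pollution they caused) off one of the engines.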

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Filesystem namespaces (was Re: Serving non-MS-word .doc files (wasRe: PDP-10 Archive migrationplan))

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Filesystem namespaces (was Re: Serving non-MS-word .doc files (wasRe: PDP-10 Archive migrationplan))
Newsgroups: alt.folklore.computers
Date: Fri, 15 Feb 2002 19:46:15 GMT
"GerardS" writes:
Yes, there is a reason. Cache. Caches (on some computers) are specific to the engine they are attached to. IBM's VM at one point made it a point to try to (re)dispatch a user on the same engine (before an interrupt occurred). When you are dispatching many hundreds if not thousands of users per second, it really makes a performance impact.
------------------------------------
Gerard S.

note: not just processor/cache affinity for application code (and associated cache hit ratios) but also kernel code processor/cache affinity.

the original multiprocessor code that I did in VM for the 158 & 168 sort of had this natural affinity ... a high probability that applications would tend to stay on the same processor and "keep" their cache lines. A couple releases later ... they wanted explicit code for this and put in a bunch of cross-processor coordination ... the immediate effect was to increase total processor utilization by the kernel by an additional 15% (with both processors 100% busy ... the after-effect was that, of total processor time, an additional 15% showed up as being used by the kernel ... and as a result there was 15% less processor going to applications).

now that shift in percent busy ... say from 40% kernel & 60% application to 55% kernel & 45% application ... is different from the previous posting about achieving a higher cache hit ratio and correspondingly higher system thruput, where more instructions were executing in a given amount of time.
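
a minimal sketch of the soft-affinity preference described above ... not the actual VM dispatcher code, just the shape of the idea:

from dataclasses import dataclass

@dataclass
class Task:
    last_cpu: int | None = None   # engine whose cache this task last warmed

def pick_cpu(task: Task, idle_cpus: set[int]) -> int | None:
    # prefer re-dispatching on the same engine, so the task finds its
    # cache lines still resident
    if task.last_cpu in idle_cpus:
        return task.last_cpu
    # otherwise take any idle engine; the point is a cheap preference,
    # not expensive cross-processor coordination
    return next(iter(idle_cpus), None)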

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

TOPS-10 logins (Was Re: HP-2000F - want to know more about it)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TOPS-10 logins (Was Re: HP-2000F - want to know more about it)
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Sat, 16 Feb 2002 18:01:08 GMT
jmfbahciv writes:
Other crashes usually meant that the monitor had no idea how it got here from wherever and that it was "safer" to just stop than continue going. Those were system stopcodes; "we're in a mess; let's not do any more damage than we may have done".

Sheesh! Don't those other OSes have a mechanism that does a graceful crash?


CP/67 normally wrote the core image associated with the cause of the crash to disk, tidied up a bunch of stuff, and then did an automatic reboot.

Tom Van Vleck
http://www.multicians.org/thvv/

has a story of CP/67 crashing some 30 times in a single day (at the MIT urban systems lab, I believe in 575? ... across the courtyard from 545 tech sq, which housed the cambridge science center and also multics) ... taking less than a couple minutes to crash and recover in each case ... compared to multics at the time taking an hour or two to recover.

the "user" filesystems was responsibility of CMS ... over which CP/67/monitor had no direct control/interaction. In part CMS used a process of careful replace with disk never in inconsistent state since a CP/67 crash essentially would instantly vaporize the virtual machines. Original CMS filesystem always wrote new/changed metadata to new disk locations and then did a rewrite of record 4 to indicate the old version or new version of metadata. In mid'70s the "new" cms filesystem updated that to ping/pong back & forth between writing record 4/5 with a latest version indicator in the record. On restart, both record 4/5 were read and the one with the newest version indicator was taken. This was to handle a failure mode where if there was a total power failure just at the moment the MFD record was being rewritten ... it could leave things in an inconsistent state.

in any case, the CP/67 crashing problem was a combination of a local (MIT urban lab) CP/67 modification and the tty/ascii support that I had done for cp/67 as an undergraduate (remember, back then a large body of software was all open systems & fully distributed source was the norm ... it wasn't until sometime in the '70s that that started to change). I had taken some short-cuts in the code that calculated the length of an incoming message ... assuming that it would never exceed a max that could be represented in a single byte (i.e. <255). I think the MIT urban lab had gotten some ascii device at Harvard that had a max length of something like 400 ... they changed the maximum length specification for the device ... but didn't update my code that calculated the actual length. So they put up the modified system, and every time a long line came in there was a buffer overlay someplace and the system crashed.
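
the shortcut in miniature ... python can't actually corrupt memory, so this just shows the truncated length arithmetic that led to the overlay:

def calc_len(msg):
    # the shortcut: length kept in single-byte arithmetic ... fine as
    # long as no device ever presents a line longer than 255
    return len(msg) & 0xFF

print(calc_len(b"x" * 100))   # 100 -- correct
print(calc_len(b"x" * 400))   # 144 -- wrong: the 400-byte line still
# arrives in full, anything sized or checked against the truncated
# length gets overrun, and the system crashes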

The "comparison" of the couple minute recovery of cp/67 to the hour or more recovery for Multics ... supposedly was one of the things that prompted the filesystem rewrite for Multics.

random refs:
https://www.garlic.com/~lynn/subtopic.html#545tech
https://www.garlic.com/~lynn/submain.html#360pcm

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Filesystem namespaces (was Re: Serving non-MS-word .doc files (was Re: PDP-10 Archive migrationplan))

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Filesystem namespaces (was Re: Serving non-MS-word .doc files (was Re: PDP-10 Archive migrationplan))
Newsgroups: alt.folklore.computers
Date: Sun, 17 Feb 2002 05:56:41 GMT
Brian Inglis writes:
That allows you 8.75 hours of unscheduled downtime a year: that's pretty unreliable, but then, hardware can be unpredictable at times. Some systems can't have that much scheduled downtime per year! That's enough time for about 17 bounces per year, averaging 30 minutes each from shutdown to available. Should be plenty for all but experimental or OS development/test systems!

the 1-800 number lookup needed five-nines availability ... both scheduled and unscheduled. it had been using a fault-tolerant box ... but even standard system maintenance required taking the system down and bringing up a new build ... and a single such event, even only every couple years, blew the outage budget.

we proposed a traditional cluster to handle the servicing issue. It wasn't really a problem because the SS7 was fully fault tolerant and had a pair of T1s out the back-side ... and if the 1-800 lookup didn't come back on one of the T1s ... it would redrive the request on the other T1. The cluster was relatively trivial ... since the fault masking for a T1 communication failure would automatically handle cluster operation and single-node failure masking as well.

the counter-proposal was a cluster of fault-tolerant boxes ... but that turned out to be unnecessary expense ... the fault tolerance in the SS7, plus the failure-masking logic already in place for T1 failure modes, combined with non-fault-tolerant (but high availability) clusters, met the five-nines objective. a cluster of fault-tolerant boxes significantly increased the expense over a solution that already met requirements (aka redundancy is sufficient if there is at least some fault-tolerant component in the configuration capable of masking individual component failure modes).
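
for scale, the outage budgets implied by those availability figures:

# outage budget implied by an availability target
for label, avail in (("three nines", 0.999), ("five nines", 0.99999)):
    minutes = (1 - avail) * 365 * 24 * 60
    print(f"{label}: {minutes:7.1f} minutes/year")
# three nines:  525.6 minutes/year (~8.75 hours ... the budget quoted above)
# five nines:     5.3 minutes/year ... a single maintenance rebuild blows it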

random refs:
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

... the need for a Museum of Computer Software

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ... the need for a Museum of Computer Software
Newsgroups: alt.folklore.computers
Date: Fri, 15 Feb 2002 18:24:16 GMT
Anne & Lynn Wheeler writes:
When CSC was trying to get compare&swap into the 370 architecture (primarily charlie ... since compare&swap was mnemonic chosen for

an offline question regarding csc ... it was the cambridge science center ... around 30-35 people on about 2/3rds of the 4th floor, 545 tech. sq, cambridge (multics had a number of floors in the same bldg).

it had been hoping to be the center of multics activity if ibm had won the bid. after not getting the multics bid ... the POK part of the corporation went off with tss/360 and the 360/67 virtual memory machine in the early-to-mid '60s. while that was in progress, CSC modified a 360/40 (they had wanted to get a 360/50 ... but all the available machines were going to the FAA) to support virtual memory and developed cp/40.

Later, when 360/67 machines became available (late '66?), CP/40 was ported to the 360/67 and became cp/67 with virtual memory support; misc. recent postings:
https://www.garlic.com/~lynn/2002b.html#6 Microcode?
https://www.garlic.com/~lynn/2002b.html#45 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002b.html#46 ... the need for a Museum of Computer Software

as mentioned, CSC was also responsible for compare&swap, GML, various online applications, the internal network, etc. It had also done some of the early work on the transition from performance tuning to capacity planning. CSC had done the port of apl\360 to cms\apl, opening up the workspace size restriction ... and had done quite a bit of work adapting the APL memory management algorithms to a virtual memory environment.

CSC had done a lot of performance tuning along with performance modeling work written in APL (leading to a lot of the capacity planning stuff). The APL performance predictor was deployed on the field/sales HONE system (originally cp/67-based, built by CSC for all field/sales forces around the world ... eventually upgraded to vm/370; i got some of my first overseas trips installing HONE outside the US). The performance predictor allowed sales people to characterize the performance operation and workload of a customer installation and then ask "what-if" questions regarding adding a bigger cpu, more disk, larger real memory, etc.

https://www.garlic.com/~lynn/subtopic.html#545tech
https://www.garlic.com/~lynn/subtopic.html#hone

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/
