List of Archived Posts

2004 Newsgroup Postings (09/08 - 10/04)

Xah Lee's Unixism
Xah Lee's Unixism
IBM 3090 : Was (and fek that) : Re: new computer kits
Xah Lee's Unixism
Xah Lee's Unixism
Xah Lee's Unixism
Xah Lee's Unixism
Xah Lee's Unixism
x9.99 privacy impact assessemnt (PIA) standard
Vintage computers are better than modern crap !
Complex Instructions
I am an ageing techy, expert on everything. Let me explain the
Xah Lee's Unixism
IBM 7094 Emulator now runs Fortran compiler
Xah Lee's Unixism
Xah Lee's Unixism
Xah Lee's Unixism
IBM 3090 : Was (and fek that) : Re: new computer kits
FW: Looking for Disk Calc program/Exec
FW: Looking for Disk Calc program/Exec (long)
Is the solution FBA was Re: FW: Looking for Disk Calc
"Perfect" or "Provable" security both crypto and non-crypto?
Is the solution FBA was Re: FW: Looking for Disk Calc
Is the solution FBA was Re: FW: Looking for Disk Calc
Is the solution FBA was Re: FW: Looking for Disk Calc
md5 algorithm
CTSS source online
Shipwrecks
Shipwrecks
FW: Looking for Disk Calc program/Exec
I am an ageing techy, expert on everything. Let me explain the
Shipwrecks
Shipwrecks
I am an ageing techy, expert on everything. Let me explain
I am an ageing techy, expert on everything. Let me explain the
I am an ageing techy, expert on everything. Let me explain
FW: Looking for Disk Calc program/Exec
I am an ageing techy, expert on everything. Let me explain
Actuarial facts
I am an ageing techy, expert on everything. Let me explain
"Perfect" or "Provable" security both crypto and non-crypto?
"Perfect" or "Provable" security both crypto and non-crypto?
Acient FAA computers???
Actuarial facts
Shipwrecks
"Perfect" or "Provable" security both crypto and non-crypto?
Shipwrecks
Shipwrecks
Shipwrecks
"Perfect" or "Provable" security both crypto and non-crypto?
Acient FAA computers???
Specifying all biz rules in relational data
Specifying all biz rules in relational data
Specifying all biz rules in relational data
No visible activity
Access to AMD 64 bit developer centre
project athena & compare and swap
Lock-free algorithms
Specifying all biz rules in relational data
Lock-free algorithms
Losing colonies
Shipwrecks
Some Laws
Actuarial facts
Detergent
computer industry scenairo before the invention of the PC?
Lock-free algorithms
Lock-free algorithms
Lock-free algorithms
IBMism
computer industry scenairo before the invention of the PC?
a history question
Specifying all biz rules in relational data
Specifying all biz rules in relational data
Specifying all biz rules in relational data
NULL
Actuarial facts
Tera

Xah Lee's Unixism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Newsgroups: comp.lang.perl.misc,alt.folklore.computers,comp.lang.lisp,comp.lang.python,comp.unix.programmer
Date: Wed, 08 Sep 2004 19:01:39 -0600
"John Thingstad" writes:
Internet was discovered long before this. (In 1965 a research project, by the Rand Corporation, for a network that could survive a nuclear attack. Sponsored by DARPA. These are the real creators of the Internet technology. Not Unix hackers.) It was the realization of www (CERN) that spawned the movement toward the Internet.

So the year in question is about 1987.


packet networking was "discovered" in the 60s(?) ... but it was homogeneous networking with pretty much homogeneous infrastructure implementation.

the great switch-over to internetworking protocol was 1/1/83.

i've frequently asserted that one of the reasons that the internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet

was larger than the arpanet from just about the beginning until sometime mid-85 ... was because the internal network nodes effectively had a form of gateway functionality ... which showed up in the internetworking protocol switchover on 1/1/83.

packet switching technology for the (homogeneous) arpanet is somewhat orthogonal to internetworking protocol technology .... which was deployed in the great switchover on 1/1/83.

some minor other references:
http://www.garlic.com/~lynn/internet.htm

CERN and SLAC were sister sites, did some amount of common tool development, used common infrastructures and were big users of GML .... which had been developed at the science center circa 1970
http://www.garlic.com/~lynn/subtopic.html#545tech

which morphed into SGML and then html, xml, etc. SLAC had the first web server outside of europe .... running on a vm/cms system
http://www.slac.stanford.edu/history/earlyweb/history.shtml

the distinction of internetworking protocol isn't packet switching ... it is gateways and interoperability of lots of different kinds of networking.
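
a toy illustration of that distinction (this is not any historical code; the header bytes and network names below are made up): an internetworking datagram passes unchanged through dissimilar networks, while each network wraps it in its own local framing and the gateway strips that framing back off.

```python
# Toy sketch: internetworking = a common datagram format surviving
# across networks with completely different local framing, via gateways.

def send_across(datagram: bytes, networks: list) -> bytes:
    """Pass a datagram through a chain of dissimilar networks via gateways."""
    for net in networks:
        frame = net["wrap"](datagram)     # local framing differs per network
        datagram = net["unwrap"](frame)   # gateway strips it; datagram intact
    return datagram

# hypothetical framings -- the 7-byte and 3/3-byte headers are invented
arpanet = {"wrap": lambda d: b"1822HDR" + d, "unwrap": lambda f: f[7:]}
ethernet = {"wrap": lambda d: b"ETH" + d + b"CRC", "unwrap": lambda f: f[3:-3]}

msg = b"datagram"
assert send_across(msg, [arpanet, ethernet]) == msg  # datagram unchanged
```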

OSI can support x.25 packet switching and/or even the arpanet packet switching from the 60s & 70s .... but it precludes internetworking protocol. internetworking protocol (aka internet for short) is a (non-existent) layer in an OSI protocol stack between layer3/networking and layer4/transport. misc. osi (& other) comments
http://www.garlic.com/~lynn/subnetwork.html#xtphsp

the switch-over to internetworking protocol on 1/1/83 somewhat also coincided with the expanding role of csnet activity ... and more & more NSF involvement .... compared to the extensive earlier arpa/darpa involvement; aka csnet ... and then nsfnet1 backbone rfp and then nsfnet2 enhanced backbone rfp.

misc. internet and nsfnet related history pointers:
http://www.garlic.com/~lynn/rfcietf.htm#history

the proliferation of the internetworking protocol and use in the commercial sector was also happening during the 80s .... which you could start to see by (1988) at the interop '88 show. misc. interop '88 references:
http://www.garlic.com/~lynn/subnetwork.html#interop88

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Xah Lee's Unixism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Newsgroups: comp.lang.perl.misc,alt.folklore.computers,comp.lang.lisp,comp.lang.python,comp.unix.programmer
Date: Wed, 08 Sep 2004 19:22:56 -0600
Morten Reistad writes:
Since I am on a roll with timelines; just one off the top of my head :

Project start : 1964
First link : 1969
Transatlantic : 1972 (to Britain and Norway)
Congested : 1976
TCP/IP : 1983 (the effort started 1979) (sort of a 2.0 version)
First ISP : 1983 (uunet, EUnet followed next year)
Network Separation : 1983 (milnet broke out)
Large-scale design: 1987 (NSFnet, but still only T3/T1's)
Fully commercial : 1991 (With the "CIX War")
Web launched : 1992
Web got momentum : 1994
Dotcom bubble : 1999 (but it provided enough bandwidth for the first time)
Dotcom burst : 2001


nsfnet1 backbone RFP
http://www.garlic.com/~lynn/2002k.html#12

misc. reference to award announcement
http://www.garlic.com/~lynn/2000e.html#10

was for backbone between regional locations ... it was supposed to be T1 links. What was installed were IDNX boxes that supported point-to-point T1 links between sites ... and multiplexed 440kbit links supported by racks & racks of PC/RTs with 440kbit boards ... at the backbone centers.

the t3 upgrades came with the nsfnet2 backbone RFP

my wife and i somewhat got to be the red team for the design of both nsfnet1 and nsfnet2 RFPs.

note that there was commercial internetworking protocol use long before 1991 ... in part evidenced by the heavy commercial turn-out at interop '88
http://www.garlic.com/~lynn/subnetwork.html#interop88

the issue leading up to the cix war was somewhat whether commercial traffic could be carried over the nsf funded backbone .... the internetworking protocol enabling the interconnection and heterogeneous interoperability of large numbers of different "internet" networks.

part of the issue was that increasing commercial use was starting to bring down costs (volume use) .... so that a purely nsfnet operation was becoming less and less economically justified (an nsfnet-only operation cost more and delivered less service than what was starting to show up on the commercial side).

part of the issue was that there was significant dark fiber in the ground by the early 80s and the telcos were faced with a significant dilemma .... if they dropped the bandwidth price by a factor of 20 and/or offered up 20 times the bandwidth at the same cost .... it would be years before the applications were available to drive the bandwidth revenue to the point where they were taking in sufficient funds to cover their fixed operating costs. so some of the things you saw happening were controlled bandwidth donations (in excess of what might be found covered by gov. RFPs) to educational institutions by large commercial institutions .... for strictly non-commercial use

Such enormous increases in bandwidth availability in a controlled manner for the educational market would hopefully promote the development of bandwidth hungry applications. They (supposedly) got a tax-deduction for their educational-only donations .... and it wouldn't be made available for the commercial paying customers (i.e. so as not to 1. violate the tax-deduction status of the donation and not 2. shift commercial traffic to donated bandwidth).

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

IBM 3090 : Was (and fek that) : Re: new computer kits

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 3090 : Was (and fek that) : Re: new computer kits
Newsgroups: alt.folklore.computers
Date: Wed, 08 Sep 2004 19:45:42 -0600
jsavard@excxn.aNOSPAMb.cdn.invalid (John Savard) writes:
Speaking of that, I found that there aren't any pictures of the IBM 360/85 on the Web.

Since the 91 and 95 looked just like the 195, I suspect that the 85 might look like a 75, but I don't know for sure.


i don't remember seeing an 85 .... but a 75 was supposedly a hardwired 65 (i.e. the 65 was a standard microcoded 360 .... the 360/75 had the same memory infrastructure as the 360/65 ... but instead of microcode the same operations were hardwired).

supposedly the 360/85 was precursor to the 370/155 ....

in any case the ibm history site has some number of pictures.

3090 picture
http://www-1.ibm.com/ibm/history/exhibits/mainframe/mainframe_2423PH3090.html

following discussion about 360
http://www-1.ibm.com/ibm/history/exhibits/mainframe/mainframe_intro2.html
mentions memory from 8k to 512k .... which would be true for up to the 360/50. however, the 360/65, 360/67, & 360/75 could have 1mbyte standard memory. the two processor 360/65 & two processor 360/67 could have two megabytes of standard memory in a single configuration

this is 370/145 picture
http://www-1.ibm.com/ibm/history/exhibits/mainframe/mainframe_intro3.html
the left side of the picture could be a 2305 paging drum ... so it is likely a vm/cms installation ... since few other systems that ran on the 370/145 did enough paging to justify a 2305.

mainframes product profiles page
http://www-1.ibm.com/ibm/history/exhibits/mainframe/mainframe_profiles.html

this is a 360/75 page from the above reference page
http://www-1.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP2075.html
and it doesn't look like a 65/67

the front of a 360/65
http://www-1.ibm.com/ibm/history/exhibits/mainframe/mainframe_2423PH2065C.html

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Xah Lee's Unixism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Newsgroups: comp.lang.perl.misc,alt.folklore.computers,comp.lang.lisp,comp.lang.python,comp.unix.programmer
Date: Wed, 08 Sep 2004 20:08:30 -0600
Morten Reistad writes:
Since I am on a roll with timelines; just one off the top of my head :

Project start : 1964
First link : 1969
Transatlantic : 1972 (to Britain and Norway)
Congested : 1976
TCP/IP : 1983 (the effort started 1979) (sort of a 2.0 version)
First ISP : 1983 (uunet, EUnet followed next year)
Network Separation : 1983 (milnet broke out)
Large-scale design: 1987 (NSFnet, but still only T3/T1's)
Fully commercial : 1991 (With the "CIX War")
Web launched : 1992
Web got momentum : 1994
Dotcom bubble : 1999 (but it provided enough bandwidth for the first time)
Dotcom burst : 2001


oh, and here is a recent reference to some bitnet activity:
http://www.garlic.com/~lynn/2004k.html#66
in the listserv history section

some general bitnet/earn posts:
http://www.garlic.com/~lynn/subnetwork.html#bitnet

more than 20 year old email reference about earn
http://www.garlic.com/~lynn/2001h.html#65

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Xah Lee's Unixism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Newsgroups: comp.lang.perl.misc,alt.folklore.computers,comp.lang.lisp,comp.lang.python,comp.unix.programmer
Date: Thu, 09 Sep 2004 07:27:23 -0600
Reynir Stefánsson writes:
Wasn't the idea behind ISO/OSI that there should be One Network for everybody, instead of today's lot of interconnected nets?

interconnection and interoperability happen at both a protocol level and an operational level .... being able to have both independence and interoperability offers a huge number of advantages.

i don't know what the original idea was .... however, my impression from looking at what it became .... was that it sprang from a telco point-to-point copper wire orientation. iso/osi even precludes LANs.

the work on high speed protocol ... which would go directly from the level4/transport layer to the LAN/MAC interface ... was precluded in ISO standards organizations because it didn't conform to the OSI model for two reasons

1) it skipped the OSI level4/level3 transport/network interface and was therefore precluded in ISO standards bodies

2) it went directly to the LAN/MAC interface .... the LAN/MAC interface is not allowed for in the OSI model ... so therefore interfacing to the LAN/MAC interface would be a violation of the OSI model

... the sort of third reason was that it would also incorporate the internetworking layer within its functionality .... also a violation of the OSI model.

misc. past comments
http://www.garlic.com/~lynn/subnetwork.html#xtphsp

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Xah Lee's Unixism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Newsgroups: comp.lang.perl.misc,alt.folklore.computers,comp.lang.lisp,comp.lang.python,comp.unix.programmer
Date: Thu, 09 Sep 2004 07:42:20 -0600
Morten Reistad writes:
It was an upgrade from 56k. The first versions of NSFnet were not really scalable either; no one knew quite how to design a really scalable network, so that came as we went.

we had a project that i called HSDT
http://www.garlic.com/~lynn/subnetwork.html#hsdt

for high-speed data transport ... to differentiate it from a lot of stuff at the time that was communication oriented ... and it had real T1 (in some cases clear-channel T1 w/o the 193rd bit) and higher speed connections. It had an operational backbone ... and we weren't allowed to directly bid NSFNET1 .... although my wife went to the director of NSF and got a technical audit. The technical audit summary said something to the effect that what we had running was at least five years ahead of all NSFNET1 bid submissions to build something new.
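
the "193rd bit" remark refers to standard DS1/T1 framing: each 125-microsecond frame carries one 8-bit sample for each of 24 channels plus one framing bit, so framed T1 leaves 1.536 of the 1.544 mbit/sec line rate for data, while a clear-channel link makes the full rate usable. a minimal sketch of that arithmetic (the numbers are the public DS1 figures, not figures taken from this post):

```python
# DS1/T1 framing arithmetic: 24 channels x 8 bits + 1 framing bit
# per frame, 8000 frames per second.
CHANNELS = 24
BITS_PER_CHANNEL = 8      # one PCM sample per channel per frame
FRAMING_BITS = 1          # the "193rd bit"
FRAMES_PER_SEC = 8000     # one frame every 125 microseconds

frame_bits = CHANNELS * BITS_PER_CHANNEL + FRAMING_BITS       # 193
line_rate = frame_bits * FRAMES_PER_SEC                       # 1,544,000 bit/s
framed_payload = CHANNELS * BITS_PER_CHANNEL * FRAMES_PER_SEC # 1,536,000 bit/s

print(f"T1 line rate:   {line_rate} bit/s")
print(f"framed payload: {framed_payload} bit/s")
```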

one of the other nagging issues was that all links on the internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet

had to be encrypted. at the time, not only were there not a whole lot of boxes that supported full T1 and higher speed links ... but there also weren't a whole lot of boxes that supported full T1 and higher speed encryption.

a joke i like to tell ... which occurred possibly two years before the NSFNET1 RFP announcement ... was about a posting defining "high-speed" .... earlier tellings:
http://www.garlic.com/~lynn/94.html#33b High Speed Data Transport (HSDT)
http://www.garlic.com/~lynn/2000b.html#69 oddly portable machines
http://www.garlic.com/~lynn/2000e.html#45 IBM's Workplace OS (Was: .. Pink)
http://www.garlic.com/~lynn/2003m.html#59 SR 15,15
http://www.garlic.com/~lynn/2004g.html#12 network history

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Xah Lee's Unixism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Newsgroups: comp.lang.perl.misc,alt.folklore.computers,comp.lang.lisp,comp.lang.python,comp.unix.programmer
Date: Thu, 09 Sep 2004 09:02:33 -0600
Morten Reistad writes:
But with a PM you had to do a cold start. All the disks had to be spun down, filters changed, and they had to spin for an ungodly long time after the filter change before heads could be enabled again. This was to bring all the dust that was let loose in the process into the new filters before heads went to fly over the platters again.

Also power supplies had to be checked for the dreaded capacitor problems. Tape drives also had these. These were industry-wide problems; and news of a few burned UPSes over the last couple of months tells me that the capacitor problems are still with us.

It was a real accomplishment when we in 1988 could do a full PM (Prime gear) without shutting down the system. All disks were mirrored, and all power duplicated, so we shut down half of the hardware and did PM on that; and took the other half next week.

SMD filters were used at a quite high rate; even inside well filtered rooms. ISTR 6 months was a pretty long interval between PM's.


360s, 370s, etc differentiated between smp ... which was either symmetrical multiprocessing or shared memory (multi-)processing ... and loosely-coupled multiprocessing (clusters).
http://www.garlic.com/~lynn/subtopic.html#smp

in the 70s, my wife did a stint in POK responsible for loosely-coupled multiprocessing architecture and came up with peer-coupled shared data
http://www.garlic.com/~lynn/submain.html#shareddata

also in the 70s, i had done a re-org of the virtual memory infrastructure for vm/cms. part of it was released as something called discontiguous shared memory ... and other pieces of it were released as part of the resource manager having to do with page migration (moving virtual pages between different backing store devices).
http://www.garlic.com/~lynn/subtopic.html#fairshare
http://www.garlic.com/~lynn/subtopic.html#wsclock
http://www.garlic.com/~lynn/submain.html#mmap
http://www.garlic.com/~lynn/submain.html#adcon

in the mid-70s, one of the vm/cms timesharing service bureaus
http://www.garlic.com/~lynn/submain.html#timeshare

was starting to offer 7x24 service to customers around the world; one of the issues was being able to still schedule PM .... when there was never a time that nobody was using the system. they were already providing support for loosely-coupled operation, similar to HONE
http://www.garlic.com/~lynn/subtopic.html#hone

for scalability & load balancing. what they did in the mid-70s was to expand the "page migration" ... to include all control blocks ... so that processes could be migrated off one processor complex (in a loosely-coupled environment) to a different processor complex ... so a processor complex could be taken offline for PM.

in the late '80s, we started the high availability, cluster multiprocessing project:
http://www.garlic.com/~lynn/subtopic.html#hacmp

of course the airline res system had been doing similar things on 360s starting in the 60s.

totally random references to airline res systems, tpf, acp, and/or pars:
http://www.garlic.com/~lynn/96.html#29 Mainframes & Unix
http://www.garlic.com/~lynn/99.html#17 Old Computers
http://www.garlic.com/~lynn/99.html#100 Why won't the AS/400 die? Or, It's 1999 why do I have to learn how to use
http://www.garlic.com/~lynn/99.html#103 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
http://www.garlic.com/~lynn/99.html#136a checks (was S/390 on PowerPC?)
http://www.garlic.com/~lynn/99.html#152 Uptime (was Re: Q: S/390 on PowerPC?)
http://www.garlic.com/~lynn/2000b.html#20 How many Megaflops and when?
http://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
http://www.garlic.com/~lynn/2000b.html#65 oddly portable machines
http://www.garlic.com/~lynn/2000c.html#60 Disincentives for MVS & future of MVS systems programmers
http://www.garlic.com/~lynn/2000e.html#21 Competitors to SABRE? Big Iron
http://www.garlic.com/~lynn/2000e.html#22 Is a VAX a mainframe?
http://www.garlic.com/~lynn/2000f.html#20 Competitors to SABRE?
http://www.garlic.com/~lynn/2001.html#26 Disk caching and file systems. Disk history...people forget
http://www.garlic.com/~lynn/2001b.html#37 John Mashey's greatest hits
http://www.garlic.com/~lynn/2001d.html#69 Block oriented I/O over IP
http://www.garlic.com/~lynn/2001e.html#2 Block oriented I/O over IP
http://www.garlic.com/~lynn/2001g.html#35 Did AT&T offer Unix to Digital Equipment in the 70s?
http://www.garlic.com/~lynn/2001g.html#45 Did AT&T offer Unix to Digital Equipment in the 70s?
http://www.garlic.com/~lynn/2001g.html#46 The Alpha/IA64 Hybrid
http://www.garlic.com/~lynn/2001g.html#47 The Alpha/IA64 Hybrid
http://www.garlic.com/~lynn/2001g.html#49 Did AT&T offer Unix to Digital Equipment in the 70s?
http://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
http://www.garlic.com/~lynn/2001n.html#0 TSS/360
http://www.garlic.com/~lynn/2001n.html#3 News IBM loses supercomputer crown
http://www.garlic.com/~lynn/2002c.html#9 IBM Doesn't Make Small MP's Anymore
http://www.garlic.com/~lynn/2002g.html#2 Computers in Science Fiction
http://www.garlic.com/~lynn/2002g.html#3 Why are Mainframe Computers really still in use at all?
http://www.garlic.com/~lynn/2002h.html#12 Why did OSI fail compared with TCP-IP?
http://www.garlic.com/~lynn/2002h.html#43 IBM doing anything for 50th Anniv?
http://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?
http://www.garlic.com/~lynn/2002i.html#83 HONE
http://www.garlic.com/~lynn/2002j.html#83 Summary: Robots of Doom
http://www.garlic.com/~lynn/2002m.html#67 Tweaking old computers?
http://www.garlic.com/~lynn/2002n.html#29 why does wait state exist?
http://www.garlic.com/~lynn/2002o.html#28 TPF
http://www.garlic.com/~lynn/2002p.html#58 AMP vs SMP
http://www.garlic.com/~lynn/2003.html#48 InfiniBand Group Sharply, Evenly Divided
http://www.garlic.com/~lynn/2003c.html#30 diffence between itanium and alpha
http://www.garlic.com/~lynn/2003d.html#67 unix
http://www.garlic.com/~lynn/2003g.html#30 One Processor is bad?
http://www.garlic.com/~lynn/2003g.html#32 One Processor is bad?
http://www.garlic.com/~lynn/2003g.html#37 Lisp Machines
http://www.garlic.com/~lynn/2003j.html#2 Fix the shuttle or fly it unmanned
http://www.garlic.com/~lynn/2003k.html#3 Ping: Anne & Lynn Wheeler
http://www.garlic.com/~lynn/2003n.html#47 What makes a mainframe a mainframe?
http://www.garlic.com/~lynn/2003p.html#45 Saturation Design Point
http://www.garlic.com/~lynn/2004.html#24 40th anniversary of IBM System/360 on 7 Apr 2004
http://www.garlic.com/~lynn/2004.html#49 Mainframe not a good architecture for interactive workloads
http://www.garlic.com/~lynn/2004.html#50 Mainframe not a good architecture for interactive workloads
http://www.garlic.com/~lynn/2004b.html#6 Mainframe not a good architecture for interactive workloads
http://www.garlic.com/~lynn/2004b.html#7 Mainframe not a good architecture for interactive workloads
http://www.garlic.com/~lynn/2004c.html#35 Computer-oriented license plates
http://www.garlic.com/~lynn/2004e.html#44 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004f.html#58 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004g.html#14 Infiniband - practicalities for small clusters

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Xah Lee's Unixism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Newsgroups: comp.lang.perl.misc,alt.folklore.computers,comp.lang.lisp,comp.lang.python,comp.unix.programmer
Date: Thu, 09 Sep 2004 10:29:16 -0600
Morten Reistad writes:
In 1987 T1's (or E1's on this end of the pond) were pretty normal; T3's were state of the art. But it is not very difficult to design interfaces that shift the data into memory; and 1987-ish computers could handle a few hundred megabits worth of data pipe without too much trouble; but you needed direct DMA access, not some of the then-standard busses or channels.

IBM always designed stellar hardware for such things; what was normally needed was the software. To see what Cisco got away with regarding lousy hardware (GS-series) is astonishing.

There was a large job to be done to handle routing and network management issues. BGP4 didn't come out until 1994, nor did a decent OSPF or SNMP.


even in the mid-80s .... for t1/e1 ... the only (ibm) support was the really old 2701 and the special zirpel card in the Series/1 that had been done for FSD.

in fall 1986, there was a technology project out of la gaude that was looking at a T1 card for the 37xx ... however, the communication division wasn't really planning on T1 until at least 1991. They had done a customer survey. since ibm (mainframe) didn't have any T1 support ... they looked at customers that were using the 37xx "fat pipe" support that allowed ganging of multiple 56kbit links into a single logical unit. they plotted the number of ganged 56kbit links that customers had installed .... 2-56kbit links, 3-56kbit links, 4-56kbit links, 5-56kbit links. However, they found no customers with more than five ganged 56kbit links in a single fat-pipe. Based on that they weren't projecting any (mainframe) T1 usage before 1991.

what they didn't appear to realize was that the (us) tariffs at the time had a cross-over where five or six 56kbit links were about the same price as a single T1. so what was happening was that customers that hit five or six 56kbit links ... were making the transition directly to T1 and then using non-IBM hardware to drive the link (which didn't show up in the communication division's 37xx high-speed communication survey). hsdt easily identified at least 200 customers with T1 operation (using non-ibm hardware support) at the time the communication division wasn't projecting any mainframe T1 support before 1991.
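
the bandwidth side of that crossover is easy to check (the tariff prices themselves aren't given in the post; only the standard link rates below are used):

```python
# Why the fat-pipe survey stopped at five links: once five or six 56kbit
# tariffs cost about the same as one T1, customers jumped to T1 instead.
KBIT_56 = 56_000      # one leased 56kbit line, bit/s
T1 = 1_544_000        # T1 line rate, bit/s

fat_pipe_5 = 5 * KBIT_56   # largest ganged config the survey found

print(fat_pipe_5)          # aggregate of five ganged 56kbit links
print(T1 / KBIT_56)        # a T1 carries ~27.6 such channels
print(T1 / fat_pipe_5)     # ~5.5x the biggest surveyed fat pipe
```

so at roughly the same tariff price, the T1 customer got more than five times the bandwidth of the largest fat-pipe configuration the survey could see.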

the lack of T1 support (other than the really old 2701 and the fairly expensive zirpel-series/1 offering) ... was one of the reasons that the NSFNET1 response went with (essentially) a pbx multiplexor on the point-to-point telco T1 links ... with the actual computer links running at 440kbit/sec using the pc/rt 440kbit/sec cards.

hsdt
http://www.garlic.com/~lynn/subnetwork.html#hsdt

had several full-blown T1 links since the early 80s ... and was working with a project for a full-blown ISA 16-bit T1 card ... with some neat crypto tricks.

I think it was supercomputing 1990 (or 1991?) in austin where they were demo'ing T3 links to offsite locations.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

x9.99 privacy impact assessemnt (PIA) standard

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: x9.99 privacy impact assessemnt (PIA) standard
Newsgroups: alt.privacy
Date: Thu, 09 Sep 2004 12:35:26 -0600
x9.99 is through its public comment period in ansi and is now a standard ... we've been working on it for nearly the past two years.

as part of the work, i had started a privacy taxonomy and glossary ... some notes at
http://www.garlic.com/~lynn/index.html#glosnote

it is no longer listed in the public comment section of the ansi electronic store ... aka
http://webstore.ansi.org/RecordDetail.aspx?sku=ANSI+X9.99%3a2004

and we have a couple minor nits to take care of before it is put out in its final form.

main page for ansi electronic store
http://webstore.ansi.org/ansidocstore/default.asp?

x9 standards page:
http://www.x9.org/

x9.99 blurb
http://www.x9.org/whatsnew.shtml#insertc

there is a longer article in the spring 2004 x9 newsletter
http://www.x9.org/nwsltr/X9Standard0404.pdf

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Vintage computers are better than modern crap !

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Vintage computers are better than modern crap !
Newsgroups: alt.folklore.computers
Date: Thu, 09 Sep 2004 12:41:13 -0600
Alan Balmer writes:
Portability or not, I'm a lot more productive in C than in assembly, and the same is true for everyone else I know who is fluent in both. In fact, for the HP-UX system I'm programming for now, I shudder at the idea of programming it in assembly.

when i was doing various vm kernel stuff in assembler at the science center (early '70s)
http://www.garlic.com/~lynn/subtopic.html#545tech

there were two performance modeling projects .... one i've mentioned before was analytical model in APL ... that grew into performance predictor marketing tool on hone (and the early foundation for capacity planning):
http://www.garlic.com/~lynn/subtopic.html#hone

and the other was an event driven model implemented in PLI.

both parties somewhat complained that i was implementing (in assembler) and deploying new production performance enhancements .... faster than they could implement the corresponding changes to the models.

misc. past refs:
http://www.garlic.com/~lynn/subtopic.html#fairshare
http://www.garlic.com/~lynn/submain.html#bench

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Complex Instructions

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Complex Instructions
Newsgroups: alt.sys.pdp10,alt.folklore.computers
Date: Thu, 09 Sep 2004 13:58:19 -0600
Brian Inglis writes:
DEC machines and drives never achieved the thruput of IBM machines and drives, despite problems with the latter's hardware and firmware. One important application benchmark we ran at a PPOE took 24 hours on a PDP-11/70 or VAX-11/780 and 30 minutes on a comparable 4341. And IBM offered 24x7 ASAP maintenance contracts compared to DEC's we'll call you back within 4 hours during business hours, and we'll bill you extra outside those hours if we decide to respond.

the 4341 would have possibly done even better in the market ... but there were some issues with it being competitive with the 3031 ... and/or groups of 4341s being competitive with the 3033 .... which led to some internal conflicts ... some past ref:
http://www.garlic.com/~lynn/2001m.html#15 departmental servers

for some additional thread drift ... one of the more complex operations was luther's radix partition tree stuff
http://www.garlic.com/~lynn/2001h.html#73 Most complex instructions
http://www.garlic.com/~lynn/2002d.html#18 Mainframers: Take back the light (spotlight, that is)

other past threads relating complex instructions
http://www.garlic.com/~lynn/2001i.html#2 Most complex instructions (was Re: IBM 9020 FAA/ATC Systems from 1960's)
http://www.garlic.com/~lynn/2001i.html#3 Most complex instructions (was Re: IBM 9020 FAA/ATC Systems from 1960's)
http://www.garlic.com/~lynn/2002c.html#29 Page size (was: VAX, M68K complex instructions)
http://www.garlic.com/~lynn/2002c.html#37 VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
http://www.garlic.com/~lynn/2002c.html#39 VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
http://www.garlic.com/~lynn/2002c.html#53 VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
http://www.garlic.com/~lynn/2002d.html#0 VAX, M68K complex instructions (was Re: Did Intel Bite Off MoreThan It Can Chew?)
http://www.garlic.com/~lynn/2002h.html#21 PowerPC Mainframe
http://www.garlic.com/~lynn/2002.html#14 index searching
http://www.garlic.com/~lynn/2003n.html#13 CPUs with microcode ?
http://www.garlic.com/~lynn/2004c.html#22 More complex operations now a better choice?

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

I am an ageing techy, expert on everything. Let me explain the

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: I am an ageing techy, expert on everything. Let me explain the
Middle East
Newsgroups: alt.folklore.computers
Date: Thu, 09 Sep 2004 21:19:30 -0600
Larry Elmore <ljelmore_@_comcast_._net> writes:
I think perhaps they were more rigid culturally, which tended to undermine their officers' and NCOs' training. I think Americans really were more flexible and adaptable, but our training was a lot more uneven. Even as late as December 1944, there were some instances of severe confusion and mass surrenders. The Russians and Japanese certainly learned a lot more slowly (as organizations) than did the Americans or Germans.

boyd's comments were that the germans had much more professional soldiers ... people who knew and understood their craft. one claimed result was that the german army was something like 3 percent officers ... compared to something like 15 percent officers for the americans ... needed to operate a much more top-down, structured organization.

his observation was that many of the young officers taught top-down, structured organization in WW2 ... were becoming the corporate executives of the 70s & 80s ... mirroring the top-down, structured operation that they had been taught in their youth ... where decisions were made at as high a level as possible ... and not even necessarily by people who understood the related craft ... but by people who believed that controlling the organization was the primary objective.

going into the 90s ... you started to see some reversal of this trend, with instances of large corporations flattening massive middle management organizations (i.e. the equivalent of the large officer corps of ww2).

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Xah Lee's Unixism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Newsgroups: comp.lang.perl.misc,alt.folklore.computers,comp.lang.lisp,comp.lang.python,comp.unix.programmer
Date: Fri, 10 Sep 2004 08:42:43 -0600
Morten Reistad writes:
SMD: the TLA that represents a washing-machine size disk. Mountable. Made impressive head crashes from time to time.

But I won't interfere with this lovely thread drift with lots of relevant facts.


the first disks i played with at the univ. were 2311s on a 360/30; they were individual, top-loading units with mountable disk packs; a 2311 disk pack held a little over 7mbytes. didn't find a picture of a 2311 ... but this picture of a 1311 is similar ... the lid of the unit was released and raised (something like an auto engine hood)
http://www-1.ibm.com/ibm/history/exhibits/storage/storage_1311.html

the next were 2314s that came with the 360/67. it was a long single unit with drive drawers that slid out, top & bottom rows holding 9 drives. drives had addressing plugs .... eight plus a spare. a 2314 pack could be mounted on the spare drive, spun up .... and then the addressing plug pop'ed from an active unit and put in the spare drive. it reduced the elapsed time that the system saw an unavailable drive (time to power off a drive, open the drawer, remove a pack, place in a new pack, close the drawer, power up the drive). a 2314 pack was about 29 mbytes. picture of 2314 cabinet
http://www-1.ibm.com/ibm/history/exhibits/storage/storage_2314.html

the next were the 3330s ... a long cabinet unit that looked similar to the 2314 ... but with only 8 drawers (instead of 9). a 3330-i pack held 100mbytes ... the later 3330-ii pack held 200mbytes. picture of a 3330 unit ... the three clear plastic cover units on top of the unit were used to remove a disk pack and hold it.
http://www-1.ibm.com/ibm/history/exhibits/storage/storage_PH3330.html

close up of 3330 disk pack in its storage case ... also has picture of 3850 tape cartridges
http://www-1.ibm.com/ibm/history/exhibits/storage/storage_PH3850B.html

misc. other storage pictures:
http://www-1.ibm.com/ibm/history/exhibits/storage/storage_photo.html

next big change was 3380 drives with totally enclosed, non-mountable cabinet.

old posting on various speeds and feeds
http://www.garlic.com/~lynn/95.html#8 3330 disk drives

and some more old performance data
http://www.garlic.com/~lynn/95.html#10 virtual memory

i had written a report that relative disk system performance had declined by a factor of ten over a period of 10-15 years. the disk division assigned their performance group to refute the claim. they looked at it for a couple of months and concluded that i had somewhat understated the relative system performance decline ... that it was actually more. the issue was that other system components had increased in performance by 40-50 times ... while disks had only increased in performance by 4-5 times ... making relative disk system performance 1/10th what it had been. misc. past posts about the gpd performance group looking at the relative system performance issue:
http://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
http://www.garlic.com/~lynn/2002h.html#29 Computers in Science Fiction
http://www.garlic.com/~lynn/2002i.html#18 AS/400 and MVS - clarification please
http://www.garlic.com/~lynn/2002k.html#22 Vnet : Unbelievable
http://www.garlic.com/~lynn/2002n.html#63 Help me find pics of a UNIVAC please
http://www.garlic.com/~lynn/2004d.html#3 IBM 360 memory
http://www.garlic.com/~lynn/2004e.html#16 Paging query - progress
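The arithmetic behind that 1/10th figure is simple enough to sketch (the 45x and 4.5x speedups below are illustrative round numbers taken from the middle of the post's 40-50x and 4-5x ranges, not measured data):

```python
# Back-of-envelope check of the relative-performance claim in the post.
# If overall system throughput improved ~45x over the period while disk
# throughput improved only ~4.5x, the share of system performance the
# disks can sustain falls by ~10x.

def relative_disk_performance(system_speedup, disk_speedup):
    """Ratio of disk speedup to overall system speedup over the same period."""
    return disk_speedup / system_speedup

ratio = relative_disk_performance(system_speedup=45, disk_speedup=4.5)
print(ratio)  # 0.1 -- disks deliver ~1/10th the relative performance they had
```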

it was possibly one of the things contributing to the disk division providing funding for the group up in berkeley ... misc. references
http://www.garlic.com/~lynn/2002e.html#4 Mainframers: Take back the light (spotlight, that is)
http://www.garlic.com/~lynn/2002l.html#47 Do any architectures use instruction count instead of timer
http://www.garlic.com/~lynn/2004d.html#29 cheaper low quality drives

i used to wander around bldgs 14 & 15 and eventually worked on redoing kernel software for their use. misc. past posts about disk engineering and product test labs:
http://www.garlic.com/~lynn/subtopic.html#disk

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

IBM 7094 Emulator now runs Fortran compiler

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 7094 Emulator now runs Fortran compiler
Newsgroups: alt.folklore.computers
Date: Fri, 10 Sep 2004 10:24:06 -0600
jsavard@excxn.aNOSPAMb.cdn.invalid (John Savard) writes:
Speaking of FORTRAN:

I was recalling an incident that happened about 25 years ago just the other day in a conversation.

A friend was noting that FORTRAN had some obscure features. What was the assigned GO TO for, given that the computed GO TO can do everything it can do, and much more?


dusty decks, software history, early fortran ... etc
http://www.mcjones.org/dustydecks/archives/category/software-history/

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Xah Lee's Unixism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Newsgroups: comp.lang.perl.misc,alt.folklore.computers,comp.lang.lisp,comp.lang.python,comp.unix.programmer
Date: Fri, 10 Sep 2004 15:10:08 -0600
Morten Reistad writes:
These are the IBM gear that most resemble SMB equipment. SMD's were the BUNCH answer to DEC's RP04/5/6 and IBM's 3330. Originally made by CDC; others also produced them. NCR and Fujitsu come to mind.

Originally existed as 80-megabyte, pretty light units (30 kg); later expanded to 160-megabyte. Then the real washing machines turned up; 300 mb (315 unformatted megabytes). Originally 4 on a chain, 15 mbit analogue readout (MFM ISTR; they never tried RLL).

These were a mainstay among the smaller mini vendors from approx 1974 to the advent of winchesters around a decade later. The earliest winchesters were exact hardware replicas of the SMD. Then the spec was expanded and became ESMD, but ESMD was never as robustly standardized. Sacrifices of goats, PHBs and undergraduates were needed to stabilize long ESMD chains.


some number of the senior disk engineers left in the late '60s and early '70s .... fueling the shugart, seagate, memorex, cdc, etc disk efforts. in fact, the excuse given (latter half of the 70s) for dragging me into the bldg. 14 disk engineering conference calls with the pok cpu&channel engineers was that so many of the senior disk engineers (that were familiar with the channel interface) had left.

random disk history URLs from around the web:
http://www.old-computers.com/history/detail.asp?n=51&t=2
http://www.computerhistory.org/events/lectures/shugart_09052002/shugart/
http://www.logicsmith.com/hdhistory.html
http://www.thetech.org/exhibits/online/revolution/shugart/i_a.html
http://www.disktrend.com/disk3.htm

search engine even turns up one of my posts that somebody appears to be shadowing at some other site:
http://public.planetmirror.com/pub/lynn/2002.html#17 of course the original
http://www.garlic.com/~lynn/2002.html#17

in the previous posting
http://www.garlic.com/~lynn/2004l.html#12
this reference
http://www.garlic.com/~lynn/95.html#8
also gave the speeds and feeds for 3350 (including 317mbyte capacity).

the 1970s washing machines were the 3340s & 3350s ... but the 3350s were enclosed and not removable/mountable; the 3340s .... which had removable/mountable packs .... had the head assembly & platters completely enclosed in the removable module.

3340 (winchester) reference, picture includes removable assembly on top of drives ("3348 data module"):
http://www-1.ibm.com/ibm/history/exhibits/storage/storage_3340.html
http://www-1.ibm.com/ibm/history/exhibits/storage/storage_3340b.html

picture of row of 3350 drives is similar to that of 3340s ... except the 3350 packs weren't removable and had much larger capacity
http://www-1.ibm.com/ibm/history/exhibits/storage/storage_3350.html

postings reference product code names:
http://www.garlic.com/~lynn/2001l.html#53 mainframe question
http://www.garlic.com/~lynn/2003b.html#7 Disk drives as commodities. Was Re: Yamhill

3340-35 was code named Winchester and, as per the IBM 3340 URL, began shipping to customers november, 1973.

we had a joke when the 3380s were introduced about filling them completely full. if you converted an installation with say 32 3350 drives .... to 16 3380s (sufficient to hold 32 3350 drives worth of data, 10gbytes) ... you could have worse performance ... while 3380s were faster than 3350s, they weren't twice as fast. the proposal was to have a special microcode load for the 3880 controller .... which would only support half of a 3380 disk drive. There were a number of customer people (mostly techies) at share who thought it would be a good idea ... and furthermore that ibm should price these half-sized 3380s higher than full-sized 3380s (to make the customer executives feel like they were getting something special). They would be called "fast" 3380s (because avg. seek only involved half as many cylinders) and it was important that the limitation be built into the hardware and be priced higher. It was recognized that installations could create their own "fast" 3380s ... just by judicious allocation of data and no special microcode. However, it was pretty readily acknowledged that w/o the hardware enforced restrictions, there were all sorts of people that populate datacenters who would be unable to control themselves and would fully allocate each 3380.
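A rough back-of-envelope for the consolidation in the joke (the 3350 capacity of 317mbytes and the drive counts come from the posts above; everything else is just arithmetic, not spec data):

```python
# Consolidating 32 3350s onto 16 3380s roughly doubles the data sitting
# behind each actuator/head assembly, so per-drive contention can get
# worse even though each 3380 is individually faster than a 3350.

N_3350, CAP_3350_MB = 32, 317   # 3350 capacity per the post's 95.html#8 reference
N_3380 = 16

total_mb = N_3350 * CAP_3350_MB          # the "10gbytes" in the post
data_per_actuator_before = CAP_3350_MB
data_per_actuator_after = total_mb / N_3380

print(total_mb)                                            # 10144
print(data_per_actuator_after / data_per_actuator_before)  # 2.0
```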

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Xah Lee's Unixism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Newsgroups: comp.lang.perl.misc,alt.folklore.computers,comp.lang.lisp,comp.lang.python,comp.unix.programmer
Date: Fri, 10 Sep 2004 18:00:48 -0600
somewhat thread drift between ssa disk storage
http://www.garlic.com/~lynn/95.html#13 SSA

ha/cmp
http://www.garlic.com/~lynn/subtopic.html#hacmp

and electronic commerce
http://www.garlic.com/~lynn/aadsm5.htm#asrn2
http://www.garlic.com/~lynn/aadsm5.htm#asrn3

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Xah Lee's Unixism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Newsgroups: comp.lang.perl.misc,alt.folklore.computers,comp.lang.lisp,comp.lang.python,comp.unix.programmer
Date: Fri, 10 Sep 2004 22:26:20 -0600
"John W. Kennedy" writes:
There was also an OS/360 version, but it was never as popular, since A) OS/360 console operators are usually busy enough and B) IEBGENER wasn't all that hard to use.

And, yes, there was a similar early program called DEBE.


similar to the stand-alone, self-loading (bootable) DEBE was LLMPS ... lincoln labs multiprogramming system .... a self-loading program with a small multitasker; most of the features/functions provided were similar to DEBE.

the folklore is that LLMPS was also used as the core scaffolding for MTS (michigan terminal system)

... misc. ref to LLMPS manual:
http://www.garlic.com/~lynn/2000g.html#0 TSS ancient history, was X86 ultimate CISC? designs)

random other refs to LLMPS
http://www.garlic.com/~lynn/93.html#15 unit record & other controllers
http://www.garlic.com/~lynn/93.html#23 MTS & LLMPS?
http://www.garlic.com/~lynn/93.html#25 MTS & LLMPS?
http://www.garlic.com/~lynn/93.html#26 MTS & LLMPS?
http://www.garlic.com/~lynn/98.html#15 S/360 operating systems geneaology
http://www.garlic.com/~lynn/2000.html#89 Ux's good points.
http://www.garlic.com/~lynn/2001m.html#55 TSS/360
http://www.garlic.com/~lynn/2001n.html#45 Valid reference on lunar mission data being unreadable?
http://www.garlic.com/~lynn/2001n.html#89 TSS/360
http://www.garlic.com/~lynn/2002n.html#54 SHARE MVT Project anniversary
http://www.garlic.com/~lynn/2002n.html#64 PLX
http://www.garlic.com/~lynn/2003f.html#41 SLAC 370 Pascal compiler found
http://www.garlic.com/~lynn/2004d.html#31 someone looking to donate IBM magazines and stuff

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

IBM 3090 : Was (and fek that) : Re: new computer kits

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 3090 : Was (and fek that) : Re: new computer kits
Newsgroups: alt.folklore.computers
Date: Sat, 11 Sep 2004 09:50:36 -0600
Julian Thomas writes:
I think that the 75 had a different data flow, and used a different flavor of SLT modules. I believe it had a humongous console compared to the 65. It is true that the 65 was microcoded and the 75 hardwired.

the previously mentioned 360/75 page
http://www-1.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP2075.html

mentions 750nsec memory, up to 4-way interleave, and up to 1mbyte.

it also mentions that it was an upgrade(?) to the previously announced 360/70.

i have recollections of the 360/60, 360/62 and 360/70 announcements having been made with 1usec memory ... and the upgrade to 750nsec memory, with up to 4-way interleave, resulted in the 360/65, 360/67, and 360/75 announcements.

the front panel in the 360/75 picture looks different than the referenced 65 picture
http://www-1.ibm.com/ibm/history/exhibits/mainframe/mainframe_2423PH2065C.html

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

FW: Looking for Disk Calc program/Exec

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FW: Looking for Disk Calc program/Exec
Newsgroups: alt.folklore.computers
Date: Sat, 11 Sep 2004 14:12:55 -0600
x-post from bit.listserv.ibm-main

steve@ibm-main.lst (Steve Comstock) writes:
OK, if you're going to reminisce about the horrors, who remembers the 3950 Mass Storage System? My last project with IBM was writing customer education for that product.

3850 ... the los gatos lab had one for awhile.

it was virtual 3330 .... here is 2321 data cell and 3850 mss
http://www.science.uva.nl/faculteit/museum/remarkable.html

here is several pictures of 3850
http://www.columbia.edu/cu/computinghistory/mss.html

the original emulated multiple 3330-1 (100mbyte) virtual disks on real 3330 disks. later there was support for emulating 3330 virtual disks on real 3350s (which wasn't too unusual ... there was also the 3344 ... multiple emulated 3340s on a 3350 physical drive).

picture from ibm archives:
http://www-1.ibm.com/ibm/history/exhibits/storage/storage_3850.html
http://www-1.ibm.com/ibm/history/exhibits/storage/storage_3850b.html

from somewhat fading memory ... the virtual 3330s had two modes of staging .... full (100mbyte) pack .... and 6-cylinder staging (cylinder "faults") .... somewhat more analogous to paging .... units of 6 3330 cylinders could be transferred to/from a real 3330 and a 3850 tape cartridge.

in vm there was a virtualization issue with whether it was

1) managing 3330s .... in which case the cp kernel was supposed to handle cylinder faults & staging w/o passing them to the virtual machine ...

2) managing 3850s ... in which case the cp kernel needed to pass the cylinder faults up to the virtual machine ... and let the virtual machine talk directly to the 3850 controller for staging.
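A minimal sketch of the 6-cylinder staging described above, treated like demand paging. This is my reconstruction for illustration only, not actual 3850 microcode or cp kernel code; the 404-cylinder figure is the nominal 3330-1 geometry.

```python
STAGE_GROUP = 6  # cylinders staged per fault, per the post

class Virtual3330:
    """Virtual 3330 pack whose cylinders live on tape until staged."""

    def __init__(self, cylinders=404):          # nominal 3330-1 cylinder count
        self.cylinders = cylinders
        self.staged = set()                     # staged 6-cylinder group numbers
        self.faults = 0

    def read(self, cylinder):
        group = cylinder // STAGE_GROUP
        if group not in self.staged:            # cylinder "fault": stage the
            self.faults += 1                    # whole 6-cylinder group from
            self.staged.add(group)              # tape cartridge to real disk
        return group

v = Virtual3330()
for cyl in (0, 3, 5, 6, 100):
    v.read(cyl)
print(v.faults)   # 3 -- cylinders 0, 3, 5 share one group; 6 and 100 each fault
```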

various mss stories from vmshare archives
http://vm.marist.edu/~vmshare/browse?fn=MSS&ft=MEMO#77

i think cornell univ. did an extension to vm that would automagically (pre-)stage cms minidisk(s) when a user logged on ... computing at cornell 1970 to 1979
http://dspace.library.cornell.edu/retrieve/141/Chapter_4

and for some topic drift ... the above mentions NBER and the TROLL system for econometric modeling .... which i have some recollection of running on the "other" 360/67 cp/cms in tech sq (in the tech sq bldg. across the courtyard ... which had harvard trust on the 1st floor).

so a little search engine use turns up
http://hopl.murdoch.edu.au/showlanguage.prx?exp=630&language=TROLL

which does say NBER was at 575 tech sq ... and TROLL
Time-shared Reactive On-Line Laboratory

Array language for continuous simulation, econometric modeling, statistical analysis.


... this mentions 360/67 and cp/cms at the MIT Urban Systems Laboratory (USL)
http://www.multicians.org/thvv/360-67.html

i seem to remember that NBER had outsourced and/or was running its computing on the USL cp/cms machine.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

FW: Looking for Disk Calc program/Exec (long)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FW: Looking for Disk Calc program/Exec (long)
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 12 Sep 2004 08:25:23 -0600
Anne & Lynn Wheeler writes:
which does say NBER was at 575 tech sq ... and TROLL

Time-shared Reactive On-Line Laboratory

Array language for continuous simulation, econometric modeling, statistical analysis.


and even more drift, old article mentioning econometrics and MIT
n541 0304 19 Oct 83
By David Warsh

The United States of America vs. International Business Machines Corp. (IBM) antitrust case has been over for nearly two years now, and the accounts of it are beginning to tumble from the presses. It is clear, in retrospect, how silly it all was.

In the first place, the government case rested on a misapprehension, that IBM wouldn't unbundle its computers and sell them separately. Then the case was overtaken by the headlines. An industry of ''IBM and the seven dwarfs'' - eight companies specializing in utility-like mainframe computers - had given way to a veritable zoo of minicomputer and microcomputer manufacturers, peripheral equipment makers and software vendors, long before the government folded its hand.

Finally, IBM put up a very spirited defense. The briefs have been published now, half last spring by the MIT press under the title ''Folded, Spindled and Mutilated,'' the other half last month by Praeger as ''IBM and the U.S. Data Processing Industry: An Economic History.'' The common thread, the principal author of both books, is an owlish Concord man and MIT professor named Franklin Fisher.

Early on, IBM lawyers had figured they would need a good economist to organize their economic defense, and to serve as an expert witness. They knew where to look. ''(IBM house lawyer) Nicholas deB. Katzenbach asked (MIT's) Carl Kaysen, and Kaysen had recommended me,'' says Fisher. ''Carl was my tutor when I was an undergraduate at Harvard and he was one of the few people who knew the secret that industrial organization was a serious interest of mine.''

Fisher tumbled in. ''There is one Section 2 (of the Sherman Antitrust Act) case every 10 years, a fresh interpretation of the law once every 25 years. So here was the biggest antitrust case of our time. I thought, 'It's my chance to have a serious impact on antitrust policy. It's true I'm doing it for defense, but that's one of the ways to do it.' '' He loved it. There was a year of briefings, of plant tours, of sharpening pencils. Then came the private antitrust suits, Telex and Greyhound. The deeper he got, the harder he worked.

The trouble was that Fisher is among the most talented economists in the world, a man at the very top of a science. For a decade, he edited the journal Econometrica. He was elected to the American Academy of Arts and Sciences in 1969. He won the John Bates Clark Medal in 1973, a prize that is awarded only every two years to the best American economist under 40. Figure that the medal had been awarded eight times since Fisher's 1956 graduation from Harvard College. That made him, by the common consent of his peers, one of the top 10 economists of his generation.

Yet in effect, Fisher forsook his relationship with the MIT economics department - a department that is fiercely proud of its scientific detachment - for a decade of bloodsports with Cravath, Swaine and Moore, IBM's white-shoe outside law firm. It cost him dearly. ''I was one of the central figures in this department when this began; maybe I'm still one of the central figures, but sure in the mid-1970s I wasn't. I want to be careful about this ... not that there is anything to be careful about, but I want to be fair. I cannot say that the MIT department in any official way treated me badly or anything like that. That's not true; in fact the contrary, they were very nice to me.

''Still, it is unquestionably true that my colleagues individually plainly disapproved of what I was doing ... . I stepped outside the ethic. I did something that economists don't do. I did adversary work for money. I don't think most of my colleagues understand what the kind of work was, or the kind of excitements involved in it. I'm hoping that if they read this book, they'll decide, yes, maybe it was worth doing. It was a little like a nobleman going into trade.''

Fisher is eloquent on the temptations of the life of action. ''Life with Cravath was like nothing I'd ever seen. First, even before the case came to trial, those guys worked 12 to 16 hours a day. Those guys worked all the time. Second, they lived pretty high. One of the compensations for that was that they ate in terrific restaurants and they traveled around the country first class, sometimes in the company plane, and the strains on their personal lives were tremendous, far greater than on mine. I was involved in that, I was living off the same expense account.''

More compelling, he says, was the sense of solidarity that trial work engendered. (''Someday perhaps it will be pleasing to remember these things,'' is the stoic line in Latin at the end of the string of acknowledgements to lawyers, executives, consultants and the rest who participated in the defense.) ''It was exciting! I miss that. There was the sense that I was helping to move forward a great event. Things had to be done every day ... . There was a sense of high drama. You had to be up. I had a certain amount of withdrawal when I woke up a couple of years ago to the fact that there was nothing that desperately had to be done.''

There was the money, too: ''It was less than most people thought I made. I have yet to hear a rumor that was lower than the amount I made. I did not make $1 million aggregate from the IBM case. I was on it for 12 years. There's a sense in which no amount of money could compensate me for 12 years. I did OK, less OK than most people think.''

Perhaps the cruelest blow was that in the end the government simply folded its hand; it walked away from the courtroom. The case was thrown out, not decided. ''The IBM trial staff of the antitrust department really did not understand basic economics and the economists who testified for them either were quite weak analytically or were simply misled,'' says Fisher. But the point was that the Big Casino of a courtroom victory was denied him: the Supreme Court is not going to quote his testimony, at least on the IBM case. (Thirteen of 14 judges found for IBM on the private cases.) Fisher faced a real Hobson's choice: he could publish his testimony; or he could put it away.

He published - and the results make fascinating reading. The lawyers love it, generally; the economists are not so sure. The trouble is that the vital element of disinterest is missing. Fisher is well aware of the problem. At the beginning and end of the book, he writes movingly of the pitfalls that await the scientist who begins to think like a lawyer. Yet he has not entirely avoided those pitfalls - for example, ask him what he learned from the case and he tells you how good a company IBM is, how customer-oriented, how quality-control-conscious, how defensive it was in the wake of the consent decree - the words of an advocate, not an analyst.

There was high irony here, for there is some reason to think that the government's case against IBM was built intellectually on the epic economic case constructed by Carl Kaysen for use by the antitrust division in its successful 1950s lawsuit against United Shoe Manufacturing Corp. If so, it was a wholly inappropriate analogy, according to Kaysen. USM sold to little customers in a highly stable market, where barriers to entry were formidable. IBM operated in a fast-changing marketplace where customers had plenty of alternatives.

Frank Fisher has returned to economics. He has a new and very rarefied book on disequilibrium foundations of economics due out this fall; another work on price indices is almost ready for the press. He is a more substantial pillar of the community than ever, both as a scientist and as a citizen. He serves on many boards, has built a large and lucrative antitrust consulting practice. His stature and influence are approaching those of his mentor Kaysen.

But the sad fact is that the IBM work is being received skeptically. The episode cost him something more than the dozen years. ''In the middle 1970s ... . I sat there thinking what on Earth am I doing here, wasting my life, straining my relations with my department, certainly not getting anything done. By the end of the 1970s, when I started writing my testimony, certainly when I started writing the book, I thought, this has really turned out very well. I've been away for a long time but I've done something really serious.'' The differing satisfactions of the man of affairs and the scientist were never clearer.

END


--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Is the solution FBA was Re: FW: Looking for Disk Calc

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is the solution FBA was Re: FW: Looking for Disk Calc
 program/Exec
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 12 Sep 2004 11:49:02 -0600
cfmtech@ibm-main.lst (Clark F. Morris, Jr.) writes:
Would the solution be to have FBA for all future development? This would be for only the FBA oriented access methods, VSAM, HFS, PDSE, etc. A new spool access method (possibly use the standard Unix file system using standard Unix spool methods) would be needed. Those applications using existing geometries would be supported with existing devices and emulations thereof but there would be no enhancement of them. There would have to be an FBA oriented tape access method but RCA figured out how to do that over 25 years ago in their follow-on to TDOS (I think that was the Spectra operating system). A GDG facility would have to be added to VSAM, at least for ESDS data sets but again, we could have used that years ago. The co-existence period would be 10 - 15 years. In terms of approach, I suspect only the IBM mainframe still has a geometry oriented disk approach although the Unisys 2200 series follow-on may be equally arcane. The advantage of FBA for new development and access methods that already are FBA means there is less chance of breaking existing code. Incidentally, FBA device handling should be designed so that devices can be single file system design if that makes sense: HFS only, VSAM only, PDSE only, etc..

Not related but possibly easiest to make provision for at time of change is the enlarging of all name fields. 8 bytes really is restrictive for member names compared to other environments. 44 characters is restrictive compared to other environments. In addition the EBCDIC character set will be slowly supplanted by Unicode for many other environments and I believe that z/OS should follow suit. All told the 64 bit upgrade is an ideal time to map a long term strategy because so many control blocks are afflicted anyway. In general field and record size limitations that once made sense have become obsolete. Several of the financial sites I use have passwords that range from 8 to 32 characters with special characters allowed. Many sites have a sign-on id greater than 8 characters. If we want uniqueness within a complex plus a modicum of user friendliness, 8 is not enough.

In general, we and IBM have to determine if it is worth evolving the mainframe rather than working toward migrating to platforms that don't have many of the limitations we face.


over 20 years ago ... the statement was that even if fully integrated code was dropped on the doorstep ... it would still cost (at the time) $26m to ship it. the FBA solution also addressed a lot of channel extension latencies ... as well as heavy resource usage by various multi-track implementations.
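To illustrate the FBA-vs-geometry point in the discussion above: a CKD address is a (cylinder, head, record) triple tied to a particular device's geometry, while FBA is just a flat block number. A hedged sketch of the linearization (the 3330-style head count is real, but the fixed records-per-track is an illustrative assumption — actual CKD track capacity varies with record size — and none of this is actual access-method code):

```python
HEADS_PER_CYL = 19        # the 3330 had 19 tracks per cylinder
RECORDS_PER_TRACK = 10    # illustrative fixed blocking; real CKD varies

def ckd_to_fba(cyl, head, record):
    """Map a geometry-bound CKD address to a flat FBA-style block number."""
    return (cyl * HEADS_PER_CYL + head) * RECORDS_PER_TRACK + record

print(ckd_to_fba(0, 0, 0))   # 0
print(ckd_to_fba(1, 2, 3))   # 213
```

With fixed-size blocks the access method never needs to know the seek/head geometry at all, which is much of why FBA-only access methods would be cheaper to enhance.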

misc. past references to providing FBA support to the access method group ... and to the cost of shipping fully tested and integrated FBA support:
http://www.garlic.com/~lynn/97.html#16 Why Mainframes?
http://www.garlic.com/~lynn/97.html#29 IA64 Self Virtualizable?
http://www.garlic.com/~lynn/99.html#75 Read if over 40 and have Mainframe background
http://www.garlic.com/~lynn/2000.html#86 Ux's good points.
http://www.garlic.com/~lynn/2000f.html#18 OT?
http://www.garlic.com/~lynn/2000g.html#51 > 512 byte disk blocks (was: 4M pages are a bad idea)
http://www.garlic.com/~lynn/2001.html#54 FBA History Question (was: RE: What's the meaning of track overfl ow?)
http://www.garlic.com/~lynn/2001g.html#32 Did AT&T offer Unix to Digital Equipment in the 70s?
http://www.garlic.com/~lynn/2002.html#5 index searching
http://www.garlic.com/~lynn/2002.html#10 index searching
http://www.garlic.com/~lynn/2002g.html#13 Secure Device Drivers
http://www.garlic.com/~lynn/2002l.html#47 Do any architectures use instruction count instead of timer
http://www.garlic.com/~lynn/2003.html#15 vax6k.openecs.org rebirth
http://www.garlic.com/~lynn/2003c.html#48 "average" DASD Blocksize
http://www.garlic.com/~lynn/2003m.html#56 model 91/CRJE and IKJLEW
http://www.garlic.com/~lynn/2004d.html#65 System/360 40 years old today
http://www.garlic.com/~lynn/2004g.html#15 Infiniband - practicalities for small clusters

misc past threads about ckd multi-track operations:
http://www.garlic.com/~lynn/93.html#29 Log Structured filesystems -- think twice
http://www.garlic.com/~lynn/94.html#35 mainframe CKD disks & PDS files (looong... warning)
http://www.garlic.com/~lynn/2000c.html#34 What level of computer is needed for a computer to Love?
http://www.garlic.com/~lynn/2000f.html#19 OT?
http://www.garlic.com/~lynn/2000f.html#42 IBM 3340 help
http://www.garlic.com/~lynn/2000g.html#52 > 512 byte disk blocks (was: 4M pages are a bad idea)
http://www.garlic.com/~lynn/2001c.html#17 database (or b-tree) page sizes
http://www.garlic.com/~lynn/2001d.html#60 VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)
http://www.garlic.com/~lynn/2001d.html#64 VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)
http://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
http://www.garlic.com/~lynn/2002.html#6 index searching
http://www.garlic.com/~lynn/2002d.html#22 DASD response times
http://www.garlic.com/~lynn/2002f.html#8 Is AMD doing an Intel?
http://www.garlic.com/~lynn/2002l.html#49 Do any architectures use instruction count instead of timer
http://www.garlic.com/~lynn/2002n.html#50 EXCP
http://www.garlic.com/~lynn/2002o.html#46 Question about hard disk scheduling algorithms
http://www.garlic.com/~lynn/2003b.html#22 360/370 disk drives
http://www.garlic.com/~lynn/2003f.html#51 inter-block gaps on DASD tracks
http://www.garlic.com/~lynn/2003k.html#28 Microkernels are not "all or nothing". Re: Multics Concepts For
http://www.garlic.com/~lynn/2003k.html#37 Microkernels are not "all or nothing". Re: Multics Concepts For
http://www.garlic.com/~lynn/2003o.html#64 1teraflops cell processor possible?
http://www.garlic.com/~lynn/2004d.html#63 System/360 40 years old today
http://www.garlic.com/~lynn/2004e.html#42 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004g.html#11 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004g.html#51 Channel busy without less I/O

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

"Perfect" or "Provable" security both crypto and non-crypto?

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "Perfect" or "Provable" security both crypto and non-crypto?
Newsgroups: sci.crypt
Date: Sun, 12 Sep 2004 17:54:25 -0600
Undisclosed writes:
do you think there could ever be a practical computer that is proof against all known virus types? I'm trying to read Bishop's "Comp Sec Art and Science" book now, and he seems to imply that all computers that protect against viruses in a deep thorough way are also totally useless for almost all tasks since they overly restrict a computer's flexibility.

two years ago there was some claim that 1/3rd of the exploits had been buffer overflows (frequently related to c-programming implicit length handling), 1/3rd virus/trojans, and 1/3rd social engineering. i believe the ratios have changed ... in part because of the large increase in email-based attacks (especially social engineering).

some amount of the virus/trojans have been infrastructures that will automatically execute stuff arriving over the network. that paradigm has been known to be a vulnerability for 30 years or more ... and there have been some number of systems that preclude such capability.

the larger scope is granting some level of trusted execution privileges to suspicious code ... either because of automated execution facilities or through social engineering ... convincing the end user to grant the privileges.

two contributing factors have been

1) system infrastructure that was designed to operate in a stand-alone environment (say the kitchen table where the owner loaded games) or at least in a non-hostile, non-adversary environment

2) system infrastructures that were dependent on knowledgable staff to make decisions about what executables were enabled and not-enabled

some combination of these two factors was introduced into internet-connected environments that are owned and operated by unskilled owners.

one could make the comparison with automobile paradigm ... where the automobile owner is held accountable for maintaining and operating the vehicle in a safe & prudent manner (even if the automobile owner has absolutely no technical skill with regard to automobile mechanics).

are automobiles perfect and have provable security?

there is the multics paper from a couple years ago ... that observed that multics had none of the most common (technical) exploit characteristics (presumably even with trained staff there might be periodic lapses involving social engineering exploits).

couple random refs:
http://www.garlic.com/~lynn/2002e.html#47 Multics_Security
http://www.garlic.com/~lynn/2002l.html#42 Thirty Years Later: Lessons from the Multics Security Evaluation
http://www.garlic.com/~lynn/2002l.html#44 Thirty Years Later: Lessons from the Multics Security Evaluation
http://www.garlic.com/~lynn/2002l.html#45 Thirty Years Later: Lessons from the Multics Security Evaluation
http://www.garlic.com/~lynn/2004j.html#41 Vintage computers are better than modern crap !

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Is the solution FBA was Re: FW: Looking for Disk Calc

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is the solution FBA was Re: FW: Looking for Disk Calc
program/Exec
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 12 Sep 2004 19:08:34 -0600
bblack@ibm-main.lst (Bruce Black) writes:
I haven't worked with VM or VSE for over 25 years, but I believe that both opsys already supported FBA disks. IBM had channel attached FBA disks a long time ago (the 3370 was the last one). I know that EMC Symmetrix can still emulate them and perhaps other vendors as well. So the FBA support was already there, only the SCSI/FIBER support had to be added.

But OS systems have never supported FBA disks and it will be difficult to do because of the dependence on VTOCs, etc. But not impossible, if there is a good business case for IBM to develop this support.


basically VM (and cms filesystem) simulated fixed block architecture on ckd disks from their original implementation in the mid-60s .... they never used ckd disks as count-key-data architecture ... always setting up the format so they could be treated as logical fixed-block.

the one exception was the multi-system, loosely-coupled support originally implemented for the hone system
http://www.garlic.com/~lynn/subtopic.html#hone

the hone system was the field, sales, and marketing support platform .... that was used world-wide .... for things like configurators ... i.e. salesman entering the customer specifications ... which the configurators then translated into system order specifications.

basically each pack had a use-map of the related mini-disk semantics .... and all systems in the multi-system complex would use a CKD ccw sequence to simulate a logical compare&swap operation against the pack use-map (w/o requiring reserve/release) to update the current state .... allowing the propagation of minidisk access rules across all systems in a multi-system, loosely-coupled complex.
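the logical compare&swap can be sketched as a retry loop (a loose model with hypothetical names, not the actual hone code; the real mechanism was a CKD channel program ... a search on the expected use-map contents chained to a write, so a concurrent update makes the search miss and the system re-reads and retries):

```python
import threading

class UseMapRecord:
    """Stands in for the shared use-map record on a pack."""
    def __init__(self, contents):
        self.contents = contents
        self._lock = threading.Lock()  # models the per-record atomicity of the channel

    def ccw_compare_and_swap(self, expected, new):
        """Search on `expected`, chained write of `new`.
        Returns True only if the search matched and the write happened."""
        with self._lock:
            if self.contents == expected:
                self.contents = new
                return True
            return False  # search miss: caller re-reads the use-map and retries

def update_use_map(record, transform):
    """Retry loop each system in the complex runs: read, transform, attempt swap."""
    while True:
        snapshot = record.contents
        if record.ccw_compare_and_swap(snapshot, transform(snapshot)):
            return
```

the point of the design is visible in the retry loop: no system ever holds the pack reserved while it thinks, so a slow or failed system can't block the others.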

as in the previous post ...... it wasn't really the technical difficulty for mvs that was the real issue.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Is the solution FBA was Re: FW: Looking for Disk Calc

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is the solution FBA was Re: FW: Looking for Disk Calc
program/Exec
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 13 Sep 2004 03:06:15 -0600
shmuel+ibm-main@ibm-main.lst (Shmuel Metz , Seymour J.) writes:
Every time someone says "I don't believe in theories" another theory dies. Everything that I've seen suggests that the decision was based on IBM internal politics rather than technical difficulty.

so over 20 years ago, i'm told that if i deliver fully tested and integrated mvs fba support, it will still cost $26m to ship.

previous pieces in thread:
http://www.garlic.com/~lynn/2004l.html#20 Is the solution FBA was Re: FW: Looking for Disk Calc
http://www.garlic.com/~lynn/2004l.html#22 Is the solution FBA was Re: FW: Looking for Disk Calc

so it comes down to business justification ROI for the $26m.

in a somewhat constrained business resource environment, the proposal competes with new $26m feature/function projects possibly claiming $500m gross in new business. the first order calculation for immediate new business for MVS FBA support is that the customer buys the same amount of disk ... it is just different disk (aka no net new business, i.e. no return on investment).

The MVS FBA support business case tends towards efficiency issues and reducing the long-term cost of doing business .... both for development and customers. that has always been a much harder case for demonstrating improvement to the business bottom line ... especially in competition with new feature/function projects promising direct ROI new business.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Is the solution FBA was Re: FW: Looking for Disk Calc

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is the solution FBA was Re: FW: Looking for Disk Calc
program/Exec
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 13 Sep 2004 09:18:18 -0600
bblack@ibm-main.lst (Bruce Black) writes:
I haven't worked with VM or VSE for over 25 years, but I believe that both opsys already supported FBA disks. IBM had channel attached FBA disks a long time ago (the 3370 was the last one). I know that EMC Symmetrix can still emulate them and perhaps other vendors as well. So the FBA support was already there, only the SCSI/FIBER support had to be added.

note also that the original FBA emulation of CKD was on florence/3375 .... i.e. 3370 FBA disk with emulated CKD built on top. now, i've been told that all CKD is done that way, there aren't any real CKD disks any more ... they are all FBA with CKD emulation built on top.

minor ref to all current CKD disks actually emulated on FBA ... as well as list of various products down thru the years:
http://www.garlic.com/~lynn/2003b.html#7 Disk drives as commodities. Was Re: Yamhill

in theory, if long ago and far away, FBA support had been deployed ... in addition to eliminating various kinds of CKD-related software development down thru the years ... it could have eliminated the need for all sorts of hardware CKD-emulation efforts.

misc previous post in thread
http://www.garlic.com/~lynn/2004l.html#18 Looking for Disk Calc program/Exec
http://www.garlic.com/~lynn/2004l.html#19 Looking for Disk Calc program/Exec (long)
http://www.garlic.com/~lynn/2004l.html#20 Is the solution FBA was Re: FW: Looking for Disk Calc
http://www.garlic.com/~lynn/2004l.html#22 Is the solution FBA was Re: FW: Looking for Disk Calc
http://www.garlic.com/~lynn/2004l.html#23 Is the solution FBA was Re: FW: Looking for Disk Calc

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

md5 algorithm

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: md5 algorithm
Newsgroups: sci.crypt
Date: Mon, 13 Sep 2004 09:25:53 -0600
during crypto 2004 ... i was asked if i had RFC dependency tree for (md5) RFCs at
http://www.garlic.com/~lynn/rfcietff.htm

turns out the md5 rfc isn't even a standard or on the standards track ... just an informational rfc. i had been meaning to add references to the index for some time ... but even that wouldn't have caught all rfcs that make reference to md5. so i both did the scan stuff to recognize the references section in RFCs ... and pull out references to other RFCs ... as well as specifically scanned for "md5" and built a separate reference for all RFCs that make any reference at all to "md5".
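a rough sketch of the sort of scan described ... recognize each RFC's references section, pull out RFC numbers, and separately flag any mention of "md5". the file names and layout here are assumptions for illustration, not the actual tooling behind rfcietff.htm:

```python
import re
from pathlib import Path

# matches citations like "RFC 1321", "RFC1321", "RFC-1321"
RFC_REF = re.compile(r'\bRFC[- ]?(\d+)', re.IGNORECASE)

def scan_rfc(path):
    """Return (set of RFC numbers cited in the references section,
    whether the RFC mentions md5 anywhere at all)."""
    text = path.read_text(errors="replace")
    refs = set()
    # crude recognition of a "References" (or "9. References") heading line
    m = re.search(r'^\s*(?:\d+\.?\s*)?References\s*$', text,
                  re.IGNORECASE | re.MULTILINE)
    if m:
        refs = {int(n) for n in RFC_REF.findall(text[m.end():])}
    mentions_md5 = 'md5' in text.lower()
    return refs, mentions_md5

def build_md5_index(rfc_dir):
    """List every rfcNNNN.txt in rfc_dir that mentions md5."""
    md5_rfcs = []
    for path in sorted(Path(rfc_dir).glob('rfc*.txt')):
        _, mentions = scan_rfc(path)
        if mentions:
            md5_rfcs.append(path.stem)
    return md5_rfcs
```

the separate full-text "md5" scan is what catches RFCs that use md5 without ever citing RFC 1321 in their references.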

so at
http://www.garlic.com/~lynn/rfcietff.htm

in the "Sections:" section, go to the bottom of the list of sections ... and click on "Special list of RFC's referring to MD5".

... 1321, "The MD5 Message-Digest Algorithm" ... is the MD5 informational RFC.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

CTSS source online

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CTSS source online
Newsgroups: alt.folklore.computers
Date: Mon, 13 Sep 2004 12:55:23 -0600
Dave Daniels writes:
For those of us who know nothing about CTSS, can anyone give a brief description of it, for example, what were the main ideas behind it, what the user environment was like, what it was like to use, its strengths, weaknesses and so forth?

melinda has some amount of discussion of ctss in her vm history paper ... somewhat tracing the common heritage of vm & multics back to ctss. copies of the paper can be found in various formats (ps, pdf, listing, etc) at:
http://www.leeandmelindavarian.com/Melinda/

i had previously referenced an early version of the paper, posting to vmshare and not available in the vmshare archives, recent ref:
http://www.garlic.com/~lynn/2004k.html#49

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Shipwrecks

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Tue, 14 Sep 2004 09:14:34 -0600
K Williams writes:
In my thirty years at IBM, I never went to the machine[*]. At first there were terminal rooms (and the Senior Engineer across the hall from me let me use his office/phone and 2741/acoustic coupler). It was about 1976-78 when they started dispersing terminals into offices. I guess the PHB's noticed that they could save floor-space. ;-) Hardware types generally had a 3277GA (24x80 CRTs with added Tektronix storage graphics tube), while the programmers had just the 3277.

there was a period in that time-frame where it required vp-level approval to get a terminal. we did a business case showing that the 3yr amortized cost of a terminal was about the same as a business phone ... and asked when it had ever required vp-level approval to get a business phone on an employee's desk.

of course cambridge
http://www.garlic.com/~lynn/subtopic.html#545tech

had a 2741 at everybody's desk ... and i got a home "2741" in march of 1970.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Shipwrecks

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Tue, 14 Sep 2004 09:23:18 -0600
of course, then there was Jim Gray's MIPENVY as he was departing to tandem. random mip envy refs:
http://www.garlic.com/~lynn/2001g.html#7 New IBM history book out
http://www.garlic.com/~lynn/2002k.html#39 Vnet : Unbelievable
http://www.garlic.com/~lynn/2002o.html#73 They Got Mail: Not-So-Fond Farewells
http://www.garlic.com/~lynn/2002o.html#74 They Got Mail: Not-So-Fond Farewells
http://www.garlic.com/~lynn/2002o.html#75 They Got Mail: Not-So-Fond Farewells
http://www.garlic.com/~lynn/2004c.html#15 If there had been no MS-DOS

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

FW: Looking for Disk Calc program/Exec

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FW: Looking for Disk Calc program/Exec
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 14 Sep 2004 12:57:18 -0600
ronhawkins@ibm-main.lst (Ron and Jenny Hawkins) writes:
Eric,

Since Cache was introduced, the practice of clustering datasets around the VTOC is probably more likely to increase seeking - exactly what it was designed to reduce. The datasets closest to the VTOC are usually the busiest, and they will be fetched from cache.

This means that those datasets that are not quite busy enough to be in cache all the time, are divided by the disk space used for the busiest datasets. Ipso Facto you have increased seek.

If you want to order your datasets, then the practice on cached DASD is VTOC at the front, and then arrange your datasets in descending order of IO activity.

If you really want to use the old method, then activity should be based on SSCH minus Read hits. This will be pretty close to the actual IO rate on disk.

More importantly you should be using something like Cruise Control or Volume migrator to put busy volumes close together in the array groups. Variable Bit mapping on modern disk drives means you have fast and faster areas on the spindles and some workloads can and do take advantage of this.

Ron


as an undergraduate, I had started carefully constructing system disk packs with os/mft11 ... basically i took apart the stage2 output (from the stage1 sysgen) and re-arranged the steps ... and the move/copy statements within steps ... to control both the position of the datasets and of the members within PDS's.

for the typical jobstream at the university ... this careful positioning sped up thruput by a factor of approx. three. i gave a number of presentations on the effort at both share & guide (as well as on other activities involving re-writing major sections of cp/67, writing terminal support and cms editor syntax for HASP ... for early crje, misc. other things).

however, the vtoc position was fixed at the front of the pack.

it was release 15/16 ... that first allowed specifying the cylinder position for vtoc ....

the big issue with disk caching started with ironwood/sheriff 3880 caches:
http://www.garlic.com/~lynn/2003b.html#7 Disk drives as commodities. Was Re: Yamhill

there was some slant in the original announcement of the 3880-13/sheriff full-track cache. they ran an application that had 10 4k records formatted per track ... and claimed a 90% cache hit rate. it turns out that the application was sequentially reading a file w/o any overlap or blocking; so the first request for a record on a track was a miss, and then the next 9 requests were treated as "hits". if the application had specified full-track blocking ... the 3880-13/sheriff hit rate would have dropped to zero percent.
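the announcement arithmetic can be replayed with a toy full-track cache (a sketch of the hit-rate accounting only, not a model of actual 3880-13 internals):

```python
RECORDS_PER_TRACK = 10   # 10 4k records formatted per track, as in the announcement

def hit_rate(tracks, records_per_io):
    """Sequential read of `tracks` tracks through a full-track cache,
    issuing one I/O per `records_per_io` records."""
    cached = set()       # tracks currently resident in the cache
    hits = ios = 0
    for track in range(tracks):
        for _ in range(0, RECORDS_PER_TRACK, records_per_io):
            ios += 1
            if track in cached:
                hits += 1
            else:
                cached.add(track)   # a miss stages the whole track into cache
    return hits / ios

print(hit_rate(100, 1))                   # unblocked 4k reads: 0.9
print(hit_rate(100, RECORDS_PER_TRACK))   # full-track blocking: 0.0
```

one miss plus nine "hits" per track is exactly the claimed 90%; with full-track blocking there is one I/O per track and every one is a miss.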

misc. references to carefully crafted stage2 sysgen for optimal placement and arm motion:
http://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
http://www.garlic.com/~lynn/97.html#22 Pre S/360 IBM Operating Systems?
http://www.garlic.com/~lynn/97.html#28 IA64 Self Virtualizable?
http://www.garlic.com/~lynn/98.html#2 CP-67 (was IBM 360 DOS (was Is Win95 without DOS...))
http://www.garlic.com/~lynn/98.html#21 Reviving the OS/360 thread (Questions about OS/360)
http://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2000d.html#50 Navy orders supercomputer
http://www.garlic.com/~lynn/2001d.html#48 VTOC position
http://www.garlic.com/~lynn/2001h.html#12 checking some myths.
http://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
http://www.garlic.com/~lynn/2001k.html#37 Is anybody out there still writting BAL 370.
http://www.garlic.com/~lynn/2001l.html#39 is this correct ? OS/360 became MVS and MVS >> OS/390
http://www.garlic.com/~lynn/2002.html#52 Microcode?
http://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
http://www.garlic.com/~lynn/2002m.html#3 The problem with installable operating systems
http://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
http://www.garlic.com/~lynn/2003c.html#51 HASP assembly: What the heck is an MVT ABEND 422?
http://www.garlic.com/~lynn/2004c.html#59 real multi-tasking, multi-programming
http://www.garlic.com/~lynn/2004k.html#41 Vintage computers are better than modern crap !

misc. refs to outboard controller caches:
http://www.garlic.com/~lynn/93.html#12 managing large amounts of vm
http://www.garlic.com/~lynn/93.html#13 managing large amounts of vm
http://www.garlic.com/~lynn/94.html#9 talk to your I/O cache
http://www.garlic.com/~lynn/2000d.html#13 4341 was "Is a VAX a mainframe?"
http://www.garlic.com/~lynn/2001i.html#42 Question re: Size of Swap File
http://www.garlic.com/~lynn/2001l.html#54 mainframe question
http://www.garlic.com/~lynn/2001l.html#55 mainframe question
http://www.garlic.com/~lynn/2001n.html#78 Swap partition no bigger than 128MB?????
http://www.garlic.com/~lynn/2002b.html#10 hollow files in unix filesystems?
http://www.garlic.com/~lynn/2002b.html#16 hollow files in unix filesystems?
http://www.garlic.com/~lynn/2002b.html#19 hollow files in unix filesystems?
http://www.garlic.com/~lynn/2002b.html#20 index searching
http://www.garlic.com/~lynn/2002d.html#55 Storage Virtualization
http://www.garlic.com/~lynn/2002e.html#11 What are some impressive page rates?
http://www.garlic.com/~lynn/2002f.html#20 Blade architectures
http://www.garlic.com/~lynn/2002f.html#26 Blade architectures
http://www.garlic.com/~lynn/2002o.html#52 ''Detrimental'' Disk Allocation
http://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?
http://www.garlic.com/~lynn/2003i.html#72 A few Z990 Gee-Wiz stats
http://www.garlic.com/~lynn/2003o.html#62 1teraflops cell processor possible?
http://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004g.html#17 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004g.html#18 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004g.html#20 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004h.html#19 fast check for binary zeroes in memory
http://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders?

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

I am an ageing techy, expert on everything. Let me explain the

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: I am an ageing techy, expert on everything. Let me explain the
Middle East
Newsgroups: alt.folklore.computers
Date: Wed, 15 Sep 2004 07:55:16 -0600
jmfbahciv writes:
Yes, I know that. FYI, the SS funding problem has to do with the fact this is a Ponzi scheme. Fewer and fewer people will have to support more and more retirees. The SS program was designed to only service a few people, not the complete population. Compare life expectancy back then and now.

wasn't there some number that when social security started there were something like 40-50 people paying in for every person receiving social security (i.e. the pay-in per person was approx. 2 percent of what somebody was receiving)??

the projection is that possibly by 2040(?) there will be something like 2-3 people paying in for every person receiving social security (i.e. something like the pay-in per person will be approximately 30-50% of what somebody is receiving).

part of the solution has been to keep raising the starting age at which somebody can receive social security ... hopefully reducing the length of time receiving social security and thereby the percentage of people receiving it ... and possibly incenting people to work longer, thereby increasing the percentage of people paying in; aka trying to keep the ratio of people paying into social security to people receiving it higher than 2:1 to 3:1 ... maybe as high as 5:1.

a sort of ancillary observation is that, to a first approximation, if the SS ratio is 2:1 and the benefits are as high as the salary of the people paying in ... then the social security rate would have to rise to 50 percent (rather than the current approx. 15 percent).
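that first-order approximation is a one-liner: required payroll rate = (benefit as a fraction of the average wage) / (workers per beneficiary). the function name is mine, not from any SSA source:

```python
# first-order check of the worker/beneficiary arithmetic in the post:
# each beneficiary's payment is spread across the workers paying in.

def required_rate(workers_per_beneficiary, benefit_fraction_of_wage):
    return benefit_fraction_of_wage / workers_per_beneficiary

print(required_rate(2, 1.0))    # 2:1 ratio, benefits equal to wages -> 0.5 (50%)
print(required_rate(5.1, 1.0))  # the 1960 ratio of 5.1:1 -> roughly 20%
```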

past social security threads
http://www.garlic.com/~lynn/2004b.html#9 A hundred subjects: 64-bit OS2/eCs, Innotek Products,
http://www.garlic.com/~lynn/2004b.html#21 A hundred subjects: 64-bit OS2/eCs, Innotek Products,
http://www.garlic.com/~lynn/2004b.html#42 The SOB that helped IT jobs move to India is dead!
http://www.garlic.com/~lynn/2004d.html#14 The SOB that helped IT jobs move to India is dead!

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Shipwrecks

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Wed, 15 Sep 2004 08:12:17 -0600
jmfbahciv writes:
<ahem> ;-). However, in the case of developers, if one really deals with fiscal reality, those cost estimates should have been turned into piles and piles and piles of money.

all sorts of 2nd order detail. since ibm also produced the terminals, you actually wouldn't have to use the retail list price; if it was market downturn and customers weren't buying all the terminals anyway, then the terminals might otherwise be in warehouse and so using the terminals might not actually hit the bottom line at all.

giving all the developers their own terminals and online computing resources ... makes them more productive ... which means that they should be producing more/faster products that can be sold and earn revenue. the net increase in productivity is much larger than the 3yr amortized "list" price of the terminal (and it turns out 3yr amortized list price was overdoing it ... frequently those terminals remained in service for ten years).

jim's mipenvy memo raised the issue that it was resources in general that make people more productive ... general computing resources, as well as the tools that make those computers more useful, online communication, etc.

the counter was that if they all had their own terminals ... then there would need to be more computing resources to enable the increase in productivity. so the counter-counter was that if computing resources were the productivity-limiting factor .... then the ROI on computing was possibly constant .... and the issue was that using less computing per unit time lengthened product delivery time ... and also raised employee cost per product. increasing the effective computing shortened product delivery time and reduced employee cost (fewer person-weeks to develop a product).

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Shipwrecks

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Wed, 15 Sep 2004 08:30:07 -0600
K Williams writes:
Yikes! It wasn't that bad in P'ok (though the phones went with offices, not employees). A VP approval may have been needed for an office terminal up until perhaps '76/'77, but not after that. Indeed we had an assembly line in our lab for 3277GAs and 5100s. Our techs were building 3277GA attachments for the entire site and a coop got ahold of the BOM for a 5100 and started ordering every part. We made something like 100 5100s before we got caught. ;-)

so we pointed out that the 3yr amortized list price of a terminal was about the same as a business phone ... which nobody questioned.

however, there was another business issue (if you weren't building your own) .... all expenditures had to be included in the yearly budget .... so the whole corporation had to plan ahead for terminal deliveries ... so there were some startup discontinuities.

there was a point during the transition to uptake of terminals on every desk ... where the yearly predictions fell behind the actual uptake (near the start of the steep portion of the uptake curve).

jim and I had been sitting around one friday night discussing what would help with executive and middle management uptake of terminals on their desks (since when that happened .... it would quickly follow that terminals on desks were accepted items). all this work had been going on with the internal network and email in the 70s ... misc internal network refs:
http://www.garlic.com/~lynn/subnetwork.html#internalnet

so the two silver bullets we came up with were email and the online phone book (which started out with phone numbers and a smattering of email addresses ... but appearance of email addresses went fairly quickly)

well, to make a long story short ... there was this point when the chairman started sending email, so all of his direct reports needed terminals for email, and then all of their direct reports needed terminals for email, etc. there was a six month period ... where nearly all of the terminals allocated for internal developers got vacuumed up (pre-empted deliveries) by executives and middle management discovering that the most important thing was that they have a terminal ... because all the other executives were getting terminals.

misc online phone book posts
http://www.garlic.com/~lynn/2000g.html#14 IBM's mess (was: Re: What the hell is an MSX?)
http://www.garlic.com/~lynn/2001j.html#29 Title Inflation
http://www.garlic.com/~lynn/2001j.html#30 Title Inflation
http://www.garlic.com/~lynn/2002e.html#33 Mainframers: Take back the light (spotlight, that is)
http://www.garlic.com/~lynn/2003b.html#45 hyperblock drift, was filesystem structure (long warning)
http://www.garlic.com/~lynn/2004c.html#0 A POX on you, Dennis Ritchie!!!

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

I am an ageing techy, expert on everything. Let me explain

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: I am an ageing techy, expert on everything. Let me explain
the Middle East
Newsgroups: alt.folklore.computers
Date: Wed, 15 Sep 2004 12:41:33 -0600
CBFalconer writes:
I'm not about to do it, but the fundamental calculations can be done with no more than a mortality table and an interest rate. If interested, look up 'actuary' or 'actuarial'.

as i mentioned before ... we've had this thread before ... the recent post:
http://www.garlic.com/~lynn/2004l.html#30

there was references to previous posts ... which makes references to calculations having been performed:
http://www.garlic.com/~lynn/2004b.html#9

in the above post, i had extracted pieces from a referenced ssa document which can be found on the www.ssab.gov web site, repeating the extraction from the previous post:
in 2001, there was $604 billion paid into SS and $439 billion was being paid out. SS accounts for 24 percent of total Fed. gov. spending and 23 percent of total Fed. gov. receipts.

by 2030, 20 percent of the population is expected to be age 65 or over (compared to 12 percent in 2001).

chart 5 shows 5.1 workers per SS beneficiary in 1960, dropping to 1.9 workers per beneficiary by 2075 (doesn't show SS starting out with something like 40 workers per beneficiary)


--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

I am an ageing techy, expert on everything. Let me explain the

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: I am an ageing techy, expert on everything. Let me explain the
Middle East
Newsgroups: alt.folklore.computers
Date: Wed, 15 Sep 2004 12:57:28 -0600
in some mailing list, somebody posted something about having discovered boyd's biography. attached are three of my comments on the subject.

===============================
part 1 of 3
misc. listing from amazon:

The Mind of War: John Boyd and American Security
http://www.amazon.com/exec/obidos/tg/detail/-/158834178X/

Boyd : The Fighter Pilot Who Changed the Art of War
http://www.amazon.com/exec/obidos/tg/detail/-/0316796883/qid=1085604271/sr=1-2/

A Swift, Elusive Sword: What if Sun Tzu and John Boyd Did a National Defense Review?
http://www.amazon.com/exec/obidos/tg/detail/-/1932019014/qid=1084708040/sr=8-1/

in gulf war I ... they were supposed to roll them all up .... but somebody decided not to follow thru and let them slip away.

lots of my boyd references &/or stories
http://www.garlic.com/~lynn/subboyd.html#boyd

lots of other boyd references around the web
http://www.garlic.com/~lynn/subboyd.html#boyd2


=================================
part 2 of 3

... oops sorry for the finger slips ... this mail client doesn't handle sub-article replies at all.

i had the privilege of sponsoring his talk a number of times in the early 80s ... and still have stacks of his presentations from those talks. i got to see the organic design for command and control talk grow from relatively short to a couple hrs.

a posting on a business subject he discussed ... that i don't think is mentioned in his papers
http://www.garlic.com/~lynn/2004l.html#11

basically there was an issue of the large organization rigid top-down infrastructure being taught in the army during ww2 to a lot of young impressionable men .... and it started to show up in large commercial organizations in the 60s & 70s as these men took on executive positions. it is somewhat an underlying premise behind the organic design for command and control talk

it wasn't that people hadn't heard of him ... they just didn't pay attention ... there was a us news & world report article during desert storm that talked about his fight to change how america fights .... and all his "jedi knights" (i.e. the young crop of majors and colonels that had come under his influence).

there was an underlying theme that he was a maverick and there were gobs of people that didn't want to see him recognized (he had lots of those stories ... that also don't show up in his papers).

i keep trying to get down to see the marine museum .... where his papers have been donated .... and some number of them scanned and online. i have some specific early 80s hardcopy versions that aren't listed in the inventory. one of the reasons that i sponsored his talk at a large commercial computer corporation was, in part, because of the business connections.

there is a story that for one of the talks .... i wanted corporate employee education to sponsor it. at first they were agreeable ... but then after getting more detailed information ... they changed their mind and declined. they specifically mentioned that it would be more appropriate for a more targeted audience involving business planners and forecasting and the competitive analysis people .... and not for general employees.

=========================================
part 3/3
quoting what they actually said would be talking out of school, now wouldn't it?

john was very expressive when he talked. for presentations, his foils were/are "black" with text .... and he could be very animated when talking.

sitting down, talking to him, one-on-one ... could be tiring .... he could have several threads going simultaneously and arbitrarily switch from one thread to another ... with little or no cues ... and his hands would be in motion (as if he was practicing OODA-loops and juggling several facets in real time).

the biography mentions discussions where he is in your face, punctuating statements with stabs of his cigar ... i don't remember such simple, single-threaded conversations .... although his presentations tended to follow what was on the screen.

i remember one-on-one conversations, trying to track all the different threads that were being discussed simultaneously ... and attempting to make replies within the appropriate context


--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

I am an ageing techy, expert on everything. Let me explain

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: I am an ageing techy, expert on everything. Let me explain
the Middle East
Newsgroups: alt.folklore.computers
Date: Wed, 15 Sep 2004 14:33:50 -0600
CBFalconer writes:
I'm not about to do it, but the fundamental calculations can be done with no more than a mortality table and an interest rate. If interested, look up 'actuary' or 'actuarial'.

current rate is 15 percent ... which is close to 6:1 ratio; aka if you are self-employed ... you pay the full amount in your tax return; if you aren't self-employed ... then it is sort of disguised with only half of it showing up on the employee tax return ... and the other half paid behind the scenes by the employer. whether it is included in the employee tax return or not ... the employer still has to figure it as part of employee costs.

while SS has been a straight pay-as-you-go payment system .... the past couple years they've somewhat inflated the annual collections to be greater than the annual benefits; supposedly they are trying to accumulate a little surplus to somewhat cover some anticipated future shortfalls (see the previously referenced SS report for the details). however, in no way is SS a real "retirement" plan ... where your future payouts come out of some account that you have been paying into. another interesting thing is that in the past several years, where SS has been adjusted to have a surplus (annual collections exceeding annual benefits) ... there have been some federal budget reports where the annual SS surplus is added to the total annual federal budget revenue.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

FW: Looking for Disk Calc program/Exec

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FW: Looking for Disk Calc program/Exec
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 15 Sep 2004 21:40:31 -0600
ronhawkins@ibm-main.lst (Ron and Jenny Hawkins) writes:
VTOC goes at the front so you have a larger contiguous extent for allocation. You could put the VTOC at the end of the volume. If a VTOC is active enough to be a busy file, then (a) it is probably accessed from cache; and (b) you have a more important problem to fix.

so, the no-cache, simplest scenario:

vtoc is busiest and goes at cylinder zero

then there are two cylinders of next-busiest, equally busy data at cylinders one and two. when the arm is not at the vtoc ... it is either at cylinder one or cylinder two ... and has to travel an avg. distance of 1.5 cylinders back to the vtoc. when it is at the vtoc and has to move from the vtoc, it has to travel an avg. of 1.5 cylinders away.

vtoc is busiest and goes at the middle cylinder N. then there are the next two cylinders of next-busiest, equally busy data at cylinders N-1 and N+1. when the arm is not at the vtoc ... it is at either cylinder N-1 or cylinder N+1 ... and has to travel an avg. distance of 1 cylinder back to the vtoc. when it is at the vtoc and has to move from the vtoc, it has to travel an avg. of 1 cylinder away.

so w/o a cache the best strategy is to put the busiest data in the center and place the other data radiating out from the center in order of activity, placement proportional to disk access frequency.
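the 1.5-cylinder vs 1-cylinder averages above can be checked with a few lines (a sketch; the cylinder numbers just follow the two scenarios in the text, with N=100 standing in for the middle cylinder):

```python
# Average arm travel between the VTOC and two equally busy data cylinders.

def avg_seek(vtoc, data_cyls):
    """Mean distance from the data cylinders back to the VTOC
    (the same as the mean distance moving away from it)."""
    return sum(abs(c - vtoc) for c in data_cyls) / len(data_cyls)

# vtoc at cylinder zero, hot data at cylinders one and two:
front = avg_seek(0, [1, 2])        # (1 + 2) / 2 = 1.5

# vtoc at middle cylinder N, hot data at N-1 and N+1 (N = 100 here):
middle = avg_seek(100, [99, 101])  # (1 + 1) / 2 = 1.0

print(front, middle)
```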

the issue with large caches ... is that high activity data is maintained in the cache ... and therefore high activity data doesn't translate into high activity disk access. placing the highest activity data in the center ... with a large cache ... creates a disk access dead zone in the middle of the pack. in the radiating-out-from-the-center scenario ... the arm is travelling back and forth across the central dead zone (represented by the highest activity data resident in cache, where no physical disk arm activity is actually required).

when it is not possible to arrange data on disk by actual physical arm access ... possibly knowing just data access patterns and possibly not being able to predict cache residency ... then the one-way allocation strategy is used to avoid having large (central) dead zone areas that the arm has to continually travel across. cache residency would tend to load from the highest frequency data arranged from the start of the pack .... and the arm avoids that dead zone altogether ... concentrating on the boundary fringe of the highest-used data not in cache, in the direction opposite the dead zone (i.e. data tending to be resident in cache) .... as opposed to having to constantly travel across the dead zone to access data on both sides.

the trick in the data-access-frequency order allocation is to reduce the avg. arm travel distance. in the no-cache scenario ... data-access-frequency ordering from the center minimizes the arm travel distance. in a large cache environment, such a strategy can have the highest accessed data loaded in cache .... so there is little or no need for physical arm access to that data. with a purely center-out frequency allocation strategy ... and a large central dead zone because of large cache residency ... the avg. arm travel distance is increased by having to constantly travel back and forth across the dead zone (to data on either side).

a distinction is that in the early non-cached disk models ... there is a close correspondence between data access frequency and disk arm position access frequency .... which is no longer a valid assumption in a large cache environment.
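the trade-off can be sketched with a toy simulation (the access skew, pack size, and cache size here are all invented for illustration, not measured data): with no cache, center-out placement should minimize avg. arm travel; once a cache absorbs the hottest data, the one-way layout avoids shuttling across the central dead zone.

```python
# Toy simulation: average arm travel for center-out vs one-way
# (frequency-ordered from the front) allocation, with and without a
# cache absorbing accesses to the hottest data.
import random

CYLS = 100  # cylinders on the toy pack

def center_out(rank):
    """Frequency rank 0 = busiest; alternate outward from the middle."""
    mid = CYLS // 2
    offset = (rank + 1) // 2
    return mid + offset if rank % 2 else mid - offset

def one_way(rank):
    """Busiest data at cylinder 0, radiating one way toward the end."""
    return rank

def avg_travel(place, cached_ranks, trials=100_000, seed=1):
    """Mean arm movement per physical (non-cached) access."""
    rng = random.Random(seed)
    pos, total, moves = CYLS // 2, 0, 0
    for _ in range(trials):
        # heavily skewed popularity: low ranks hit far more often
        rank = min(int(rng.paretovariate(1.2)) - 1, CYLS - 1)
        if rank < cached_ranks:
            continue  # satisfied from cache; the arm doesn't move
        cyl = place(rank)
        total += abs(cyl - pos)
        moves += 1
        pos = cyl
    return total / moves

for cached in (0, 20):
    print("cached ranks:", cached,
          "center-out:", round(avg_travel(center_out, cached), 1),
          "one-way:", round(avg_travel(one_way, cached), 1))
```

the exact numbers depend entirely on the assumed skew; the point is that the ordering flips once the hottest data stops generating physical arm motion.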

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

I am an ageing techy, expert on everything. Let me explain

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: I am an ageing techy, expert on everything. Let me explain
the Middle East
Newsgroups: alt.folklore.computers
Date: Thu, 16 Sep 2004 08:53:32 -0600
jmfbahciv writes:
Your precious Democrats did that robbing. What are you talking about? It is the retirees who are doing the robbing.

it is a problem with non-fully-funded retirement plans ... some number of companies appear to have taken to emulating SS, paying retirees out of the current operating budget .... as opposed to having fully funded retirement plans ... where money is set aside for current workers to cover their future retirement.

as mentioned in a similar thread in this newsgroup a couple months ago ... there was a report about companies (especially in the manufacturing sector) that don't have fully funded retirement plans ... meeting current retirement payments out of current revenue. there were several companies in the steel industry specifically mentioned that are looking at declaring bankruptcy (and/or dissolving) to get rid of their retirement obligations, since current retirement payments can be on the order of half of current revenue.

in the possibly boom decades of the 50s, 60s & 70s ... they were growing, and rather than set aside large sums of money for future retirement ... they could pay out the money in real-time salaries (and existing retirees were a small, then-current obligation).

several times in these threads ... that has given rise to the ponzi analogy ... current payouts are a small percentage of the current money coming in, and future payment obligations are dependent on an ever-increasing base providing the money.

in general, the ones at the start are getting paid significantly more money than they ever paid in ... and the ones in the middle appear to not want to change that equation ... preferring immediate compensation ... and side-stepping the issue of how much a fully funded plan would actually cost (aka a much higher percentage of their wage). the ones that operate such plans are always hailed by the early beneficiaries.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Actuarial facts

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Actuarial facts
Newsgroups: alt.folklore.computers
Date: Fri, 17 Sep 2004 09:19:44 -0600
K Williams writes:
Now you're in the right universe. Change "bothered by it" to "bothered by its sunset" and you're on your way to the truth. It was a dumb law, should never have been passed, and is now a was.

some may consider the "can-spam" law in the same genre .... it doesn't seem to have made any material reduction in the amount of spam ... and it somewhat defused getting a law passed that might actually affect spam.

the amount of spam that i was getting went up by at least a factor of four in the six months after the law went into effect ... compared to the period before.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

I am an ageing techy, expert on everything. Let me explain

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: I am an ageing techy, expert on everything. Let me explain
the Middle East
Newsgroups: alt.folklore.computers
Date: Fri, 17 Sep 2004 10:53:21 -0600
possibly totally unrelated but pension news item from today:
http://www.boston.com/business/technology/articles/2004/09/17/ibm_settles_part_of_giant_pension_lawsuit/

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

"Perfect" or "Provable" security both crypto and non-crypto?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "Perfect" or "Provable" security both crypto and non-crypto?
Newsgroups: sci.crypt
Date: Fri, 17 Sep 2004 11:05:17 -0600
"Roger Schlafly" writes:
I don't know what Doug had in mind, but there are lots of ways that buffer overruns can occur in any language.

Consider a program that reads from a data stream (such as a file or internet socket), and writes to another stream. It reads a particular data field, for which the specs say that it will be null-terminated and less than 64 bytes long. The program reads the data into a larger data structure, and ignores the 64-byte limit because it assumes that the null terminator will be there. Then all sorts of bad things can happen.

Such buffer overflow bugs can occur in Java or Perl or anything else, and such bugs are common. Those languages are a lot safer than C because a simple string copy is not going to blow the stack, but there can be other buffer overrun bugs.


but the assertion is that c programming conventions can increase the occurrence of such overrun bugs by one to two orders of magnitude.

the multics (written in pl/i) study claims that there was never such a problem in the multics system.
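the quoted scenario (a field spec'd as null-terminated and under 64 bytes) can be sketched even in a memory-safe language; the field layout and function names here are invented for illustration, and in c the same mistake overruns adjacent memory rather than just mis-parsing it:

```python
# A record field is specified as null-terminated and < 64 bytes.
# Trusting the terminator alone reads past the field when a malformed
# record omits it; enforcing the spec'd maximum bounds the damage.
FIELD_MAX = 64

def read_field_trusting(buf, off):
    """Scan for the NUL terminator with no length check (the bug)."""
    end = buf.index(0, off)          # may run well past the 64-byte field
    return bytes(buf[off:end])

def read_field_bounded(buf, off):
    """Honor the spec'd 64-byte maximum as well as the terminator."""
    window = buf[off:off + FIELD_MAX]
    end = window.find(0)
    if end < 0:
        raise ValueError("field not terminated within 64 bytes")
    return bytes(window[:end])

# Malformed record: a 64-byte field with no terminator, followed by
# adjacent data, with a zero byte only much later.
record = bytearray(b"A" * 64 + b"SECRET-ADJACENT-DATA" + b"\x00")

print(read_field_trusting(record, 0))  # silently swallows adjacent data
# read_field_bounded(record, 0) raises instead of overrunning
```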

previous post in thread ...
http://www.garlic.com/~lynn/2004l.html#21 "Perfect" or "Provable" security both crypto and non-crypto?

part of the issue is security proportional to risk .... if the risk is one hundred times greater ... then people might be inclined to pay more attention to it than to other security risks that have a significantly lower rate of occurrence.

some recent threads in other n.g. discussing the relation between programming language and predisposition to buffer overruns/overflows:
http://www.garlic.com/~lynn/2004j.html#37 Vintage computers are better than modern crap !
http://www.garlic.com/~lynn/2004j.html#38 Vintage computers are better than modern crap !
http://www.garlic.com/~lynn/2004j.html#58 Vintage computers are better than modern crap !
http://www.garlic.com/~lynn/2004k.html#2 Linguistic Determinism
http://www.garlic.com/~lynn/2004k.html#5 Losing colonies
http://www.garlic.com/~lynn/2004k.html#6 Losing colonies
http://www.garlic.com/~lynn/2004k.html#20 Vintage computers are better than modern crap !

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

"Perfect" or "Provable" security both crypto and non-crypto?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "Perfect" or "Provable" security both crypto and non-crypto?
Newsgroups: sci.crypt
Date: Fri, 17 Sep 2004 13:54:12 -0600
"Roger Schlafly" writes:
Among sloppy programmers, that is probably true. But you won't get bug-free code by re-training them all as Java coders.

no judgement at all about the attributes of the programmers .... just the general frequency of occurrence, w/o knowing anything at all about the programmers. possibly some analogy to seat belts and traffic deaths. seat belts don't seem to have an effect on traffic accidents ... but there appears to be some amount of data that they have an effect on traffic deaths (news report last night that seat belt use has hit 80 percent for the first time).

the issue was that if 1/3rd of all exploits are buffer overflows (it used to be the majority; the number of buffer overflows doesn't appear to have decreased .... it is that the number of virus and phishing exploits has exploded). question is, if a different programming paradigm could eliminate 99% of those exploits .... would it be worthwhile? it doesn't eliminate all the others .... but it still might be a worthwhile effort.
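the arithmetic behind "worthwhile" is simple enough to spell out (the fractions are the rough ones from the discussion, not measured data):

```python
# If roughly 1/3 of all exploits are buffer overflows, and a different
# programming paradigm eliminated 99% of those, the overall reduction:
overflow_share = 1 / 3   # rough fraction from the discussion
eliminated = 0.99        # hypothetical effectiveness

overall_reduction = overflow_share * eliminated
print(f"{overall_reduction:.1%} of all exploits")  # about a third
```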

somewhat an aside, supposedly social engineering exploits are another 1/3rd (and there isn't much that changes in programming paradigms could do to address those exploits).

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Acient FAA computers???

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Acient FAA computers???
Newsgroups: alt.folklore.computers
Date: Fri, 17 Sep 2004 22:03:54 -0600
Philip Nasadowski writes:
So, popular folklore has it that the FAA still has a bunch of ancient IBM and whatnot mainframes for the ATC system.

So, what's the real scoop - what's the oldest machine they've got out there.

And heck - while we're at it - what's the oldest mainframe out there still running in some capacity or another???


pieces of old threads:
http://www.garlic.com/~lynn/99.html#102 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
http://www.garlic.com/~lynn/99.html#103 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
http://www.garlic.com/~lynn/99.html#108 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
http://www.garlic.com/~lynn/2001h.html#15 IBM 9020 FAA/ATC Systems from 1960's
http://www.garlic.com/~lynn/2001h.html#17 IBM 9020 FAA/ATC Systems from 1960's
http://www.garlic.com/~lynn/2001h.html#71 IBM 9020 FAA/ATC Systems from 1960's
http://www.garlic.com/~lynn/2001i.html#2 Most complex instructions (was Re: IBM 9020 FAA/ATC Systems from 1960's)
http://www.garlic.com/~lynn/2001i.html#3 Most complex instructions (was Re: IBM 9020 FAA/ATC Systems from 1960's)
http://www.garlic.com/~lynn/2001i.html#14 IBM 9020 FAA/ATC Systems from 1960's
http://www.garlic.com/~lynn/2001i.html#15 IBM 9020 FAA/ATC Systems from 1960's

the guy that ran the program went on to be president of FSD ... and then left to form his own company. he is also the author of children's books under a pseudonym.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Actuarial facts

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Actuarial facts
Newsgroups: alt.folklore.computers
Date: Sat, 18 Sep 2004 08:44:42 -0600
CBFalconer writes:
Cars used in robberies do not necessarily leave their identification behind. A fired gun automatically does, in the form of cartridges and/or bullets. These are almost as identifiable as fingerprints, and connect to the gun in question. Without the actual database (including a recording of that "fingerprint") the tracking requires the actual gun. A suitable database would bypass that need. I believe the NRA is also opposed to any such database, which requires factory test firings and recordings from any weapon.

there was a program on last night ... about requiring permits to buy fertilizer .... adding identifying compounds to batches of fertilizer so you could fingerprint fertilizer, databases of fertilizer, etc. while not getting the media attention that gun control laws might have ... there has been some amount of serious attention paid to fertilizer control laws.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Shipwrecks

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Sat, 18 Sep 2004 09:17:50 -0600
Morten Reistad writes:
This is one thing we've had a (friendly) in-fight about at work lately. It seems that declarative languages don't count as programming in most people's minds. Report writer is a prime example of a declarative, not a procedural language.

You imply that you don't consider declarative programming "code"; what is it then?

Such declarative languages are all around us; and some are exceptionally powerful. Examples are sendmail's language, and asterisk's, and several of my own projects. When it has tests, variable substitution, iteration/looping constructs and control over I/O it is a language; not a config file. Awk is a hybrid form, as it can morph both ways.

But the declarative form seems to defy that it is defined as programming. The latest three projects of mine have had significant amounts of code in their internal language; and this has brought down code size by an order of magnitude. I do these things as a matter of routine on large projects now. The compiled code sees very little change because of this; most people just want to change surface structures.


and for some historical computer references other than rpg ... the nomad, ramis, focus genre
http://www.garlic.com/~lynn/2002i.html#64 Hercules and System/390 - do we need it?
http://www.garlic.com/~lynn/2002l.html#56 10 choices that were critical to the Net's success
http://www.garlic.com/~lynn/2003d.html#15 CA-RAMIS
http://www.garlic.com/~lynn/2003d.html#17 CA-RAMIS
http://www.garlic.com/~lynn/2003k.html#48 Who said DAT?
http://www.garlic.com/~lynn/2003m.html#33 MAD Programming Language
http://www.garlic.com/~lynn/2003n.html#12 Dreaming About Redesigning SQL
http://www.garlic.com/~lynn/2003n.html#15 Dreaming About Redesigning SQL
http://www.garlic.com/~lynn/2004e.html#15 Pre-relational, post-relational, 1968 CODASYL "Survey of Data Base Systems"
http://www.garlic.com/~lynn/2004j.html#52 Losing colonies

at about the time sql was being developed for system/r by sjr (ibm san jose research), qbe (query by example) was being developed at ykt (ibm watson/yorktown research) ... and quel work was going on up in berkeley.

some specific posts mentioning qbe
http://www.garlic.com/~lynn/2002e.html#44 SQL wildcard origins?
http://www.garlic.com/~lynn/2002o.html#70 Pismronunciation
http://www.garlic.com/~lynn/2003n.html#11 Dreaming About Redesigning SQL
http://www.garlic.com/~lynn/2003n.html#18 Dreaming About Redesigning SQL

lots of random system/r references
http://www.garlic.com/~lynn/submain.html#systemr

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

"Perfect" or "Provable" security both crypto and non-crypto?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "Perfect" or "Provable" security both crypto and non-crypto?
Newsgroups: sci.crypt
Date: Sat, 18 Sep 2004 09:02:38 -0600
Paul Leyland writes:
I'm not familiar with the US experience, but in the UK seat belt usage has been very high, well over 95%, for some years. It has certainly had an effect on traffic deaths. Deaths amongst motorists have fallen and deaths amongst pedestrians and cyclists have risen. It's not entirely clear how much of the change is due to seat belt usage and how much to other developments that have made motorists feel safer and, indeed, become safer but it is fairly clear that seat belt usage has made a change.

... for lots of drift ... there is some amount of variation across the regions and states .... in nyc one of the traffic death statistics is now pedestrian deaths that happen when a car is turning. in some cases, the absolute values for pedestrian deaths haven't changed ... but the relative percent has changed because of the fall in other categories.

the performance optimization corollary is that there is always some other bottleneck; eliminate the current one and there is another lurking behind it; however, the degree of the bottleneck may be decreasing.

in any case, the assertion is that some things might not represent perfect security solutions ... but there may be some issues where changes can make a significant statistical security difference (like the issue of implicit lengths in common c programming usage). and statistical security differences are then related to security proportional to risk, i.e. as in performance optimization ... shifting/changing what are the most important critical issue(s).

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Shipwrecks

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Sun, 19 Sep 2004 07:37:28 -0600
jmfbahciv writes:
One would think so but my one failure of finding something that had been lost was JMF's coffee cup. And, anybody who saw the inside of his cup would also wonder why someone would take it. But it did. I lost its scent on the first floor which meant that a marketing type stole it.

i had a number of cups at work, one given to me by one of my offspring from one of their stays at DLI in monterey ("we learn russian so you don't have to") ... the other cups weren't bothered ... but somebody managed to walk off with that one.

my brother used to be regional marketing rep for apple .... one of his gimmicks was visiting people's office and really gushing over some neat coffee cup (from some other company) and asking could they stand to part with such a neat cup in return for an apple mug.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Shipwrecks

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Sun, 19 Sep 2004 07:47:43 -0600
Anne & Lynn Wheeler writes:
my brother used to be regional marketing rep for apple .... one of his gimmicks was visiting people's office and really gushing over some neat coffee cup (from some other company) and asking could they stand to part with such a neat cup in return for an apple mug.

... and if he had to, he would even be willing to trade 2-3 apple mugs for such a neat cup (and in some cases, maybe even a whole box for everybody on their staff).

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Shipwrecks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Sun, 19 Sep 2004 08:07:41 -0600
jmfbahciv writes:
Did people swap?

he claimed it nearly always worked ... he kept a few ... he didn't have space to keep all the competitors' mugs he managed to collect.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

"Perfect" or "Provable" security both crypto and non-crypto?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "Perfect" or "Provable" security both crypto and non-crypto?
Newsgroups: sci.crypt
Date: Sun, 19 Sep 2004 08:51:20 -0600
although boyd was instrumental in creating the f16 (and contributed to both f15 & f18):
http://www.garlic.com/~lynn/subboyd.html#boyd2

he was highly critical of some "advanced" technology that was dreamed up for heads-up displays .... basically a lot of scrolling digital numbers ... that meant nothing to a pilot.

the faa atc had a number of modernization projects in the late 80s and 90s. they were extensively specified, reviewed, and used ada as the implementation language. at least in the late 80s, they started with the basic premise that faults could be masked by redundancy and system recovery procedures. a problem was that there were some number of domain-specific "faults" later identified that could only be recognized by domain smarts in the atc "application" code .... and it was difficult to retrofit fault recognition/recovery at the application level.

this is somewhat my long-time assertion that taking a straight-line, well tested, run-of-the-mill application and turning it into a "service" can take ten times the effort and 4-10 times the code.

related posts about high integrity deployments:
http://www.garlic.com/~lynn/2000.html#10 Taligent
http://www.garlic.com/~lynn/2000e.html#46 Where are they now : Taligent and Pink
http://www.garlic.com/~lynn/2001f.html#75 Test and Set (TS) vs Compare and Swap (CS)
http://www.garlic.com/~lynn/2001n.html#91 Buffer overflow
http://www.garlic.com/~lynn/2001n.html#93 Buffer overflow
http://www.garlic.com/~lynn/2002b.html#59 Computer Naming Conventions
http://www.garlic.com/~lynn/2002n.html#11 Wanted: the SOUNDS of classic computing
http://www.garlic.com/~lynn/2003g.html#62 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
http://www.garlic.com/~lynn/2003j.html#15 A Dark Day
http://www.garlic.com/~lynn/2003p.html#37 The BASIC Variations
http://www.garlic.com/~lynn/2004b.html#8 Mars Rover Not Responding
http://www.garlic.com/~lynn/2004b.html#48 Automating secure transactions
http://www.garlic.com/~lynn/2004k.html#20 Vintage computers are better than modern crap !

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Acient FAA computers???

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Acient FAA computers???
Newsgroups: alt.folklore.computers
Date: Sun, 19 Sep 2004 13:30:52 -0600
little topic drift from another newsgroup thread
http://www.garlic.com/~lynn/2004l.html#49 "Perfect" or "Provable" security both crypto and non-crypto?

random other posts in the same thread (at least one topic familiar in this n.g.)
http://www.garlic.com/~lynn/2004l.html#21 "Perfect" or "Provable" security both crypto and non-crypto?
http://www.garlic.com/~lynn/2004l.html#40 "Perfect" or "Provable" security both crypto and non-crypto?
http://www.garlic.com/~lynn/2004l.html#41 "Perfect" or "Provable" security both crypto and non-crypto?
http://www.garlic.com/~lynn/2004l.html#45 "Perfect" or "Provable" security both crypto and non-crypto?

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Specifying all biz rules in relational data

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Specifying all biz rules in relational data
Newsgroups: comp.databases.theory
Date: Sun, 19 Sep 2004 13:45:08 -0600
"Laconic2" writes:
This is significant. There are probably millions of sites that run things originally designed and written some thirty years ago, and that code arguably has not yet reached "sunset". If it ain't broke (except for bugs), don't fix it. But that doesn't mean that new code should necessarily be written along the same lines.

Looking back to the earlier discussion, the question was about whether biz rules should be coded as data, and enforced in the DBMS, or not. Your earlier comment about efficiency beating elegance 24/7 has to be taken in context.

It depends. If the overhead of doing things "elegantly" is on the order of 20%, and if that can be compensated for by some additional hardware, and if the "elegant" solution buys you something that lowers cost somewhere else, it may be the case that elegance actually beats efficiency.

Having said that, I know some COBOL or BASIC shops that would have run me out of there if I had suggested implementing the rules in the DBMS. In some circumstances, this can be a position well taken. In other circumstances, it's just resistance to change.


amdahl gave a talk in the early 70s at mit ... which included some of the business planning he had used to get funding for his new computer company. at that time, he said they calculated that there was at least $100b in 360 mainframe application software .... and that even if ibm chose to totally walk away from 360s at that moment .... just the existing 360 software application base would keep amdahl in business for at least 30 years.

one of the issues (behind the scenes at the time) was that there was an ibm project called future systems that was going to completely replace 360 ... and was going to be more different from 360 ... than 360 had been from everything else
http://www.garlic.com/~lynn/submain.html#futuresys

of course, future system was killed and never did replace 360s ... and customers have continued to develop traditional 360-based applications.

this was all when legacy just meant mainframes.

the '96 m'soft developers conference at moscone ... while there was quite a bit of talk about "internet" ... the constant refrain in all the sessions was "protecting your investment" .... aka all you legacy "visual basic" developers.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Specifying all biz rules in relational data

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Specifying all biz rules in relational data
Newsgroups: comp.databases.theory
Date: Sun, 19 Sep 2004 14:33:50 -0600
mAsterdam writes:
It looks like taxonomy is closely associated with hierarchies. Not really a problem, is it?

a problem is that real-world knowledge can quickly become a mesh .... while something like NLM's UMLS of medical knowledge has some hierarchical organization ... it also has MeSH organization

mesh 2004 intro
http://www.nlm.nih.gov/mesh/introduction2004.html

umls:
http://www.nlm.nih.gov/research/umls/

umls overview
http://www.nlm.nih.gov/research/umls/about_umls.html

medical subject heading for cataloging (effectively a form of classification)
http://www.nlm.nih.gov/mesh/catpractices2004.html

using a hierarchical view paradigm .... the organization can seem to have a specific subject belonging to multiple hierarchies simultaneously; there are also mesh connections that aren't hierarchical at all.

this definition
http://www.wordiq.com/definition/Taxonomy

conjectures that this is how the human mind organizes knowledge .... so possibly hierarchical organization reflects how the mind works ... even when the actual organization isn't that way.

another reference: taxonomies, categorization, classification, categories, and directories for searching:
http://www.searchtools.com/info/classifiers.html

this raises a classification/cataloging issue when there is inter-species breeding
http://anthro.palomar.edu/animal/animal_2.htm
and references:
http://www.pbs.org/wgbh/evolution/library/05/2/l_052_02.html

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Specifying all biz rules in relational data

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Specifying all biz rules in relational data
Newsgroups: comp.databases.theory
Date: Mon, 20 Sep 2004 07:34:44 -0600
there is also a lot of "mesh" (non-hierarchical) stuff in the merged glossary and taxonomy stuff that can be found
http://www.garlic.com/~lynn/index.html#glosnote

there is some hierarchical structure between concepts and terms .... and some between terms ... but most of the term-to-term stuff is arbitrary mesh.

similarly, the ietf rfc index work
http://www.garlic.com/~lynn/rfcietff.htm

has rfcs indexed in multiple ways ... including by keyword, with some keywords being given a somewhat hierarchical structure.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

No visible activity

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: No visible activity
Newsgroups: alt.folklore.computers
Date: Mon, 20 Sep 2004 09:21:17 -0600
"Jack Peacock" writes:
I remember asking our Fortran instructor about it, over an argument about 3rd vs. 4th gen computers. None of us knew the difference between LSI (then defined as 100s transistors on a chip) and VLSI (an incomprehensible 1000s transistors on one chip). Intel was starting to introduce the 8080, but we thought the really amazing breakthrough was the HP handheld calculator (so much for my career as a visionary of the future).

the 370/168 used something like 4 circuits/chip.

the 3033 started out to be the 168 wiring diagram mapped to new technology that was about 20% faster ... but had chips with something like 40 circuits/chip (but only using a "168" subset of each chip).

late in the development cycle, there was a big push to redo parts of the design to make better use of "on-chip" operations ... which pushed the 3033 to about 50% faster than the 168 (around 4.5mips instead of 3mips).

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Access to AMD 64 bit developer centre

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Access to AMD 64 bit developer centre
Newsgroups: comp.arch
Date: Mon, 20 Sep 2004 17:20:14 -0600
Joe Seigh writes:
Double wide compare and swap is key to several important lock-free algorithms and been around for several decades. It's important for avoiding the ABA problem for some of them. If you don't have them or choose not to use them, then you have to resort to Ravenous Bugblatter* like logic to get around the problem as Microsoft reputedly uses for its SList.

charlie had come up with compare and swap at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

based on a lot of work he was doing in fine-grain locking (late 60s) ... and he tried to get it into 370 architecture. the POK architecture owners came back and said it wasn't possible to justify a multiprocessor-specific instruction for the 370 architecture (which already had test&set) ... and that to get it justified, it would be necessary to come up with a non-multiprocessor use for the instruction.

thus was born the description of multi-threaded application use in non-locked regions .... when running on either multiprocessor or non-multiprocessor machines. this was originally included in the 370 principle of operations as programming notes associated with the compare&swap instruction(s). the description has since been expanded and moved to the principle of operations appendix.

note that the choice of compare and swap comes from needing a mnemonic that matched charlie's initials (CAS). the mnemonic was slightly changed for inclusion in 370 to CS (compare and swap) and CDS (compare double and swap).

the instructions have since been expanded for 64-bit operation and a new "perform locked operation" has since been added.

esa/390 principle of operations
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/CCONTENTS?SHELF=EZ2HW125&DN=SA22-7201-04&DT=19970613131822

compare and swap
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/7.5.22?SHELF=EZ2HW125&DT=19970613131822

compare double and swap
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/7.5.23?SHELF=EZ2HW125&DT=19970613131822

appendix multiprogramming and multiprocessing examples
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/A.6?SHELF=EZ2HW125&DT=19970613131822
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/A.6.2?SHELF=EZ2HW125&DT=19970613131822
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/A.6.3?SHELF=EZ2HW125&DT=19970613131822
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/A.6.4?SHELF=EZ2HW125&DT=19970613131822
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/A.6.5?SHELF=EZ2HW125&DT=19970613131822

z/Architecture principles of operations (32 bit, 64 bit)
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/CCONTENTS?SHELF=DZ9ZBK03&DN=SA22-7832-03&DT=20040504121320

compare and swap (32 bit, 64 bit)
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/7.5.28?SHELF=DZ9ZBK03&DT=20040504121320

compare double and swap (32 bit, 64bit)
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/7.5.29?SHELF=DZ9ZBK03&DT=20040504121320

perform locked operation (lots of details & description)
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/7.5.107?SHELF=DZ9ZBK03&DT=20040504121320

appendix multiprogramming and multiprocessing examples
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/A.6?SHELF=DZ9ZBK03&DT=20040504121320
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/A.6.2?SHELF=DZ9ZBK03&DT=20040504121320
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/A.6.3?SHELF=DZ9ZBK03&DT=20040504121320
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/A.6.4?SHELF=DZ9ZBK03&DT=20040504121320
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/A.6.5?SHELF=DZ9ZBK03&DT=20040504121320
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/A.6.6?SHELF=DZ9ZBK03&DT=20040504121320

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

project athena & compare and swap

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: project athena & compare and swap
Newsgroups: alt.folklore.computers
Date: Tue, 21 Sep 2004 13:21:42 -0600
recent reference to compare&swap instruction in thread in comp.arch
http://www.garlic.com/~lynn/2004l.html#55 Access to AMD 64bit developer centre

charlie had invented compare&swap when he was at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
... and the mnemonic compare&swap was chosen because it corresponds to charlie's initials, CAS.

later during Project Athena ... IBM and DEC had both contributed $25m to MIT for Project Athena. Jerry Saltzer was technical director
http://mit.edu/Saltzer/
http://web.mit.edu/afs/athena.mit.edu/user/other/a/Saltzer/www/bio.html

and he had two asst. directors, one from IBM (Charlie) and one from DEC.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Lock-free algorithms

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Lock-free algorithms
Newsgroups: comp.arch
Date: Wed, 22 Sep 2004 07:33:57 -0600
Joe Seigh writes:
Herlihy did a definition for lock-free. The common usage isn't strictly the same as his. He also defined wait-free and more recently obstruction-free. It used to be that certain algorithms using compare and swap were guaranteed to be lock-free (at least on IBM processors) but with the increasing number processors it's likely that compare and swap based algorithms are merely obstruction-free. You can still probably do lock-free using fetch and op if you can find something in the architecture manuals that lets you determine a maximum bound for the execution time on those instructions.

and instruction retry would usually mess up most such guidelines

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Specifying all biz rules in relational data

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Specifying all biz rules in relational data
Newsgroups: comp.databases.theory
Date: Wed, 22 Sep 2004 07:40:33 -0600
"Laconic2" writes:
The reason most young programmers do not learn from the masters is that they do not recognize the masters.

there is also the joke that computer science loses its mind every 4-6 years and starts all over again.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Lock-free algorithms

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Lock-free algorithms
Newsgroups: comp.arch
Date: Wed, 22 Sep 2004 14:27:02 -0600
Emil Naepflein writes:
The instructions themselves may have a fixed execution time. But when implementing something with CAS or LL/SC and similar instructions you have to implement a loop until you succeed in storing the new updated value. And depending on the load on this storage cell the loop may be very very long. I have seen starvation of some contenders where it took more than 10 s to go through such a barrier.

way back when ... over 30 years ago, going on 35 ... while trying to get compare and swap into 370 ... there were a bunch of what-if discussions, in addition to coming up with the programming paradigm for compare&swap in multi-threaded/multi-programming (but not necessarily multiprocessor) environments.

one of the issues was progress and starvation. the discussions were more along the line of serialization than synchronization. one of the issues was what happens in the case of instruction retry ... in a multiprocessor environment ... including the case of instruction retry for compare and swap and whether predictable results can be guaranteed.

370 architecture supposedly allowed for a lot of stuff ... some of which never showed up in reality. one was arbitrary multiprocessor configurations of non-identical cpus .... say a mix of 370/145s and 370/195s, which differed by a factor of 20-30 times in MIP rate .... and whether you can prevent starvation of the slower processor(s).

so there were some assumptions made about the relative time spent in the portion involving serialization (where compare and swap might involve synchronization of the activity needing serialization). the issue was the probability of starvation and whether code for mitigating starvation would make performance worse (because the actual serialization section is so short and contention so improbable) or not.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Losing colonies

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Losing colonies
Newsgroups: alt.folklore.computers
Date: Thu, 23 Sep 2004 11:26:14 -0600
mwojcik@newsguy.com (Michael Wojcik) writes:
That may well be the etymology of the term, but my references (eg Stallings, _Data and Computer Communications_, 4th ed) define a broadband signal as any signal that involves modulating a carrier. And Stallings is usually pretty careful with his definitions; he often includes footnotes that clarify which of his sources agree or disagree. (Generally, he seems to prefer the definitions used in IEEE or ISO standards documents over ones that are merely conventional.)

remember the original ibm pc net ... 1mbit(?) and had tv cable type architecture with head-end box ... and really thick cables.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Shipwrecks

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Thu, 23 Sep 2004 14:53:09 -0600
jchausler writes:
The very first machine I used did not as such print a "burst page". At the top of the first page of the output just above the job account information (account and man numbers, time, date, operator's id, allowed time and pages output, etc...) there was a single line of = overprinted with /. There was no "end page". All other systems I worked with at that time printed an entire start page and end page which usually included "ascii art" showing the user id in big print in the center third of the page and repeating lines of something on the top and bottom thirds which stood out.

hasp printed the XXXXs ... and cp67 & vm370 possibly picked it up from hasp.

when we first got the 6670 running ... basically an ibm copier3 with a computer connection (and could print duplex, both sides) .... it printed the separator page from the alternate drawer ... under the assumption that colored paper would be loaded into the alternate drawer.

since owner info didn't take up much space .... there was some extra code added to the 6670 to select a random saying from a 6670 sayings file or the ibm jargon file ... somebody has put up an old version of the jargon file at:
http://www.212.net/business/jargon.htm

6670s were spread around bldg. 28, typically in departmental supply rooms.

during one security audit ... the auditors were checking to see if classified printed documents had been left on various 6670s ... on top of one 6670 ... the top output had a separator page that happened to have (randomly) selected the definition of an auditor from the 6670 "sayings" file (which they automatically assumed had been left there on purpose for them to find) ... aka from the old 6670 file:

Auditors are the people that go in after the war is lost and bayonet the wounded.

Another 6670 story is that one (april 1st) weekend, somebody used the 6670 to print out bogus password rules on corporate letterhead and posted a copy on all the corporate bulletin boards in the bldg.

past posting on the april 1st password rules
http://www.garlic.com/~lynn/2001d.html#51 OT Re: A beautiful morning in AFM.
http://www.garlic.com/~lynn/2001d.html#52 OT Re: A beautiful morning in AFM.
http://www.garlic.com/~lynn/2001d.html#53 April Fools Day
http://www.garlic.com/~lynn/2001d.html#62 OT Re: A beautiful morning in AFM.

misc. other 6670 postings:
http://www.garlic.com/~lynn/99.html#42 Enter fonts (was Re: Unix case-sensitivity: how did it originate?
http://www.garlic.com/~lynn/99.html#43 Enter fonts (was Re: Unix case-sensitivity: how did it originate?
http://www.garlic.com/~lynn/99.html#52 Enter fonts (was Re: Unix case-sensitivity: how did it originate?
http://www.garlic.com/~lynn/2000b.html#29 20th March 2000
http://www.garlic.com/~lynn/2000d.html#81 Coloured IBM DASD
http://www.garlic.com/~lynn/2000e.html#1 What good and old text formatter are there ?
http://www.garlic.com/~lynn/2001b.html#50 IBM 705 computer manual
http://www.garlic.com/~lynn/2001g.html#5 New IBM history book out
http://www.garlic.com/~lynn/2001n.html#31 Hercules etc. IBM not just missing a great opportunity...
http://www.garlic.com/~lynn/2002h.html#7 disk write caching (was: ibm icecube -- return of
http://www.garlic.com/~lynn/2002m.html#6 Dumb Question - Hardend Site ?
http://www.garlic.com/~lynn/2002m.html#52 Microsoft's innovations [was:the rtf format]
http://www.garlic.com/~lynn/2002o.html#24 IBM Selectric as printer
http://www.garlic.com/~lynn/2002o.html#29 6670
http://www.garlic.com/~lynn/2003c.html#43 Early attempts at console humor?
http://www.garlic.com/~lynn/2004c.html#1 Oldest running code
http://www.garlic.com/~lynn/2004d.html#13 JSX 328x printing (portrait)
http://www.garlic.com/~lynn/2004k.html#48 Xah Lee's Unixism

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Some Laws

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Some Laws
Newsgroups: comp.databases.theory
Date: Fri, 24 Sep 2004 07:55:59 -0600
"Marshall Spight" writes:
XML data,

This abomination is the work of Tim Berners-Lee and the w3c, who have set the field of user interface back 15 years and the field of data management back 30. This has nothing to do with Java.


remember that what spawned all this was GML done by "G", "M", and "L"
http://www.garlic.com/~lynn/submain.html#sgml

at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

and then both "G" and "L" transferred out to the west coast .... "L" working on system/r (original relational dbms at sjr)
http://www.garlic.com/~lynn/submain.html#systemr

doing work on blobs in the r-star or star-burst time frame (i.e. system/r follow-ons).

didn't chorafas in "new information technologies" have a quote from somebody at sabre that relational set data management back 20 years?

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Actuarial facts

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Actuarial facts
Newsgroups: alt.folklore.computers
Date: Fri, 24 Sep 2004 09:15:20 -0600
Morten Reistad writes:
Isn't that what is called "white collar crime" ?

some recent stuff somewhat related to account/identity crimes
http://www.garlic.com/~lynn/aadsm18.htm#27 EMV as identity cards
http://www.garlic.com/~lynn/aadsm18.htm#28 EMV as identity cards
http://www.garlic.com/~lynn/aadsm18.htm#31 EMV as identity cards
http://www.garlic.com/~lynn/aadsm18.htm#32 EMV as identity cards

and some stuff from recent congressional hearing

1) Digital ID Theft is the most pervasive crime in the U.S.

2) There is a significant corporate governance issue related to the apathy of executives to improve cyber-security.

3) Two-factor Authentication is critical for e-business.

4) Money is laundered in Cyberspace at a growing rate and terrorists and criminals alike enjoy the benefits of ID Theft.

testimony ...
http://www.reform.house.gov/tiprc/Hearings/EventSingle.aspx?EventID=1365

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Detergent

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Detergent
Newsgroups: alt.folklore.computers
Date: Fri, 24 Sep 2004 15:09:47 -0600
"Charlie Gibbs" writes:
Alas, maintainability is sliding lower and lower down a manufacturer's list of priorities. The reigning philosophy is that you throw it away and replace it with a new model with all the latest gimmicks. It helps sales and is good for The Economy. :-p

given the difficulty of training people and stocking parts ... making the whole unit a FRU (field-replaceable unit) may be less expensive.

minimum labor charge these days seems to frequently be $100.

connectors that would support sub-assemblies can be a noticeable additional cost item as well as a significant point of failure ... aka would a connector for a separate sub-assembly have a higher probability of failure than the sub-assembly itself? it is possible for overall reliability to be increased if the unit is manufactured as a single piece (resulting in fewer returns).

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

computer industry scenairo before the invention of the PC?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Subject: Re: computer industry scenairo before the invention of the PC?
Date: Mon, 27 Sep 2004 08:36:07 -0700
Brian Inglis wrote in message news:<996fl01msju1oidlpjq5507mva74c8a6j9@4ax.com>...
Green card was S/360.

and there was a blue card for 360/67

at bitsavers
http://bitsavers.org/pdf/ibm/360/
there is functional characteristics for 360/67
http://bitsavers.org/pdf/ibm/360/funcChar/A27-2719-0_360-67_funcChar.pdf

misc. past refs to 360/67 blue card
http://www.garlic.com/~lynn/99.html#11 Old Computers
http://www.garlic.com/~lynn/2000g.html#16 360/370 instruction cycle time
http://www.garlic.com/~lynn/2001.html#69 what is interrupt mask register?
http://www.garlic.com/~lynn/2001.html#71 what is interrupt mask register?
http://www.garlic.com/~lynn/2001c.html#15 OS/360 (was LINUS for S/390)
http://www.garlic.com/~lynn/2001d.html#42 IBM was/is: Imitation...
http://www.garlic.com/~lynn/2002f.html#54 WATFOR's Silver Anniversary
http://www.garlic.com/~lynn/2003l.html#25 IBM Manuals from the 1940's and 1950's
http://www.garlic.com/~lynn/2003m.html#31 SR 15,15 was: IEFBR14 Problems
http://www.garlic.com/~lynn/2003m.html#35 SR 15,15 was: IEFBR14 Problems
http://www.garlic.com/~lynn/2004.html#7 Dyadic
http://www.garlic.com/~lynn/2004e.html#51 Infiniband - practicalities for small clusters

Lock-free algorithms

Refed: **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch
Date: Mon, 27 Sep 2004 16:14:32 -0700
Subject: Re: Lock-free algorithms
pg_nh@0409.exp.sabi.co.UK (Peter Grandi) wrote in message news:<yf34qljlhu2.fsf@base.gp.example.com>...
Similarly currently the Linux kernel policy is to replace cached disk blocks LIFO even as disk accesses are overwhelmingly FIFO, and so on.

And I remember reading the famous story of the approach to optimizing opcodes at a large manufacturer for their second generation CPU (the architecture team optimized those that had been found to be too slow in the 1st generation, while the compiler team decided not to use them at all).

I suspect that the cases where each of the parties are doing reasonable things but the overall result is not, by happenstance, are amazingly rare; as a rule the story is poor management, and in particular scarily bad technical architecture management, which apparently is a popular skill :-).


a lot of implementations try to approximate LRU ... which then seems to look a little like LIFO ... the problem is that in the pathological case, LRU degenerates to FIFO. I had done a bunch of stuff on this in the late '60s ... and then in the early 70s came up with this sleight of hand where something that otherwise appeared to be an LRU-approximation, in the pathological case, degenerated to random instead of FIFO.

a predictable pathological case is if you run an LRU algorithm under an LRU algorithm .... the 2nd-level algorithm can start to exhibit the appearance of an MRU algorithm to the first level. For instance, if you are running a database caching algorithm in operating system virtual memory, both the database cache and the operating system virtual memory management may be LRU approximations. However, the operating system virtual memory may look at the database cache and select the least recently used page for replacement ... at the same moment the database caching algorithm selects that same page as the next one to be used.

Lock-free algorithms

Refed: **, - **, - **, - **
From: lynn@garlic.com
Date: Tue, 28 Sep 2004 19:05:00 -0700
Newsgroups: comp.arch
Subject: Re: Lock-free algorithms
pg nh@0409.exp.sabi.co.UK (Peter Grandi) wrote in message news:<yf3r7omjhgc.fsf@base.gp.example.com>...
And I guess you must have been involved in the VM/VS1 or VM/MVS acceleration stuff that tried to prevent double paging...

some discussions about virtual memory under virtual memory implementation from late '60s
http://www.garlic.com/~lynn/2003f.html#4 Alpha performance, why?
http://www.garlic.com/~lynn/2003g.html#18 Multiple layers of virtual address translation

the issue here isn't so much the replacement algorithm interactions .... but the pathlength overhead .... vm had to faithfully emulate the TLB specification w/o having any hardware assist. the issue is that VM had to maintain shadow pagetables of what the virtual machine was using .... the virtual machine would have a pagetable that mapped ("3rd level") virtual address to the ("2nd level") virtual machine address. VM would then have a shadow of all these tables that mapped the "3rd level" virtual addresses directly to the "1st level" or real address. keeping all those tables correct, invalidated, validated, etc ... took a lot of cpu cycles.

a 2nd separate issue was that i had highly optimized the pathlength for the interrupt handler, page replacement algorithm, i/o scheduler, task switcher, etc. ... and could perform a paging operation in about 1/10th the pathlength of VS1. There was actually a choice of fixing real storage and letting VS1 do its own paging (w/o double paging) or a sleight-of-hand fix to VS1 to let it believe it had so much real memory that it would never page (while in fact vm was doing the paging underneath) .... i had code in the cp kernel that could do the whole end-to-end operation in about 1/10th the pathlength that vs1 could. the other part of the vm/vs1 handshake was to let the VS1 multitasking kernel be notified when VM was doing a paging operation for one of its tasks ... so that VS1 could also switch tasks (as opposed to sitting blocked and unexecutable).

the table maintenance pathlength (which could be very large) and being able to do paging operations in 1/10th the pathlength ..... were independent of the issue of conflicts when VS1, using LRU, starts using a virtual machine page .... at the same time that the VM kernel, using LRU, has decided to remove the same page from virtual memory.

this was all late '60s and early '70s .... misc. references to long ago SMP stuff
http://www.garlic.com/~lynn/subtopic.html#smp
and misc references to long ago page replacement stuff
http://www.garlic.com/~lynn/subtopic.html#wsclock

note also that the original relational database system, system/r, implementation was vm-based and there was cache issues in a virtual memory environment in the mid to late 70s. misc. references to long ago original relational database stuff
http://www.garlic.com/~lynn/submain.html#systemr

we then did the tech transfer from sjr to endicott for system/r to become sql/ds.

Lock-free algorithms

From: lynn@garlic.com
Newsgroups: comp.arch
Subject: Re: Lock-free algorithms
Date: Tue, 28 Sep 2004 19:39:11 -0700
pg nh@0409.exp.sabi.co.UK (Peter Grandi) wrote in message news:<yf3r7omjhgc.fsf@base.gp.example.com>...
And I guess you must have been involved in the VM/VS1 or VM/MVS acceleration stuff that tried to prevent double paging...

oh, and although i could do a paging operation in vm in 1/10th the pathlength that it took vs1 .... and although both vm and vs1 would be characterized as having page replacement algorithms that approximated LRU ... i would claim that my implementation did a much better job of selecting pages for replacement .... so there wouldn't be an exact one-to-one correspondence between VS1 picking a virtual machine page for the next replacement at the same time VM was selecting the same exact page for removal/replacement .... but there was/is an approximate correlation for groups of pages

somewhat similar to the VS1 work in the early to mid-70s for VS1 under VM .... there was also work on system/r in the mid to late-70s .... also under VM.

IBMism

From: lynn@garlic.com
Date: Wed, 29 Sep 2004 06:52:20 -0700
Newsgroups: bit.listserv.ibm-main
Subject: Re: IBMism
tedmacneil@bell.blackberry.net (Ted MacNEIL) wrote in message news:<942208170-1096413508-cardhu_blackberry.rim.net-6502-@engine94>...
Consider it chucked!

But, my favourite IBMism is the original:

it's not unlike ...

-teD


recent reference to an ibm jargon file
http://www.garlic.com/~lynn/2004l.html#61

that somebody has up at
http://www.212.net/business/jargon.htm

computer industry scenairo before the invention of the PC?

Refed: **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Subject: Re: computer industry scenairo before the invention of the PC?
Date: Thu, 30 Sep 2004 14:35:21 -0700
Brian Inglis wrote in message news:<ik4ol0lcmv7ejmnthq622ecgbrgpe0ahj8@4ax.com>...
IME those kinds of apps have to be run as high priority started tasks or subsystems and be light on CPU resources otherwise you get the TSO result; the JVM may be too heavy to get good response. Did you try it on zVM for comparison? Fairshare instead of pure priority and I/O driven scheduling often gives better response.

when i first did fair share ... it was as an undergraduate ... and it was merged in and shipped in cp/67. it was actually policy-driven scheduling ... with the default policy being fairshare.

it was dropped (along with the replacement algorithm work) in the initial conversion from cp/67 to vm/370 ... but i got to put it all back in when they let me do the resource manager:
http://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager

random past posts on fair share
http://www.garlic.com/~lynn/subtopic.html#fairshare
random past posts on replacement stuff
http://www.garlic.com/~lynn/subtopic.html#wsclock

i had also done this smp design that was dependent on lots of the stuff in the resource manager ... and they were faced with something of a dilemma with the decision to ship smp support. the resource manager was the first "charged for" kernel software feature. previously, the software pricing model had been that application stuff could be priced ... but kernel stuff was free. the vm resource manager got to be the guinea pig for the first priced kernel software (which meant that i got to spend a bunch of time with business people) ... however, the rules were left that kernel software specifically for hardware support was still free.

the problem then was that the kernel SMP support fell into the "free" category but it now had a pre-req on a bunch of code that was priced. the solution was to remove all the code from the resource manager that was required by the smp support and include it in the "free" kernel ..... leaving the resource manager with a much smaller amount of code that was priced. random past posts
http://www.garlic.com/~lynn/subtopic.html#smp

a history question

From: lynn@garlic.com
Date: Fri, 1 Oct 2004 11:16:14 -0700
Newsgroups: alt.folklore.computers
Subject: Re: a history question
Morten Reistad wrote in message news:<90ndjc.80m2.ln@via.reistad.priv.no>...
The first port of C was to the Interdata; and that must be around 1976. By that time there were literally hundreds, if not thousands, of different fortran implementations. ISTR Fortran was well ported to other cpu's by around 1957, and usage exploded from there.

for some topic drift ... as an undergraduate, i got to work on a project where the 360 channel interface was reverse engineered and a channel card was built for an interdata3 ... and the interdata3 was programmed to emulate an ibm control unit. somebody wrote an article trying to blame us for originating the ibm plug-compatible controller business. random past posts
http://www.garlic.com/~lynn/subtopic.html#360pcm

Specifying all biz rules in relational data

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.databases.theory,alt.folklore.computers
Date: Sat, 2 Oct 2004 14:49:44 -0700
Subject: Re: Specifying all biz rules in relational data
gnuoytr@rcn.com (robert) wrote in message news:<da3c2186.0410011737.7015b22c@posting.google.com>...
about 5 or 6 years ago, when XML was just starting to swallow the world's mindshare, Phil (a colleague) noted that he had built programs "with tagged text files" back in the (late, i gather) '60s. he was a DEC guy, so it may have been something they dreamed up. but i remember doing the same sort of thing with stat packages in the '70s. Phil was not impressed.

search engines turn up that "g", "m", and "l" invented gml in 1969.

some random gml/sgml refs
http://www.garlic.com/~lynn/submain.html#sgml

goldfarb's sgml page
http://www.sgmlsource.com/
history page
http://www.sgmlsource.com/history/
'60s history leading up to gml
http://www.sgmlsource.com/history/roots.htm

gml tag processing was added to the document processor that had been done in the 60s for CMS. CP/67, done in the 60s for the 360/67, morphed into vm/370 for 370s ... and CP/67's CMS stayed CMS for vm/370 (although it changed from the Cambridge Monitor System to the Conversational Monitor System). During this period there was some claim that IBM was the 2nd largest publisher in the US.

There was also a clone of the cms document processor, done by univ. of waterloo, that handled (s)gml and is mentioned in this history of html:
http://infomesh.net/html/history/early/

discussion of waterloo's "script"
http://csg.uwaterloo.ca/sdtp/watscr.html

the original rdbms: system/r
http://www.garlic.com/~lynn/submain.html#systemr
was done on vm/370 in the mid-70s

as an aside, there was something of a convention at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
involving self-describing data.

the original cp/67 kernel was fairly heavily instrumented and the information gathered and archived ... just about from the start when it first went operational in the mid-60s. the archived data had self-describing formats. this was constantly being referenced over a span of 10-15 years and contributed to work at the science center in workload profiling, performance modeling, and effectively the ground work for evolving the performance work into capacity planning.
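the idea of self-describing archived data can be sketched in a few lines of modern code (purely illustrative; the field names are hypothetical and JSON is just a present-day stand-in for whatever record format was actually used): each record carries its own field names, so tools written years later can still decode it without a separate format document.

```python
import json

def encode_record(**fields):
    """Archive a record together with its own description (field names)."""
    return json.dumps({"fields": sorted(fields), "data": fields})

def decode_record(blob):
    """Decode using only the description carried inside the record itself."""
    rec = json.loads(blob)
    return {name: rec["data"][name] for name in rec["fields"]}

# a hypothetical performance sample, readable without external documentation
blob = encode_record(cpu_busy=0.83, paging_rate=12)
sample = decode_record(blob)
```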

it was also used for helping calibrate and tune the resource manager that i put out in the mid-70s. recent reference to some of the resource manager work:
http://www.garlic.com/~lynn/2004l.html#70

misc benchmarking, profiling and capacity planning references
http://www.garlic.com/~lynn/submain.html#bench

Specifying all biz rules in relational data

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Specifying all biz rules in relational data
Newsgroups: comp.databases.theory,alt.folklore.computers
Date: Sun, 03 Oct 2004 17:32:28 -0600
"Laconic2" writes:
When I first learned HTML, it reminded me for all the world of DEC RUNOFF. Except for the hyperlinks. Those reminded me of pointers in network databases.

runoff was done on ctss. later some of the ctss people went to the 5th floor, 545 tech sq to work on multics, others went to 4th floor, 545 tech sq to the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

runoff from ctss programmer's guide
http://mit.edu/Saltzer/www/publications/AH.9.01.html

mentions runoff done on ctss
http://www.multicians.org/thvv/7094.html
http://web.mit.edu/afs/athena.mit.edu/user/other/a/Saltzer/www/publications/PSN-40.html
http://portal.acm.org/citation.cfm?id=888948

at the science center, stu madnick implemented "script" command for cms which supported runoff-like syntax.

later, after gml was invented at the science center .... gml-tag support was added to the cms script processor.

previous post in this thread (gml, sgml, html, etc):
http://www.garlic.com/~lynn/2004l.html#72

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Specifying all biz rules in relational data

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Specifying all biz rules in relational data
Newsgroups: comp.databases.theory,alt.folklore.computers
Date: Sun, 03 Oct 2004 22:59:21 -0600
"Dawn M. Wolthuis" writes:
I didn't google it to research the origins, but I don't think runoff was particularly DEC. I wrote a COBOL text for a course using RUNOFF on a Prime computer in 1981. I thought it was a spinoff from Waterloo script (and I don't know the origin of that either). --dawn

waterloo script was clone of cms' script. cms script was originally done at the science center by madnick and then after gml was invented, gml-tag processing support was added to script.

runoff was originally done for ctss ... some of the people went to 5th floor, 545 tech sq to work on multics ... and some went to science center on 4th floor, 545 tech sq.

there was a former ibm (vm-cms) systems engineer from the LA branch office .... who did an implementation of newscript for trs80. doing a little search engine ...
http://www.atarimagazines.com/creative/v9n6/70_GEAP_tricks.php
another mention buried in this article
http://www.wsfa.org/journal/j82/b/

this lists various trs80 software
http://web.archive.org/web/20061130230530/http://www.trs-80.com/trs80-sw.htm
and 9/15/1981 pdf copy of newscript 6.1 document
http://www.trs-80.com/cgi-bin/downsoft.cgi?NewScript_(1981)(Prosoft)(pdf).zip
that mentions being done by VM-CMS Consulting Services, Inc.

the editor in newscript has commands that look like the cms editor (and in fact there is a section describing the differences from the cms editor). the script commands are the runoff-like, pre-gml commands from the original cp67-cms script.

there is also a newscript 7.0 pdf file from 1982
http://www.trs-80.com/cgi-bin/downsoft.cgi?NewScript_v7.0_(1982)(Tesler_Software_Corporation)(pdf).zip

a later version from 1984 is called Allwrite!
http://www.trs-80.com/cgi-bin/downsoft.cgi?Allwrite!_(1984)(Tesler_Corp)(PDF).zip

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

NULL

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: NULL
Newsgroups: comp.databases.theory
Date: Mon, 04 Oct 2004 06:57:05 -0600
"Marshall Spight" writes:
In contrast, SQL's NULL pollutes everything it touches, by its nature, and sometimes even when it doesn't have to, because it's so confusing. Once something UNKNOWN is introduced into an equation, it's likely that every subsequent calculation based on it will also be unknown.

posting on NULLS & 3-value logic
http://www.garlic.com/~lynn/2003g.html#40 How to cope with missing values - NULLS?
http://www.garlic.com/~lynn/2003g.html#41 How to cope with missing values - NULLS?

references Date article from 1992.
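the "pollution" behavior in the quoted text can be sketched with a minimal three-valued logic in python, using None to stand in for SQL's UNKNOWN. note the one nuance: UNKNOWN usually propagates, but AND/OR can still yield a definite answer when one operand alone decides the result.

```python
UNKNOWN = None  # stand-in for SQL's UNKNOWN truth value

def tv_and(a, b):
    # FALSE AND anything is FALSE, even if the other operand is UNKNOWN
    if a is False or b is False:
        return False
    if a is UNKNOWN or b is UNKNOWN:
        return UNKNOWN
    return True

def tv_or(a, b):
    # TRUE OR anything is TRUE, even if the other operand is UNKNOWN
    if a is True or b is True:
        return True
    if a is UNKNOWN or b is UNKNOWN:
        return UNKNOWN
    return False

def tv_not(a):
    # NOT UNKNOWN is still UNKNOWN
    return UNKNOWN if a is UNKNOWN else not a

def tv_eq(a, b):
    # any comparison involving NULL is UNKNOWN -- even NULL = NULL
    if a is UNKNOWN or b is UNKNOWN:
        return UNKNOWN
    return a == b
```

so `tv_eq(UNKNOWN, UNKNOWN)` is UNKNOWN rather than TRUE, which is exactly the kind of surprise the quoted text is complaining about.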

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Actuarial facts

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Actuarial facts
Newsgroups: alt.folklore.computers
Date: Mon, 04 Oct 2004 08:39:24 -0600
CBFalconer writes:
My favorite garage puts wheels on with an impact driver, then goes around all the nuts with a hand wrench to check. He says that is the result of just such an event many years ago.

i have a 10+ year old car where each wheel has one anti-theft lugnut that requires a special adapter. i had service done a month ago and two days later, i started getting a sound like a rock inside a hubcap ... sure enuf, one of the anti-theft lugnuts had come off. i can only guess that they only had one adapter and the guy hand tightened the nut but didn't have the adapter handy ... and skipped that wheel when he went back. it forced me to check every lugnut on every wheel ... just to be on the safe side.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Tera

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Tera
Newsgroups: comp.arch
Date: Mon, 04 Oct 2004 11:50:12 -0600
ejr@cs.berkeley.edu (Edward Jason Riedy) writes:
Typical spin locks are pure overhead for many scientific / engineering calculations. If you lock a lot and obviously need the locks, your algorithm is spending almost all its time in the lock code, not in its actual work. If you don't lock often, then your algorithm is wasting some time (often a good deal) in code that isn't all that necessary.

in earlier eras, smp kernels tended to have global kernel spin lock ... serializing entry to the kernel.

in the mid-70s, i had worked on an smp project where most of the dispatching was dropped into the hardware microcode. when the (software) kernel was needed, it would either interrupt into the kernel (if no other processor was already running the kernel) or queue an interrupt for the kernel and go off and find some other application work to do.

when that project was canceled ... i adapted the infrastructure to a purely software design. the small amount of kernel software corresponding to the microcode features (initial interrupt handling, dispatching, a couple other items) was modified to work with fine-grain locking ... and then a traditional global kernel spin lock was created for serializing the rest of the kernel execution. an extremely lightweight thread implementation was also created.

when a processor needed kernel service behind the global kernel lock ... if it couldn't get the lock, it would queue one of these lightweight threads and go off and do something else. i originally referred to it as a bounce lock (rather than spin lock).

some data gathered on a purely spin-lock implementation showed something like 10% of total processing was spent in the spin-lock. the bounce lock used almost negligible processing overhead ... and for some benchmarks it showed negative overhead; aka two-processor throughput was more than twice that of single-processor operation. the machines didn't have overly large caches ... which tended to get totally replaced popping back and forth between kernel space and application space. with the bounce lock ... you could gain quite a bit of kernel cache locality with a specific processor spending extended periods in the kernel doing work on behalf of multiple different application spaces.
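the bounce-lock idea above can be sketched in modern code (a minimal illustration with hypothetical names, not the original implementation): a processor that can't get the kernel lock queues its kernel work as a lightweight unit and goes off to do something else, while the current lock holder drains the queue before releasing. for simplicity this sketch ignores the race window between the final drain check and the release, which a real design would have to close.

```python
import threading
from collections import deque

class BounceLock:
    """Queue kernel work on contention instead of spinning."""

    def __init__(self):
        self._lock = threading.Lock()          # the global kernel lock
        self._queue = deque()                  # queued lightweight work units
        self._queue_guard = threading.Lock()   # protects the queue itself

    def run_or_queue(self, work):
        """Run 'work' under the lock if it is free; otherwise queue it."""
        if self._lock.acquire(blocking=False):
            try:
                work()
                # before releasing, drain work queued by other processors --
                # this is where the kernel cache locality benefit comes from
                while True:
                    with self._queue_guard:
                        if not self._queue:
                            break
                        queued = self._queue.popleft()
                    queued()
            finally:
                self._lock.release()
            return "ran"
        with self._queue_guard:
            self._queue.append(work)
        return "queued"   # caller goes off and finds other work to do
```

the design choice is that contention turns into deferred work rather than busy-waiting, and one processor stays in the kernel servicing requests for several address spaces, keeping the kernel working set resident in its cache.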

misc. smp posts
http://www.garlic.com/~lynn/subtopic.html#smp
misc. VAMPS &/or bounce lock posts
http://www.garlic.com/~lynn/submain.html#bounce

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

previous, next, index - home