List of Archived Posts

2004 Newsgroup Postings (08/21 - 09/04)

Correction to Univac 494 description on web site
Losing colonies
Linguistic Determinism
Correction to Univac 494 description on web site
Correction to Univac 494 description on web site
Losing colonies
Losing colonies
A quote from Crypto-Gram
FAST TCP makes dialup faster than broadband?
FAST TCP makes dialup faster than broadband?
Losing colonies
Losing colonies
FAST TCP makes dialup faster than broadband?
FAST TCP makes dialup faster than broadband?
I am an ageing techy, expert on everything. Let me explain the
I am an ageing techy, expert on everything. Let me explain the
FAST TCP makes dialup faster than broadband?
FAST TCP makes dialup faster than broadband?
FAST TCP makes dialup faster than broadband?
FAST TCP makes dialup faster than broadband?
Vintage computers are better than modern crap !
Vintage computers are better than modern crap !
Public key authentication defeats passwd age warning
US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of
Timeless Classics of Software Engineering
Timeless Classics of Software Engineering
Timeless Classics of Software Engineering
Vintage computers are better than modern crap !
Vintage computers are better than modern crap !
CDC STAR-100
Internet turns 35, still work in progress
capacity planning: art, science or magic?
Frontiernet insists on being my firewall
US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of
August 23, 1957
Big Bertha Thing blogs
Vintage computers are better than modern crap !
Wars against bad things
Adventure
August 23, 1957
Vintage computers are better than modern crap !
Vintage computers are better than modern crap !
65nm news from Intel
Vintage computers are better than modern crap !
Wars against bad things
August 23, 1957
Xah Lee's Unixism
Xah Lee's Unixism
Xah Lee's Unixism
Xah Lee's Unixism
Xah Lee's Unixism
Xah Lee's Unixism
slashdot drift on 360 clone by rca
US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of
Xah Lee's Unixism
Losing colonies
Xah Lee's Unixism
US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of
Xah Lee's Unixism
IBM 360
Xah Lee's Unixism
Xah Lee's Unixism
Xah Lee's Unixism
Xah Lee's Unixism
Xah Lee's Unixism
Xah Lee's Unixism
Question About VM List

Correction to Univac 494 description on web site

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Correction to Univac 494 description on web site
Newsgroups: alt.folklore.computers
Date: Sat, 21 Aug 2004 16:21:04 -0600
jsavard@excxn.aNOSPAMb.cdn.invalid (John Savard) writes:
But I also do remember that, when speaking about the Cray-1, Seymour Cray was quoted as saying - to applause - that although he finally surrendered and switched to a 64 bit word from the 60 bit word of the 6600, he still had the computer's software use octal notation, rather than hexadecimal, when displaying binary information was called for.

Thus, there apparently was some desire not to march to IBM's drum...


wasn't there something about that being all the bits there were ... no parity at all, while ibm had 9-bit bytes ... 8 bits of data plus a parity bit

later, sometime in the late '80s, i have some recollection of reading about mainframes using 64+16 ecc (detect 16 bit errors, correct 15 bit errors) ... instead of 8+1 (detect 1 bit errors)

... aka what you see, isn't necessarily all there is.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Losing colonies

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Losing colonies
Newsgroups: alt.folklore.computers
Date: Sat, 21 Aug 2004 17:59:26 -0600
Giles Todd writes:
The ability to solve problems is not constrained by language (e.g. psychologists challenge rats to solve problems and the rats frequently succeed, but not even a psychologist would claim that rats can speak). If you suggest that it is then you also need to deal with all the contradictions (some of which I have already suggested in previous posts) that such a hypothesis imposes.

... however, one could claim that the ability to solve problems may be constrained by knowledge (say give a post-doc sub-atomic particle problem to kindergarten kids).

so the question is what portion, if any, of knowledge in specific domains has some language aspect ... i.e. language being one of the tools for representing knowledge; aka would the incremental difficulty of implementing an array-oriented problem in assembler vis-a-vis APL ... result in a lower percentage of assembler people correctly solving the problem (especially if the assembler people had never been exposed to array-oriented problems in the past).
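
for illustration only ... the kind of thing an array language collapses into a single expression (+/A×B in APL) takes an explicit loop in a scalar language:

  /* sum of element-wise products ... one APL expression, an explicit loop here */
  double inner_product(const double *a, const double *b, int n)
  {
      double sum = 0.0;
      for (int i = 0; i < n; i++)
          sum += a[i] * b[i];
      return sum;
  }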

so there are assertions in non-knowledge-oriented domain spaces that correct and/or appropriate tools can improve the quality of work. in the programming field ... programming languages frequently represent the problem-solving tools. can really inappropriate tools make it impossible for people to solve problems ... and is language a tool?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Linguistic Determinism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Linguistic Determinism
Newsgroups: alt.folklore.computers
Date: Sun, 22 Aug 2004 10:44:49 -0600
jsavard@excxn.aNOSPAMb.cdn.invalid (John Savard) writes:
The problem, therefore, is not that the Whorf-Sapir hypothesis is absurd on its face - yes, language does influence how one thinks, and the first solution one may attempt for a problem. But if that attempted solution doesn't work, people can still manage to break out of the box, and learn new concepts from Nature. Instead, the trouble is that it is being used to argue for discarding what is obviously not that badly broken.

so that seems to say that (capital) L-D doesn't allow for knowledge outside of learned language ... which then concludes that language can't evolve(?); which in turn would imply that going back enuf years into history ... before language ... it wouldn't have been possible to evolve from a non-language state to a language state(?)

the issue from my original post was whether a specific computer language can influence how one handles complex tasks (as well as the quality of any solutions) .... say a lower case l-d ... rather than upper case L-D.

... for that matter, can any semantically meaningful construct ... aid in handling complex tasks ... and/or influence the quality of handling complex tasks. at a much more fundamental level ... i would assert that CAS's invention of the compare&swap instruction ... at the cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech
... was a much improved semantic construct for addressing multiprocessor environments
https://www.garlic.com/~lynn/subtopic.html#smp
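
purely as illustration of the semantic construct ... a rough C sketch of the compare&swap retry loop; compare_and_swap() here is just a stand-in for the hardware CS instruction (atomically replaces *addr with newval only if *addr still equals oldval), not any particular library call:

  extern int compare_and_swap(long *addr, long oldval, long newval);

  /* read the current value, compute the new one, and retry if some other
     processor changed it in between */
  void add_to_counter(long *counter, long delta)
  {
      long old;
      do {
          old = *counter;
      } while (!compare_and_swap(counter, old, old + delta));
  }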

and back to my original assertion ... the semantic construct(s) related to implicit lengths in conventional c-programming have significantly affected the quality of c-based solutions ... being in large part responsible for the enormous increase in buffer overflow related exploits.
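
for illustration only ... the difference between the implicit-length idiom and an explicit-length idiom in c:

  #include <stdio.h>
  #include <string.h>

  /* implicit length: copies until the NUL terminator, however long the
     source happens to be ... overruns dst if src doesn't fit */
  void copy_implicit(char *dst, const char *src)
  {
      strcpy(dst, src);
  }

  /* explicit length: the destination size travels with the call and the
     copy truncates instead of overflowing */
  void copy_explicit(char *dst, size_t dstlen, const char *src)
  {
      snprintf(dst, dstlen, "%s", src);
  }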

so can we evolve the language ... and given the adverse connotation of upper case L-D ... coin a term that connotes the influence that language quality/features have on the ability and quality of (possibly domain-specific) problem solving.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Correction to Univac 494 description on web site

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Correction to Univac 494 description on web site
Newsgroups: alt.folklore.computers
Date: Sun, 22 Aug 2004 11:28:56 -0600
jsavard@excxn.aNOSPAMb.cdn.invalid (John Savard) writes:
Well, IBM mainframes switched from using 8+1 to using 64+8, and this allowed correcting 1 bit errors and detecting 2 bit errors, because it provided a Hamming distance of 4 between valid symbols. Present-day home microcomputers can use this kind of error correction as well with ECC RAM modules, or they can do without, IIRC.

i did try a search engine for my vaguely remembered mainframe memory reference (64/80 ... the 8/10 ecc ratio scaled directly to 64 bits)

didn't find what i vaguely remember ... but found two from ibm research ... one from 2002 describing memory with (140, 128) ecc ... 128 bits of data, 12 bits of error-correcting code; and an earlier one describing (76, 64) and mentioning (78, 64) ... 64 bits of data and either 12 or 14 bits of error-correcting code. the latest generation of memory appears to have doubled the error-correcting memory unit from 64 bits to 128 bits ... while keeping the number of error-correcting bits at 12.
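
as a rough check on the arithmetic (textbook hamming bound only ... the (76, 64) and (140, 128) codes mentioned above do chip-kill style symbol correction and so need more check bits than plain SEC-DED):

  #include <stdio.h>

  /* smallest r with 2^r >= m + r + 1 gives the single-error-correct hamming
     check bits for m data bits; one more overall parity bit adds
     double-error detect (SEC-DED) */
  static int secded_check_bits(int m)
  {
      int r = 1;
      while ((1 << r) < m + r + 1)
          r++;
      return r + 1;
  }

  int main(void)
  {
      printf("64 data bits  -> %d check bits (the familiar 72,64)\n",
             secded_check_bits(64));
      printf("128 data bits -> %d check bits\n", secded_check_bits(128));
      return 0;
  }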

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Correction to Univac 494 description on web site

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Correction to Univac 494 description on web site
Newsgroups: alt.folklore.computers
Date: Sun, 22 Aug 2004 12:16:16 -0600
the article mentions (76, 64) and (78, 64) and talks about g1-g5; it mentions four-bit error correct (S4EC/DED)
http://www.research.ibm.com/journal/rd/435/spainhower.html

the article mentioning (140, 128) for z900
http://www.research.ibm.com/journal/rd/464/alves.html

... after the following paragraph, the article talks about a (144, 132) design.
The memory (L3) consists of up to four cards per server. Each card has a memory controller. The memory card contains up to eight rows of 144 synchronous DRAM chips. Data is stored into one row at a time, two bits per chip, and is organized as two 144-bit data words. To protect the data, z900 uses a (140, 128) ECC with 128 data bits and 12 check bits. The code corrects any single-bit failure as well as any single-symbol failure (i.e., 2-bit failure within the same chip). Therefore, if a DRAM is completely broken and the bits coming from that chip are unpredictable, the hardware is able to correct the bits and calculate the proper data without replacing the chip. If two of the 72 DRAMs in the same row/same data word are broken, the ECC logic is able to detect errors in the data fetched from these broken chips. Since there are only 140 bits in an ECC data word and there are 144 bits in the bus, the four additional bits are stored in two spare chips. These chips can be used to spare any two of the 70 chips normally used for the data. There are up to 32 spare chips per card as compared to four for G5/G6

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Losing colonies

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Losing colonies
Newsgroups: alt.folklore.computers
Date: Sun, 22 Aug 2004 13:59:32 -0600
another possible observation ... is that the majority of the population tends to spend a lot of time within the context of 1) their past experience and 2) the provided semantic tools ... and that the invention of new semantics (like charlie's invention of compare&swap for multiprocessor semantics) is not a frequent, everyday occurrence ... although once invented ... it can be adopted by the rest of the population ... making all participants more efficient.

in the buffer overflow case ... one hypothesis is whether the downside costs of the great increase in buffer overflows significantly exceed any possible cost benefit of having implicit length semantics.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Losing colonies

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Losing colonies
Newsgroups: alt.folklore.computers
Date: Sun, 22 Aug 2004 16:47:52 -0600
somewhat additional drift for semantic meaning/knowledge .... from the buffer overflow thread ... i mentioned looking at the cve database as part of adding additional stuff to the work i've done on merged security taxonomy and glossary
https://www.garlic.com/~lynn/index.html#glosnote

specifically from a different thread
https://www.garlic.com/~lynn/2004j.html#58 vintage computers are better than modern crap!

it's the same tool i originally started using for maintaining information about the IETF RFC documents and capturing rules about the IETF standards process. i also use it for generating the rfc index
https://www.garlic.com/~lynn/rfcietff.htm

when i first started out with the rules stuff ... it identified a whole slew of RFCs that were listed by STD1 as documenting something in the standards process but also happened to have been obsoleted. this information for a time was carried in section 6.10 of std1 ... but as things were cleaned up ... was eventually dropped. i still generate the information in the index:
https://www.garlic.com/~lynn/rfcietf.htm#obsol

for quite a while ... the obsoletes and updates information has been carried as tagged information ... and in the knowledge tool i use ... full bi-directional relationships are created .... so when i generate the rfc index information .... the listing of which RFCs obsolete/update other RFCs is generated along with the listing of which RFCs are obsoletedby/updatedby.

several years ago ... a new tag was generated ... "See Also" .... which was originally used primarily to reference RFCs that were part of collections. Since that time, the "See Also" tag seems to have changed to being used primarily to reference other forms of an RFC ... i.e. STD
https://www.garlic.com/~lynn/rfcdoc.htm#STDdoc
BCP
https://www.garlic.com/~lynn/rfcdoc.htm#BCPdoc
FYI
https://www.garlic.com/~lynn/rfcdoc.htm#FYIdoc
and RTR
https://www.garlic.com/~lynn/rfcdoc.htm#RTRdoc

In theory, the original "See Also" was strictly symmetrical ... all RFCs in a group/collection, all used See Also for all other RFCs in the same group/collection.

What I put off doing for a long time ... was writing some code that scanned the contents of RFC files .... identifying the References section and extracting the RFC reference information. References would tend to be an asymmetrical relationship ... RFCs would have a list of RFCs that they Reference .... but in turn, RFCs would also have a list of RFCs that they were ReferencedBy.

So last week .... somebody at Crypto 2004 (where they had papers and lots of discussions about MD5 and other hashing exploits) asked me if i had a dependency tree of RFCs related to MD5. I didn't, but I quickly created a summary list of all RFCs that mention MD5 ....
https://www.garlic.com/~lynn/2004j.html#56 RFCs that reference MD5

and then started on some awk scripts to scan RFCs ... attempting to recognize "References" sections .... and extracting from those sections ... references to other RFCs. the scripts are in some amount of flux because there are a number of special cases
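
the actual scripts are awk ... but purely as illustration of the idea, a minimal C version: treat any line containing "References" as the start of the references section and then pull out every "RFC nnnn" that follows (none of the special-case layouts are handled):

  #include <stdio.h>
  #include <string.h>

  int main(int argc, char **argv)
  {
      if (argc < 2) return 1;
      FILE *f = fopen(argv[1], "r");
      if (!f) { perror(argv[1]); return 1; }

      char line[1024];
      int in_refs = 0;

      while (fgets(line, sizeof line, f)) {
          if (!in_refs && strstr(line, "References"))
              in_refs = 1;                          /* crude heading test */
          if (!in_refs)
              continue;
          for (char *p = line; (p = strstr(p, "RFC")) != NULL; p += 3) {
              int num;
              if (sscanf(p + 3, " %d", &num) == 1)  /* handles "RFC 1321" and "RFC1321" */
                  printf("%s references rfc%d\n", argv[1], num);
          }
      }
      fclose(f);
      return 0;
  }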

Another issue is that the original (symmetric) "See Also" semantics hasn't been consistently followed ... to tie-together RFC group/clusters .... and RFCs frequently just reference each other using the asymmetrical References/ReferencedBy relationship.

For MD5 specifically, it gets more convoluted since the early MD5 RFC (1321) is informational and some subsequent standards process RFCs may or may not reference the RFC (even tho they reference MD5 in the body of the RFC). So now there are

1) keyword listing ... i.e. at
https://www.garlic.com/~lynn/rfcietff.htm
and select Term (term->RFC#) in the RFCs listed by section and then "MD5" in the Acronym fastpath. That gives the list of RFCs that had "md2", "md4", "md5", and/or "message digest" in the RFC title and/or the RFC abstract.

2) the list of all RFCs that contain the characters "md5" anywhere

3) the summary for RFC 1321 which lists all RFCs that it is referenced by (some, but not all of the RFCs that mention MD5 ... list 1321 in the reference list, for example rfc 2440, OpenPGP doesn't list 1321 as a reference).

And for another twist ... there is a recent collection of RFCs on S/Mime, CMS, and some other security issues; 3850, 3851, 3852, 3853, 3854, 3855, 3859, 3860, 3861, 3862 and 3863
https://www.garlic.com/~lynn/rfcidx12.htm

3852 obsoletes 3369; 3852 also references 3851

3853 references 3369 (obsoleted by 3852) and 3851 (but doesn't reference 3852 which obsoletes 3369)

3859 and 3860 reference 3852

3851 and 3850 reference each other ... which in theory would imply a "See Also" symmetrical relationship.

=======

so an obvious rule ... would be to convert situations where two RFCs mutually reference each other ... from a References/ReferencedBy asymmetrical relationship to a SeeAlso symmetrical relationship. similar to the 3850/3851 mutual referencing ... 1320/1321 also mutually reference each other (i.e. 1321/md5 references 1320/md4 and 1320/md4 references 1321/md5).

a less obvious rule would involve converting References to an obsoleted RFC ... to the RFC that obsoleted it.

and to the question from crypto 2004 last week .... other than RFCs that have the string "md5" ... there is no obvious list of IETF standards that are dependent on MD5 (which might need reviewing based on talks/papers at last week's crypto 2004).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

A quote from Crypto-Gram

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A quote from Crypto-Gram
Newsgroups: sci.crypt
Date: Sun, 22 Aug 2004 17:54:04 -0600
"Stephen Sprunk" writes:
Why? Intel has the same feature, much more conveniently documented for those who wish to abuse the feature. People have also hacked Transmeta's code morphing engine, and that is much easier to abuse. I wouldn't be surprised if VIA and others producing mass-market chips didn't have the same mechanism as AMD and Intel.

You're screwed no matter who you buy from.


aka introduce a "copy" chip into gray market channels ... that has somewhat the look and feel of the more familiar chip ... but w/o any of the integrity features.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

FAST TCP makes dialup faster than broadband?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FAST TCP makes dialup faster than broadband?
Newsgroups: comp.protocols.tcp-ip
Date: Mon, 23 Aug 2004 10:40:36 -0600
Craig Partridge writes:
Indeed, the usual problem on a dialup link is TCP trying to drive them too hard (rather than underutilizing them), causing high loss. [Consider that with a round-trip delay of 250ms, at 56Kb, you want a TCP window size of about 16KB -- most systems come with the TCP window size configured larger).

i think that the same month that the slow-start presentation was made at an IETF meeting ... there was a paper in acm sigcomm that showed that window-based congestion flow control was non-stable in real-world environments. one problem was that acks could bunch on return ... resulting in effectively opening the whole window ... and then closing it again ... when one objective was to try and achieve a controlled dribble across the round-trip delay. a packet drop scenario is the window opening completely up .... sending a slew of packets ... and intermediate hops getting hit with multiple back-to-back packets (one objective of congestion control could be considered to be spreading out packet arrival at various nodes along the way ... instead of having high packet arrival bursts).

the scenario at that time was some sort of rate-based pacing ... which could be implemented as explicit control over the inter-packet transmission interval (i.e. which would tend to explicitly control back-to-back packet arrival at intermediate nodes). possibly one of the issues at the time ... was that a lot of the low-end machines had relatively primitive timer facilities ... making explicit control of inter-packet transmission intervals difficult.
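
as a sketch of what explicit control over the inter-packet interval amounts to ... send_packet() is just a stand-in here, and it assumes the host timer is fine-grained enough (the very thing the low-end machines of the time lacked):

  #include <time.h>

  /* rate-based pacing: compute the gap for the target rate and sleep it out
     between transmissions, instead of letting a window burst packets
     back-to-back */
  static void paced_send(int npackets, int pkt_bytes, double rate_bps,
                         void (*send_packet)(int seq))
  {
      double gap = (pkt_bytes * 8.0) / rate_bps;    /* seconds per packet */
      struct timespec ts;
      ts.tv_sec  = (time_t)gap;
      ts.tv_nsec = (long)((gap - ts.tv_sec) * 1e9);

      for (int seq = 0; seq < npackets; seq++) {
          send_packet(seq);
          nanosleep(&ts, NULL);   /* the explicit inter-packet transmission delay */
      }
  }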

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

FAST TCP makes dialup faster than broadband?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FAST TCP makes dialup faster than broadband?
Newsgroups: comp.protocols.tcp-ip
Date: Mon, 23 Aug 2004 10:52:52 -0600
.... oh yes, some past threads mentioning fast tcp

https://www.garlic.com/~lynn/2003j.html#1 FAST - Shame On You Caltech!!!
https://www.garlic.com/~lynn/2003j.html#2 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2003j.html#10 20th anv. of 1000th node on internal network
https://www.garlic.com/~lynn/2003j.html#46 Fast TCP
https://www.garlic.com/~lynn/2003l.html#42 Thoughts on Utility Computing?
https://www.garlic.com/~lynn/2003o.html#68 History of Computer Network Industry
https://www.garlic.com/~lynn/2003p.html#15 packetloss bad for sliding window protocol ?
https://www.garlic.com/~lynn/2004c.html#47 IBM 360 memory
https://www.garlic.com/~lynn/2004g.html#8 network history

.... and for something a little different ... i finally am adding references to my rfc index
https://www.garlic.com/~lynn/rfcietff.htm

recent post discussing some of the issues
https://www.garlic.com/~lynn/2004k.html#6

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Losing colonies

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Losing colonies
Newsgroups: alt.folklore.computers
Date: Mon, 23 Aug 2004 10:59:32 -0600
"Charlie Gibbs" writes:
But it won't necessarily be adopted. During one conversion from a Univac 9400 to a 90/30 I added a BXLE instruction (which the 9400 didn't support) to one of the programs. I came in one morning to find assembler reference manuals laid open all over a desk, which was surrounded by people who were trying to figure out exactly what this newfangled instruction did. These people didn't appreciate my disturbing their precious spaghetti code - especially since one of them, now the programming manager, originally wrote it years before (before he was mercifully moved up to a safe distance away from working code).

in my youth ... i would do various kinds of performance optimizations ... one was implementing features in no instructions ... i would re-arrange a whole bunch of other stuff so that the feature would automagically happen as a side-effect of other things being done in a specific way.

in a couple cases, ten years later ... i would get a call that some kernel fix had stopped things working (and possibly the next release couldn't ship) ... the problem was that some feature was occurring as an "implicit" side-effect of other things. it wasn't just a matter of figuring out what a new instruction did ... it was trying to figure out how it worked when there were no instructions being executed at all.

one of the lessons learned was that implicit ... may be a purely local optimization and usually turns out to be bad in the long term.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Losing colonies

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Losing colonies
Newsgroups: alt.folklore.computers
Date: Mon, 23 Aug 2004 14:23:21 -0600
Brian Inglis writes:
I was not stereotyping the French, far from it, just pointing out that maybe they're just as self-conscious about using other languages as the rest of us, and it's not arrogance.

long ago and far away ... i was attending a business meeting in paris ... and the person making the presentation was from the south of France ... he was interrupted every couple minutes by somebody from Paris to have his French corrected.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

FAST TCP makes dialup faster than broadband?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FAST TCP makes dialup faster than broadband?
Newsgroups: comp.protocols.tcp-ip
Date: Tue, 24 Aug 2004 10:02:10 -0600
Craig Partridge writes:
Pacing has its own problems -- subtle errors in round-trip time estimation can cause you to pace the packets too widely or too closely and each causes woes.

It is known that pacing during the initial slow start allows you to open the window much faster.


but rate-based pacing can be simply translated into inter-packet transmission delay ... and instead of using window count/size as the slow-start mechanism ... vary the inter-packet transmission delay. the transmission activity is then a lot more stable ... since it is insensitive to the vagaries of things like ACK patterns.

you don't even have to take into account the round-trip time estimation ... any more than you have to take into account the round-trip time estimation for window count/size pacing.

if you analyse congestion as packet arrival ... window size/count only indirectly controls packet arrival ... and because it only indirectly controls packet arrival (other than an upper limit on the maximum that would happen at one time) ... there tends to be a great deal more variance ... leading to less predictable behavior. translating rate-based pacing into inter-packet transmission delay can much more predictably control packet arrival (which has a much higher correlation with congestion) because it has much more predictable control over packet production.

including round-trip time estimation ... might be useful for an initial guess for slow-start. however one source of woes is conflicting control objectives ... aka 1) maximizing propagation delay masking against 2) congestion control.

i contend that (proper) rate-based pacing mapped into inter-packet transmission delay can be designed to perform no worse than window packet/size implementation ... if the weighting of the control objectives are the same ... and typically should be much better since the packet production rate will be more stable.

one of the woes is possibly the assumption that since round-trip time estimation is a time value ... and a rate-based pacing implementation with inter-packet transmission delay is also a time value ... some advantage can be taken because they are both time values.

the problem is that the round-trip time estimation is a time value that isn't associated with congestion. so a dynamic adaptive feed-back control algorithm that is attempting to compensate for congestion ... is in trouble if it gives too much weight to a characteristic that has nothing to do with congestion ... aka possibly under the mistaken belief that because the congestion mechanism has switched from max. window size to a time-based paradigm ... and round-trip time is also a time-based value ... the dynamic adaptive feed-back control algorithm may give undue consideration to things that have nothing to do with congestion.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

FAST TCP makes dialup faster than broadband?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FAST TCP makes dialup faster than broadband?
Newsgroups: comp.protocols.tcp-ip
Date: Tue, 24 Aug 2004 15:50:32 -0600
so the original ack&window model came from end-to-end links .... where the receiving node had some number of buffers ... and the link was potentially faster than the ability of the receiving node to process the packets. the receiving node pre-allocated some number of packet buffers ... and ack'ed when a packet had been successfully received and processed (freeing up the packet buffer for additional incoming packets). this tended to have the effect of packet buffers being idle for the round-trip-time. some optimization could be done by the receiving node by ack'ing before the buffer was cleared ... to minimize idle buffer time.

moving into a complex store&forward network with intermediate nodes ... it was possible, if packets arrived at the intermediate nodes too fast, that the buffers would be overrun, packets would be dropped and there would be congestion. the intermediate nodes had no immediate way of acking to pace packets to available buffers. however a congestion queueing model could be used to represent congestion and non-congestion situations. congestion occurred when the packet arrival rate exceeded the intermediate node's rate of processing &/or forwarding. in a no-congestion scenario ... clients could send packets as fast as the media would handle them. as intermediate node congestion increased, the objective would be to slow down the client packet transmit rate. One way of profiling that is by increasing inter-packet transmission delay ... from zero for no congestion ... to some arbitrarily large value ... depending on the congestion level at intermediate nodes.

the issue in such an intermediate node congestion queueing model ... is that it is totally independent of anything to do with individual client round-trip-times .... it is solely related to how fast the clients are injecting packets into the network.

so modifications of the ack&window model were attempted to address the intermediate node congestion problem. the direct (intermediate node congestion) problem is how fast the client is injecting packets into the network ... and in the ack&window model ... the first problem that appears is that the client gets to inject a full window's worth of packets immediately (as fast as the media accepts the bits). right off the bat, this has a high probability of saturating an intermediate node (in the congestion model) ... unless it is purely the no-congestion case and packets can be injected into the network as fast as the media will take them. So one of the things that slow-start can do is eliminate the startup problem of injecting a full window's worth of packets ... as fast as the media will take them. So, if you open up the window ... it isn't directly controlling the inter-packet injection interval into the network (which is the direct problem that is contributing to intermediate node congestion) ... just the total number of bits in any round-trip-time (and the client round-trip-time can be totally unrelated to the packet transmit/arrival rate causing intermediate node congestion). So the magic in slow-start is to try and coerce the ACK arrival interval back to the client (and keep fingers crossed) to have some uniform distribution (and be somewhat related to round-trip-time and the number of outstanding ACKs).

so a couple ACK down-sides

1) there is no mechanism that directly guarantees uniform arrival intervals of ACKs back to the client ... i.e. can slow-start avoid the startup congestion problem inherent in the window/ack paradigm ... and then can it subsequently achieve any sort of steady-state interval between ACKs arriving back to the client. In theory, ACKs can have relatively random return intervals back to the client .. in part because there is nothing directly controlling ACK return intervals.

2) the claim for the intermediate node congestion model is that the packet arrival rate at the intermediate node leading to congestion is independent of any client's round-trip-time. in theory, the best control that a purely window/ack approach can achieve is an interval between acks equal to the round-trip-time. however, there is nothing preventing intermediate node congestion from requiring client packet transmission rates to be less than one per round-trip-time (which can't be done with a purely ACK based implementation).

there is an ACK up-side

ACK return intervals can be pretty random ... which may tend to average to the round-trip-interval divided by the packet-window-size. the randomness of the return can be less than the client packet transmission interval and result in the client transmitting back-to-back packets, resulting in congestion. however, bursty congestion spikes can result in temporarily stopping ACK returns ... which stops client packet transmission. This would be considered a faster dynamic adaptive backoff mechanism (comparable to almost immediately seeing simultaneous transmission collisions on enet). A purely rate-based pacing implementation might possibly continue to transmit for several packets (continuing to contribute to the congestion spike) after a pure ACK implementation would have stopped.

so a dynamic adaptive rate-based pacing algorithm could use a number of early ACKs to possibly slowly reduce the inter-packet transmission delay ... but would quickly increase inter-packet transmission delay if it saw late ACKs.
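
a rough sketch of that adjustment rule (the constants are purely illustrative):

  /* shrink the inter-packet delay slowly after a run of on-time ACKs,
     grow it quickly on a late ACK */
  #define ONTIME_RUN 8

  static double adjust_gap(double gap, int ack_was_late, int *run)
  {
      if (ack_was_late) {
          *run = 0;
          return gap * 2.0;        /* back off fast on a late ACK */
      }
      if (++*run >= ONTIME_RUN) {
          *run = 0;
          return gap * 0.95;       /* creep the delay back down slowly */
      }
      return gap;
  }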

Part of the issue from a control theory standpoint is that ACK frequency can be affected by a lot of factors (like round-trip-time) which can have absolutely nothing to do with intermediate node congestion and packet arrival rates (at intermediate nodes). The one possible positive correlation between ACKs and congestion ... is that bursty congestion spikes can cause late ACK arrival ... so a rate-based pacing algorithm might take late ACK arrival as part of its feedback consideration.

If you take the model of intermediate node congestion being caused by the aggregate packet arrival rate ... then any client round-trip-time can be totally unrelated to any intermediate node packet arrival rate. the objective then becomes slowing down the aggregate packet arrival rate at intermediate nodes by slowing down the individual client packet transmission rates. The possible values of client packet transmission rates to achieve this (unrelated to round-trip-times) can even be less than one packet per round-trip-time (or an inter-packet transmission delay greater than the round-trip-time) ... which can't occur in the traditional window/ack paradigm.

An only slightly related analogy is that the back-off times for ethernet collisions don't have a lot to do with ethernet transmission elapsed times.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

I am an ageing techy, expert on everything. Let me explain the

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: I am an ageing techy, expert on everything. Let me explain the
Middle East
Newsgroups: alt.folklore.computers
Date: Tue, 24 Aug 2004 17:46:18 -0600
"Charlie Gibbs" writes:
In 1976, Robert L. Glass, writing under the pseudonym Miles Benson, published the best of a series of columns he had written for Computerworld. "The Universal Elixir and Other Computing Projects Which Failed" is a collection of amusing, interesting, and, ultimately, educational stories. The final section is titled "Thoughts on Failure Itself", and goes into the philosophical aspects of failure.

There's something terribly wrong with a society that refuses to allow people to fail. That's a lot of learning that never gets done, and unfortunately we're starting to see the consequences.


it may even be worse ... i've run across a saying that if you have never failed ... then it is only because you've never attempted anything. the implication being that it not only inhibits learning ... but in fact can stifle progress and change. this may be a more interesting societal issue than people failing. once, long ago and far away, i was advised that they would have forgiven me for being wrong but they could never forgive me for being right.

which then reminds me of boyd
https://www.garlic.com/~lynn/subboyd.html#boyd

some number of retellings of a boyd story about why large organizations became more & more rigid, less adaptable and less agile in the 70s and 80s.
https://www.garlic.com/~lynn/2001d.html#45 A beautiful morning in AFM.
https://www.garlic.com/~lynn/2001m.html#16 mainframe question
https://www.garlic.com/~lynn/2002q.html#33 Star Trek: TNG reference
https://www.garlic.com/~lynn/2003c.html#65 Dijkstra on "The End of Computing Science"

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

I am an ageing techy, expert on everything. Let me explain the

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: I am an ageing techy, expert on everything. Let me explain the Middle East
Newsgroups: alt.folklore.computers
Date: Tue, 24 Aug 2004 20:44:26 -0600
Stanford Business School: Studying business successes without also looking at failures tends to create a misleading or entirely wrong picture of what it takes to succeed. A faculty member examines undersampling of failure and finds companies that fail often do the same things as companies that succeed.


http://www.newswise.com/articles/view/506559/

with respect to another point in the above article ... I was once told that all successful startups in silicon valley shared (at least) one thing in common ... they had all changed their business plan at least once since starting .... implying adaptability and agility may be one of the most important characteristics.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

FAST TCP makes dialup faster than broadband?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FAST TCP makes dialup faster than broadband?
Newsgroups: comp.protocols.tcp-ip
Date: Wed, 25 Aug 2004 11:45:48 -0600
Craig Partridge writes:
That's not true.

There's plenty of evidence that the size of the queue in the intermediate node is a function of the round-trip times of the end systems.

For one, intuitive, way to think about it. The end systems can only learn about the congestion state of their respective paths by measuring the experience of their packets during the round-trip (or by some other mechanism that collects data from each hop in their path -- same results, as the last hop is still a full-rtt to get to and back).

So each end system has a reactive time that is, at its fastest, tied to the RTT. Thus, the experience of the intermediate system is tied to the RTT (unless you sample less often).


so I would assert that may mean that the first order arrival rate of packets at any intermediate node is independent of the RTT .... however, per my comment about the "ACK up-side" ... sending nodes will react and stop sending packets sooner due to late ACKs ... than if it is based on pure rate-based pacing. in fact, you may be able to improve on a long full-rtt by statistically tracking late ACKs ... and increasing inter-packet transmit delays ... as there is a statistical increase in late ACKs. You still don't actually have to know what the RTT is ... you just have to know that packets are being transmitted at a certain rate ... and the ACKs should be returning at approximately the same rate ... and if there is a burp in the return rate ... it would be indicative of a congestion spike. this would make the production rate of outgoing packets and the arrival rate of incoming ACKs still independent of RTT.

so if an intermediate node could exert direct back-pressure .... then the propagation delay of that signal to the packet producer is an issue. the worst case propagation delay for late ACKs is an early intermediate node on the upstream packet side ... all the ACKs already in the stream will arrive at the sending node on "time" and the sending node won't observe the late ACK until something less than a full-RTT ... and be able to take corrective action.

So the earliest possible indication of a congestion spike at some intermediate node is a late ACK ... which is behind a full stream of ACKs already in the stream. The duration of non-late ACKs preceding the late ACK is the propagation delay from the intermediate node to the receiving node and then from the receiver back to the sender. So RTT is related to the earliest that a sending node can see the first indication of a congestion spike and take some corrective action. Then there is the latency of packets already in flight ... the intermediate node won't see a fall-off from the sender's reaction to the late ack until those have drained ... which makes it a full RTT at the intermediate node for senders to adjust to increased congestion based on the late ACK indication at the sender.

so the reaction delay of senders to intermediate node congestion based on late ACK indications is a full RTT ... where the late ACK is the earliest indication of burps in the congestion.

so back to my comment about simple control theory saying that the production rate of packets by senders is independent of the RTT ... and based on calculating the production rate of packets by senders as being dependent on the congestion along the way (and not based on RTT). your comment is that the reaction time to change packet production rate based on congestion along the way .... is based on RTT. However, my original comment is that in a rate-based pacing scenario ... the calculation of the packet production rate should correlate to congestion and that RTT is independent of congestion (other than its effects on reaction time to changes in congestion). So if the packet production rate is calculated based purely on congestion ... and if control of the packet production rate is achieved via inter-packet transmission delays ... then the inter-packet transmission delay calculation should be based purely on congestion. If the calculation is based purely on congestion ... then it can't be based on RTT.

The second order effect is that reaction latency to changes in congestion is related to RTT.

So what would control theory have you do if it were to consider RTT as part of calculating the packet production rate (instead of solely considering congestion)? It would seem that it would need some predictive measure of the downside effects of reaction latency ... and the size of the reaction latency.

my original claim is that the cause of congestion is the packet production rate ... and that the sending nodes should be calculating their packet production rate based solely on congestion (and independent of RTT).

Furthermore, ACK/windowing algorithms try to approximate the packet production rate by assuming that it is possible to achieve some sort of homogeneous distribution of ACK arrivals back to the sender ... and that by changing the number of packets in flight based on RTT ... and there being a logical one-for-one between packets and ACKs ... RTT/number-of-packets will result in ACKs arriving at approximately that interval. However, the assertion is that RTT divided by number-of-packets is only second order control ... since there is no direct control over the ACK arrival rate back to the sender ... and therefore no direct control of the resulting packet transmission rate.

However, if you use direct rate-based pacing with something like explicit control over the inter-packet transmission interval ... you eliminate any calculation involving RTT ... and make the inter-packet transmission interval (and therefore the packet production rate) based solely on congestion ... and independent of RTT.

so RTT does have second order effects on the reaction time in dealing with changes in congestion. So control theory would say that you calculate the downside of congestion reaction latency and the probability of congestion spikes ... so how would that manifest itself in calculating the packet production rate? Traditional control theory leaves excess capacity in the system for handling resource spikes when reaction latency is more costly (more packets are lost adjusting to congestion spikes than are lost by leaving some transmission capacity idle). that translates into decreasing the packet production rate calculation by some percentage ... which would translate into increasing the inter-packet transmission delay. Basically you are looking at the cost of packet loss due to congestion spikes ... the probability of congestion spikes per unit time ... and the nominal congestion reaction delay ... which in aggregate translates into the probability of total packet loss because of congestion spikes and reaction delay (where RTT is sort of a 2nd order factor). You then somewhat assume that backing off the packet production rate, leaving excess capacity, can compensate for congestion spikes (and that there are fewer packets lost to packet production back-off than there would be to congestion spikes ... and the reaction delay).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

FAST TCP makes dialup faster than broadband?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FAST TCP makes dialup faster than broadband?
Newsgroups: comp.protocols.tcp-ip
Date: Wed, 25 Aug 2004 12:08:31 -0600
so we go back to the original premise ... control theory results in packet producers calculating packet production rate solely on congestion. if all the packet producers are calculating packet production rate based solely on congestion ... they aren't calculating it based on anything else ... including RTT. If the packet arrival rate at intermediate nodes is based on the packet production rate of the individual senders ... and if all the senders are using congestion as the sole factor in calculating packet production rate ... then to a first order approximation, the packet arrival rate at intermediate nodes is based on sender packet production rate ... packet production rate is independent of RTT ... and therefore intermediate node packet arrival rate is independent of RTT.

so in real life, intermediate nodes are seeing traffic spikes which require packet production rate adjustments at the senders ... and RTT affects the reaction time.

one of the original statements was that w/o direct control over ACK arrival intervals ... ACK arrival patterns may result in opening up a large portion of a window at a single moment, resulting in several back-to-back packets being transmitted ... which could appear as transient congestion at an intermediate node ... requiring reactive adjustments. I believe this is one of the non-stable scenarios in the early papers.

so one conjecture is that explicit control of packet production rate may significantly mitigate the occurrence of such transient congestion (and also mitigate needing to be aware of RTT adjustment delay effects on transient changes).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

FAST TCP makes dialup faster than broadband?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FAST TCP makes dialup faster than broadband?
Newsgroups: comp.protocols.tcp-ip
Date: Wed, 25 Aug 2004 12:22:04 -0600
and small footnote

control algorithms will tend to adjust the rate of change proportional to the feedback latency (driving cars, martian rovers, etc).

while congestion may be the sole factor used by producers in calculating their packet production rate (independent of RTT) ... the feedback latency is proportional to RTT ... so the frequency with which increases in packet production rate are made should be proportional to the RTT feedback interval. So the rate of packet production is based solely on congestion and independent of RTT ... but the rate of increases in packet production can be proportional to the RTT feedback interval.

however, as per the previous conjecture about the non-stable nature of ACK-based infrastructures for handling congestion ... that instability may in fact be a significant cause behind spikes in intermediate node congestion ... requiring significant changes in packet production rate by the producers (where RTT becomes a significant factor in the reaction delay of the infrastructure).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

FAST TCP makes dialup faster than broadband?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FAST TCP makes dialup faster than broadband?
Newsgroups: comp.protocols.tcp-ip
Date: Wed, 25 Aug 2004 14:38:50 -0600
and getting really carried away ... some comments on dynamic adaptive feedforward control algorithms ... as well as non-stable negative feedback effects.

if the congestion control mechanism was going to include predictive congestion (as well as past congestion) ... i contend that something with high correlation is the number of intermediate nodes ... not RTT elapsed time; aka I could have a direct double-hop satellite link with no intermediate nodes and therefore no intermediate node congestion ... with a significantly higher RTT elapsed time than a 30-hop terrestrial path. So a dynamic adaptive feedforward congestion control algorithm could figure in not only past congestion behavior for controlling packet transmit rate ... but also use likely hop-count as a probability indication of possible future congestion (but not RTT elapsed time as a probability of congestion ... although as stated RTT elapsed time does affect feedback latency ... and therefore should be a factor in the control algorithm's decisions on rate of change ... as opposed to pure rate). RTT elapsed time may have some correlation with the number of intermediate nodes ... but would be considered only a second order effect for predicting congestion potential (since you can actually have very high RTT elapsed time with no intermediate node congestion issues).

so, one of the conjectures is that ACK-based control is non-stable and possibly even has negative feedback. The assertion is that the transmission of multiple back-to-back packets, possibly when a full window opens, can result in a spike in congestion and slow-down at an intermediate node. So the observation is that any spike in congestion and slow-down at an intermediate node can result in some number of ACKs backing up at the congested intermediate node. Then if the congestion decreases ... any backed up ACKs may be released in bunches.

The release of bunched, backed-up ACKs (from an intermediate node that saw transient congestion spikes) means that the arrival of bunched ACKs at the sending node will open up multiple packets in the sending window ... resulting in the sending node being able to send multiple back-to-back packets ... which in turn can result in transient congestion spikes at intermediate nodes. The issue then is that any cause of a transient congestion spike at intermediate nodes ... may result in ACK return bunching ... which then results in a lull in packet transmission at the sender ... followed by multiple back-to-back transmissions at the sender. The intermediate nodes then could get into a negative-feedback, non-stable oscillation of packet transmission lulls followed by packet transmission peaks. From a control algorithm standpoint, given this ACK characteristic, the non-stable oscillation may continue until eventually packets are dropped and sending nodes go into some sort of back-off ... until the congestion builds up and it starts the cycle all over again.

There are some second order effects as the number of intermediate nodes increases ... random perturbations across a large number of intermediate nodes in the same path ... can have a dampening effect on any strong negative feedback oscillations caused by ACK bunching. On the other hand ... unless the random perturbations are really significant ... multiple intermediate nodes might result in amplifying the negative feedback oscillations.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Vintage computers are better than modern crap !

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Vintage computers are better than modern crap !
Newsgroups: alt.folklore.computers
Date: Thu, 26 Aug 2004 08:56:02 -0600
jmfbahciv writes:
[puzzled emoticon here] It's a law that all buffers are going to be too small. Part of the programming job is to include this eventuality.

when we were working with this small client/server startup in menlo park that wanted to do payment transactions on the server ... one of the things we mentioned was that it can take ten times the effort and 4-10 times the programming to take a well crafted and well tested application and turn it into a service.

misc. stuff about this thing now called e-commerce
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

other posts about effort for application -> service
https://www.garlic.com/~lynn/2001f.html#75 Test and Set (TS) vs Compare and Swap (CS)
https://www.garlic.com/~lynn/2001n.html#91 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#93 Buffer overflow
https://www.garlic.com/~lynn/2002b.html#59 Computer Naming Conventions
https://www.garlic.com/~lynn/2002n.html#11 Wanted: the SOUNDS of classic computing
https://www.garlic.com/~lynn/2003g.html#62 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003j.html#15 A Dark Day
https://www.garlic.com/~lynn/2003p.html#37 The BASIC Variations
https://www.garlic.com/~lynn/2004b.html#8 Mars Rover Not Responding
https://www.garlic.com/~lynn/2004b.html#48 Automating secure transactions

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Vintage computers are better than modern crap !

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Vintage computers are better than modern crap !
Newsgroups: alt.folklore.computers
Date: Thu, 26 Aug 2004 10:22:38 -0600
Alan Balmer writes:
You don't, but you can put it on a short, visible cable, or not allow it to be used. You can't prevent a braindead operator from pasting his password on the wall, either. The solution is still the same - security is not something that is magically implanted in a computer system by installing some wonderful program. It's an entire process, involving the system, its peripherals, its connectivity, and the people who use it. So, if it's insecure, don't do it.

Anyway, this is getting to be quite a stretch from the issue of printing more of a user task's buffer than expected.


a lot of security has to do with authentication. the long-time scenario has been something you know authentication; possibly really shared-secrets (like passwords) ... but it may be something as simple as mother's maiden name.

the shared-secret password scenario for a long time has been that each security domain wanted a unique password (different passwords at your employer, your bank, and your local operation-in-a-garage ISP). part of the problem is that each security domain appeared to think they were the only one ... discounting the fact that a person now may have to manage scores of unique passwords ... which are difficult to remember and possibly change every month. that has led to problems like people having to write down their passwords and possibly store them close to where they might use them. old joke
https://www.garlic.com/~lynn/2001d.html#52 A beautiful morning in AFM

another issue is that possibly 1/3rd of exploits involve social engineering .... getting people to give up information that they shouldn't (like passwords and/or other authentication codes).

one of the issues in going to two-factor authentication ... say some form of hardware token ... is that it makes the social engineering task more difficult, since it now involves convincing somebody to give up a physical object ... in addition to just information.

... there is this old saying about there not being any truly hard technical problems ... i.e. all the really hard problems typically arise from some form of people issues.

random past social engineering posts
https://www.garlic.com/~lynn/aadsm3.htm#cstech10 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aadsm3.htm#kiss8 KISS for PKIX
https://www.garlic.com/~lynn/aadsm8.htm#softpki3 Software for PKI
https://www.garlic.com/~lynn/99.html#235 Attacks on a PKI
https://www.garlic.com/~lynn/aadsm14.htm#9 "Marginot Web" (SSL, payments, etc)
https://www.garlic.com/~lynn/aadsm14.htm#32 An attack on paypal
https://www.garlic.com/~lynn/aadsm14.htm#33 An attack on paypal
https://www.garlic.com/~lynn/aadsm14.htm#35 The real problem that https has conspicuously failed to fix
https://www.garlic.com/~lynn/aadsm16.htm#2 Electronic Safety and Soundness: Securing Finance in a New Age
https://www.garlic.com/~lynn/aadsm16.htm#7 The Digital Insider: Backdoor Trojans ... fyi
https://www.garlic.com/~lynn/aadsm17.htm#47 authentication and authorization ... addenda
https://www.garlic.com/~lynn/aadsm17.htm#58 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/2002g.html#82 Future architecture
https://www.garlic.com/~lynn/2002j.html#40 Beginner question on Security
https://www.garlic.com/~lynn/2003i.html#1 Two-factor authentication with SSH?
https://www.garlic.com/~lynn/2003j.html#8 A Dark Day
https://www.garlic.com/~lynn/2003l.html#2 S/360 Engineering Changes
https://www.garlic.com/~lynn/2003m.html#25 Microsoft Internet Patch
https://www.garlic.com/~lynn/2003m.html#51 public key vs passwd authentication?
https://www.garlic.com/~lynn/2003o.html#9 Bank security question (newbie question)
https://www.garlic.com/~lynn/2003o.html#50 Pub/priv key security
https://www.garlic.com/~lynn/2003o.html#57 Pub/priv key security
https://www.garlic.com/~lynn/2004.html#30 Threat of running a web server?
https://www.garlic.com/~lynn/2004f.html#31 MITM attacks
https://www.garlic.com/~lynn/2004j.html#37 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004j.html#38 Vintage computers are better than modern crap !

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Public key authentication defeats passwd age warning

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Public key authentication defeats passwd age warning.
Newsgroups: comp.security.ssh
Date: Fri, 27 Aug 2004 09:31:05 -0600
dtucker@dodgy.net.au (Darren Tucker) writes:
It would be possible to generate the messages for non-password auths too but I'm not sure it makes sense. If you're not using the password at all, is it relevant that it's expired? And what do you do if it is? Deny the login even though the credentials used for the authentication (ie the public key) are perfectly fine? Or generate a message of the form "your password expired X days ago"?

the basic premise of passwords is that they are shared-secrets and vulnerable to all sorts of attacks ... where obtaining knowledge of the passwords can lead to exploits. password expiry supposedly bounds the duration of any such exploit.

public keys can be substituted in place of passwords ... put them in the table entry and identify digital signature as the method of authentication (instead of password compare). if the system authentication file supported that ... then ssh could stuff the public key there in lieu of password ... instead of needing its own table.
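a rough sketch of that kind of table entry (python; the table layout, the function names, and the use of the third-party cryptography package's ed25519 support are purely illustrative assumptions, not how ssh or any particular system actually implements it):

# illustrative: one authentication table that registers either a password
# (shared-secret, compared) or a public key (digital signature verified)
import hashlib, os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

auth_table = {}   # user -> ("password", salt, hash) or ("publickey", key)

def register_password(user, password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    auth_table[user] = ("password", salt, digest)

def register_public_key(user, public_key):
    # same registration business process, different authentication material
    auth_table[user] = ("publickey", public_key)

def authenticate(user, password=None, challenge=None, signature=None):
    entry = auth_table.get(user)
    if entry is None:
        return False
    if entry[0] == "password":            # shared-secret compare
        _, salt, digest = entry
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                                   100_000) == digest
    _, public_key = entry                 # digital signature verify
    try:
        public_key.verify(signature, challenge)
        return True
    except InvalidSignature:
        return False

# the client keeps the private key; the server only ever stores the public key
priv = Ed25519PrivateKey.generate()
register_public_key("lynn", priv.public_key())
challenge = os.urandom(32)
assert authenticate("lynn", challenge=challenge, signature=priv.sign(challenge))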

since knowledge of a public key isn't subject to the same sorts of exploits as passwords ... the requirement for frequent changes is severely mitigated as a means of bounding (undiscovered) exploits.

there are however possibly two completely different issues here

1) confusing digital credentials with public keys. public key registration can easily be done the way ssh does it ... which just registers the authentication material ... however, it registers a public key for performing authentication instead of registering a password for authentication & maintains the same business process. because of the vagaries of current "password" oriented authentication files ... ssh resorts to its own registration file. note that this is a completely different business process for authentication material than the typical digital credential business process. while the ssh-like method preserves the business process where the relying party registers the authentication material ... the digital credential model originated so that the registration process could be outsourced to 3rd parties.

2) there are still exploits on public/private keys that are comparable to password-based infrastructure exploits. part of the expiry process is to bound the life of a specific password exploit (because of its vulnerability to being learned). a lot of the public key credentialling infrastructures distract attention by focusing on the strength of public key operations and/or the credentialling registration process. however, a direct equivalent to attackers acquiring a password is attackers acquiring the private key. if you were looking at the use of expiry as a method of bounding an exploit ... you would compare the probability of an attacker obtaining a password (via all possible mechanisms) against the difficulty of an attacker obtaining a private key (via all possible mechanisms). If you determined that it was four times harder for an attacker to obtain a user's private key ... you might then choose to make the validity period for a registered public key four times that of a registered password (however, if it was 20 times harder for an attacker to obtain a private key ... you might make a public key validity period 20 times longer). A simple password harvesting attack might be to obtain the file where the user stores all their passwords. A possibly equivalent attack on the private key is to obtain the file where the user stores their private key(s). If the ease of performing such attacks is roughly equivalent and such attacks represent the major vulnerability ... then it could be that the expiration of a password and the expiration of a public key would be similar.
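the bounding argument is just proportional arithmetic ... a toy illustration (all the numbers are made up):

# toy illustration: scale validity period by relative difficulty of compromise
password_validity_days = 30      # assumed password-change policy
relative_difficulty = 4          # assume private key ~4x harder to obtain
public_key_validity_days = password_validity_days * relative_difficulty
print(public_key_validity_days)  # 120 days; at 20x harder it would be 600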

one possible issue in ssh vis-a-vis passwords ... if the major password exploit is eavesdropping on the authentication process ... and password authentication is no longer used (because ssh is using digital signatures) ... then a major factor in needing password expiry for bounding exploits is eliminated (the remaining artifact is that ssh registration of public keys for authentication hasn't been integrated into the system's overall business process for registering authentication material). however, if the major vulnerability is acquisition of client-stored files containing authentication material (either passwords or private keys) ... then there could be a need for more consistent expiry for both passwords and private keys.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of
 ASCII,Invento
Newsgroups: alt.folklore.computers
Date: Fri, 27 Aug 2004 13:24:12 -0600
mwojcik@newsguy.com (Michael Wojcik) writes:
And as for DEC, Prime, DG, and so forth: nothing lasts forever, and it's quite a stretch to blame their demise on state taxes without rather strong evidence.

i've constantly attributed the big rise in minicomputers during (at least) the late 70s and early 80s ... to the cost of computing dropping below some threshold, opening up a huge new market .... and that market started moving to large workstation servers and large PCs in the mid-80s. since that market segment appeared to have opened up in the late 70s because of price sensitivity ... it wouldn't be a surprise that it continued to be somewhat price sensitive. this was possibly the start of customers ordering systems in multiple hundreds at a time.

the cp67/vm370 development group split off from the science center and then when they outgrew available space in 545 tech sq, the group moved out to the old SBC bldg. in burlington mall (sbc having been transferred to cdc).

in the mid-70s, the group was told that there would be no more vm/370 product (for customers), that burlington mall was being shut down ... and that everybody had to move to POK to work on the internal-only vmtool in support of mvs/xa development ... which saw some number of the vm/370 development group going to dec and prime (i don't remember any going to dg).

misc. past posts about departmental servers and that market segment explosion.
https://www.garlic.com/~lynn/96.html#16 middle layer
https://www.garlic.com/~lynn/96.html#33 Mainframes & Unix
https://www.garlic.com/~lynn/2001m.html#15 departmental servers
https://www.garlic.com/~lynn/2001n.html#23 Alpha vs. Itanic: facts vs. FUD
https://www.garlic.com/~lynn/2001n.html#79 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2002.html#2 The demise of compaq
https://www.garlic.com/~lynn/2002.html#7 The demise of compaq
https://www.garlic.com/~lynn/2002d.html#4 IBM Mainframe at home
https://www.garlic.com/~lynn/2002h.html#52 Bettman Archive in Trouble
https://www.garlic.com/~lynn/2002i.html#30 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002j.html#4 HONE, ****, misc
https://www.garlic.com/~lynn/2002j.html#7 HONE, ****, misc
https://www.garlic.com/~lynn/2002j.html#34 ...killer PC's
https://www.garlic.com/~lynn/2002j.html#66 vm marketing (cross post)
https://www.garlic.com/~lynn/2002n.html#66 Mainframe Spreadsheets - 1980's History
https://www.garlic.com/~lynn/2003c.html#23 difference between itanium and alpha
https://www.garlic.com/~lynn/2003d.html#64 IBM was: VAX again: unix
https://www.garlic.com/~lynn/2003n.html#46 What makes a mainframe a mainframe?
https://www.garlic.com/~lynn/2003o.html#13 When nerds were nerds
https://www.garlic.com/~lynn/2003o.html#24 Tools -vs- Utility
https://www.garlic.com/~lynn/2003p.html#38 Mainframe Emulation Solutions
https://www.garlic.com/~lynn/2004.html#46 DE-skilling was Re: ServerPak Install via QuickLoad Product
https://www.garlic.com/~lynn/2004d.html#56 If there had been no MS-DOS
https://www.garlic.com/~lynn/2004f.html#39 Who said "The Mainframe is dead"?
https://www.garlic.com/~lynn/2004j.html#37 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004j.html#57 Monster(ous) sig (was Re: Vintage computers are better

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Timeless Classics of Software Engineering

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Timeless Classics of Software Engineering
Newsgroups: comp.arch,comp.databases.theory,comp.software-eng,comp.lang.c++,comp.ai.philosophy
Date: Fri, 27 Aug 2004 14:24:46 -0600
eric.nospam.hamilton@hp.com (Eric Hamilton) writes:
Thanks for a stimulating topic.

I heartily agree that Mythical Man Month is essential reading for anyone who wants to understand large scale software projects.

The other essential on my book case is Lakos' "Large Scale C++ Software Design". It's applicable to any language and has enough rationale that's grounded in real development practices and the problems of large scale projects that I think it's relevant to the original topic.

A few years ago, I happened to reread Brooks and wrote up a collection his insights that resonated with me. I've attached it below in hopes of whetting the appetite of anyone who hasn't already read it and as a reminder for those who haven't reread it recently. I encourage everyone to (re)read the full book.


one of boyd's observations about large US corporations, starting at least in the 70s, was their rigid, non-agile, non-adaptable operations. he traced it back to the training that a lot of young people (who were now coming into positions of authority) had received in ww2 on how to operate large efforts ... and he contrasted it with Guderian and the blitzkrieg.

Guderian had a large body of highly skilled and experienced people ... for whom he outlined general strategic objectives, leaving the tactical decisions to the person on the spot .... he supposedly proclaimed verbal orders only ... on the theory that auditors going around after the fact would not find a paper trail to blame anybody when battle execution had glitches. the theory was that the trade-off of letting experienced people on the spot feel free to make decisions w/o repercussions more than offset any possibility that they might make mistakes.

boyd contrasted this with the much less experienced american army, with few really experienced people, which was structured for heavy top-down direction (to work around the skill scarcity) ... rigid top-down direction with little local autonomy, relying on logistics and managing a huge resource advantage (in some cases 10:1).

part of the issue is that rigid, top-down operations are used to manage large pools of unskilled resources. on the other hand, rigid top-down operations can negate any advantage of a skilled resource pool (since the skilled people will typically be prevented from exercising their own judgement).

so in the Guderian scenario .... you are able to lay out strategic objectives and then allow a great deal of autonomy in achieving tactical objectives (given a sufficient skill pool and clear strategic direction).

random boyd refs:
https://www.garlic.com/~lynn/subboyd.html#boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Timeless Classics of Software Engineering

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Timeless Classics of Software Engineering
Newsgroups: comp.arch,comp.databases.theory,comp.software-eng,comp.lang.c++,comp.ai.philosophy
Date: Fri, 27 Aug 2004 15:10:59 -0600
eric.nospam.hamilton@hp.com (Eric Hamilton) writes:
Harlan Mills proposed "surgical team" approach. [Not applicable everywhere.]

Conceptual integrity: - Analogy to architectural unity of Reims cathedral vs. others that were "improved" inconsistently.

- "I will contend that conceptual integrity is the most important consideration in system design. It is better to have a system omit certain anomalous features and improvements, but to reflect one set of design ideas, than to have one that contains many good but independent and uncoordinated ideas."

- The purpose of a programming system is to make a computer easy to use. [We may modify purpose to be to make it easy to do the things that our customers need done.] - Ratio of function to conceptual complexity is the ultimate test of system design. - For a given level of function, that system is best in which one can specify things with the most simplicity and straightforwardness.


i was at a talk that harlan gave at the 1970 se symposium ... that year it was held in DC (which was easy for harlan since he was local in fsd) ... close to the river on the virginia side (marriott? near a bridge ... I have recollections of playing hooky one day and walking across the bridge to the smithsonian).

it was all about the super programmer and the librarian .... i think the super programmer was a reaction to the large low-skilled hordes ... and the librarian was to take some of the administrative load off the super programmer.

i remember years later somebody explaining that managers tended to spend 90% of their time with the 10% least productive people ... and that 90% of the work was frequently done by the 10% most productive people; it was unlikely that anything a manager did was going to significantly improve the 10% least productive members .... however, if they spent 90% of their time helping remove obstacles for the 10% most productive ... and even if that only improved things by 10% ... that would be the most beneficial thing that they could do. this was sort of the librarian analogy from harlan ... that managers weren't there to tell the highly skilled people what to do ... managers were to facilitate and remove obstacles for their most productive people.

this is somewhat more consistent with one of boyd's talks on the organic design for command and control.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Timeless Classics of Software Engineering

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Timeless Classics of Software Engineering
Newsgroups: comp.arch,comp.databases.theory,comp.software-eng,comp.lang.c++,comp.ai.philosophy
Date: Fri, 27 Aug 2004 20:00:22 -0600
Anne & Lynn Wheeler writes:
i was at a talk that harlan gave at the 1970 se symposium ... that year it was held in DC (which was easy for harlan since he was local in fsd) ... close to the river on the virginia side (marriott? near a bridge ... I have recollections of playing hooky one day and walking across the bridge to the smithsonian).

this marriott has bugged my memory across some period of posts
https://www.garlic.com/~lynn/2000b.html#20 How many Megaflops and when?
https://www.garlic.com/~lynn/2000b.html#24 How many Megaflops and when?
https://www.garlic.com/~lynn/2000b.html#25 How many Megaflops and when?
https://www.garlic.com/~lynn/2000c.html#64 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2001h.html#48 Whom Do Programmers Admire Now???
https://www.garlic.com/~lynn/2002i.html#49 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002q.html#51 windows office xp
https://www.garlic.com/~lynn/2003g.html#2 Share in DC: was somethin' else
https://www.garlic.com/~lynn/2003k.html#40 Share lunch/dinner?
https://www.garlic.com/~lynn/2004k.html#25 Timeless Classics of Software Engineering

so doing some searching ... this is a picture of approx. what i remember
https://web.archive.org/web/20051120001704/http://www.hostmarriott.com:80/ourcompany/timeline_twin.asp?page=timeline

this lists an ieee conference at twin bridge marriott, washington dc in '69
http://www.ecs.umass.edu/temp/GRSS_History/Sect6_1.html

this lists first marriott motor hotel, twin bridges, washington dc
https://web.archive.org/web/20060303132716/http://www.hrm.uh.edu:80/?PageID=185

and this has a reference to the site of the former Twin Bridges Marriott having been razed several years ago
http://www.washingtonpost.com/wp-srv/local/counties/arlngton/longterm/wwlive/crystal.htm

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Vintage computers are better than modern crap !

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Vintage computers are better than modern crap !
Newsgroups: alt.folklore.computers
Date: Sat, 28 Aug 2004 07:43:30 -0600
... from not so long ago and far away .... notice that official organizations would be issuing (public) keys
For Immediate Release

Secure NCSA Mosaic Establishes Necessary Framework for Electronic Commerce on the Internet

PALO ALTO, Calif., April 12, 1994 -- Enterprise Integration Technologies (EIT), the National Center for Supercomputing Applications (NCSA) at the University of Illinois and RSA Data Security today announced agreements to jointly develop and distribute a secure version of NCSA Mosaic, the popular point-and-click interface that enables easy access to thousands of multimedia information services on the Internet.

The announcement was made in conjunction with the launch of CommerceNet, a large-scale market trial of electronic commerce on the Internet. Under the agreements, EIT will integrate its Secure-HTTP software with public key cryptography from RSA into NCSA Mosaic Clients and World Wide Web (WWW) servers. WWW is a general-purpose architecture for information retrieval comprised of thousands of computers and servers that is available to anyone on Internet. The enhancements will then be made available to NCSA for widespread public distribution and commercial licensing.

Jay M. Tenenbaum, chief executive officer of EIT, believes secure NCSA Mosaic will help unleash the commercial potential of the Internet by enabling buyers and sellers to meet spontaneously and transact business.

"While NCSA Mosaic makes it possible to browse multimedia catalogs, view product videos, and fill out order forms, there is currently no commercially safe way to consummate a sale," said Tenenbaum. "With public key cryptography, however, one can authenticate the identity of trading partners so that access to sensitive information can be properly accounted for."

This secure version of NCSA Mosaic allows users to affix digital signatures which cannot be repudiated and time stamps to contracts so that they become legally binding and auditable. In addition, sensitive information such as credit card numbers and bid amounts can be securely exchanged under encryption. Together, these capabilities provide the foundation for a broad range of financial services, including the network equivalents of credit and debit cards, letters of credit and checks. In short, such secure WWW software enables all users to safely transact day-to-day business involving even their most valuable information on the Internet.

According to Joseph Hardin, director of the NCSA group that developed NCSA Mosaic, over 50,000 copies of the interface software are being downloaded monthly from NCSA's public server -- with over 300,000 copies to date. Moreover, five companies have signed license agreements with NCSA and announced plans to release commercial products based on NCSA Mosaic.

"This large and rapidly growing installed base represents a vast, untapped marketplace," says Hardin. The availability of a secure version of NCSA Mosaic establishes a valid framework for companies to immediately begin large-scale commerce on the Internet."

Jim Bidzos, president of RSA, sees the agreement as the beginning of a new era in electronic commerce, where companies routinely transact business over public networks.

"RSA is proud to provide the enabling public key software technology and will make it available on a royalty-free basis for inclusion in NCSA's public distribution of NCSA Mosaic," said Bidzos. RSA and EIT will work together to develop attractive licensing programs for commercial use of public key technology in WWW servers."

At the CommerceNet launch, Allan M. Schiffman, chief technical officer of EIT, demonstrated a working prototype of secure NCSA Mosaic, along with a companion product that provides for a secure WWW server. The prototype was implemented using RSA's TIPEM toolkit.

"In integrating public key cryptography into NCSA Mosaic, we took great pains to hide the intricacies and preserve the simplicity and intuitive nature of NCSA Mosaic," explained Schiffman.

Any user that is familiar with NCSA Mosaic should be able to understand and use the software's new security features. Immediately to the left of NCSA's familiar spinning globe icon, a second icon has been inserted that is designed to resemble a piece of yellow paper. When a document is signed, a red seal appears at the bottom of the paper, which the user can click on to see the public key certificates of the signer and issuing agencies. When an arriving document is encrypted, the paper folds into a closed envelope, signifying that its information is hidden from prying eyes. When the user fills out a form containing sensitive information, there is a 'secure send' button that will encrypt it prior to transmission.

Distribution of Public Keys

To effectively employ public-key cryptography, an infrastructure must be created to certify and standardize the usage of public key certificates. CommerceNet will certify public keys on behalf of member companies, and will also authorize third parties such as banks, public agencies, industry consortia to issue keys. Such keys will often serve as credentials, for example, identifying someone as a customer of a bank, with a guaranteed credit line. Significantly, all of the transactions involved in doing routine purchases from a catalog can be accomplished without requiring buyers to obtain public keys. Using only the server's public key, the buyer can authenticate the identity of the seller, and transmit credit card information securely by encrypting it under the seller's public key. Because there are far fewer servers than clients, public key administration issues are greatly simplified.

Easy Access to Strong Security

To successfully combine simplicity of operation and key administration functions with a high level of security that can be accessible to even non-sophisticated users, significant changes were necessary for existing WWW security protocols. EIT developed a new protocol called Secure-HTTP for dealing with a full range of modern cryptographic algorithms and systems in the Web.

Secure-HTTP enables incorporation of a variety of cryptographic standards, including, but not limited to, RSA's PKCS-7, and Internet Privacy Enhanced Mail (PEM), and supports maximal interoperation between clients and servers using different cryptographic algorithms. Cryptosystem and signature system interoperation is particularly useful between U.S. residents and non-U.S. residents, where the non-U.S. residents may have to use weaker 40-bit keys in conjunction with RSA's RC2 (TM) and RC4 (TM) variable keysize ciphers. EIT intends to publish Secure-HTTP as an Internet standard, and work with others in the WWW community to create a standard that will encourage using the Web for a wide variety of commercial transactions.

Availability

EIT will make Secure NCSA Mosaic software available at no charge to CommerceNet members in September and NCSA will incorporate these secure features in future NCSA Mosaic releases.

Enterprise Integration Technologies Corp., of Palo Alto, Calif., (EIT) is an R&D and consulting organization, developing software and services that help companies do business on the Internet. EIT is also project manager of CommerceNet.

The National Center for Supercomputer Applications (NCSA), developer of the Mosaic hypermedia browser based at the University of Illinois in Champaign, Ill., is pursuing a wide variety of software projects aimed at making the Internet more useful and easier to use.

RSA Data Security, Inc., Redwood City, Calif., invented Public Key Cryptography and performs basic research and development in the cryptographic sciences. RSA markets software that facilitates the integration of their technology into applications.

Information on Secure NCSA Mosaic can be obtained by sending e-mail to shttp-info@eit.com.

Press Contact:

Nancy Teater Hamilton Communications Phone: (415) 321-0252 Fax: (415) 327-4660 Internet: nrt@hamilton.com


--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Vintage computers are better than modern crap !

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Vintage computers are better than modern crap !
Newsgroups: alt.folklore.computers
Date: Sat, 28 Aug 2004 07:53:09 -0600
and at:
ARPA HIGH PERFORMANCE COMPUTING AND COMMUNICATIONS SYMPOSIUM
March 15-18, 1994
Radisson Plaza Hotel at Mark Center
Alexandria, Virginia

Sponsored by: Advanced Research Projects Agency Computing Systems Technology Office


... there was the following session
INFORMATION INFRASTRUCTURE SERVICES
Chair: Dr. Randy Katz, CSTO

Prof. David Gifford, MIT
Connecting the NII to the U.S. Financial System
A payment system is central to NII commerce, and it would be highly desirable to allow NII commerce to be conducted with existing financial instruments such as demand deposit accounts and credit cards. We will discuss the technical issues and fraud risks of directly connecting the NII to existing banking systems.

Prof. David Patterson, UCBerkeley
"TeraBYTES are more important than TeraFLOPS"
Processors are getting faster while disks are getting smaller rather than faster. This talk first describes the results of the RAID project (Redundant Arrays of Inexpensive Disks), which offers much greater performance, capacity, and reliability from I/O systems. We then look at utilizing small helical scan tapes, such as video tapes, to offer terabytes of storage for the price of ten desktop computers. I believe a factor of 1000 increase in storage capacity available will have a much greater impact on society than a factor of 1000 increase in processing speed for a gaggle of scientists.

Prof. Daniel Duchamp, Columbia
DYNAMIC APPLICATION PARTITIONING
Portable computers can move, with little or no warning, from one network attachment to another.

If the attachments provide substantially different bandwidth, then one design challenge is how to "cut" the overall system into client and server halves. Since the network bandwidth between client and server is highly variable, it is tempting to make the cut variable, so that at different times different partitions -- requiring different bandwidths -- prevail. We describe present techniques and future possibilities for dynamic application partitioning in support of mobile computing.

Dr. Clifford Neuman, USC-ISI
Information, Payments, and Electronic Commerce on the NII
This presentation will describe information and payment services that must be provided to support the provision of for-hire services on the NII.


--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

CDC STAR-100

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC STAR-100
Newsgroups: alt.folklore.computers
Date: Sun, 29 Aug 2004 09:46:10 -0600
jsavard@excxn.aNOSPAMb.cdn.invalid (John Savard) writes:
Apparently it was James Thornton who designed the STAR; Cray worked on the 8600 instead during the early portion of STAR development before he left.

The Cray-1, unlike the STAR, had vector registers, and stride - according to some sources, but others credit the STAR with scatter-gather capabilities in hardware.


thornton left later than cray ... with some number of other people to form network systems corp.

random past ref:
https://www.garlic.com/~lynn/2002i.html#13 CDC6600 - just how powerful a machine was it?

for a lot more drift, about the same time as the above post, a post here in a.f.c. on putting up a scan of thornton's book
http://groups.google.com/groups?q=%2Bthornton+%2Bcdc+%2Bscan&hl=en&lr=&ie=UTF-8&selm=d266ec61.0206040856.4527dd4e%40posting.google.com&rnum=3

network systems did high speed and heterogeneous interconnect

a long time later, when i had done rfc 1044 support ... i got involved in tuning at cray research .... somewhat in conjunction with hsdt ... hsdt collection (things like rate-based pacing, dynamic adaptive control, optimal thruput, etc)
https://www.garlic.com/~lynn/subnetwork.html#hsdt

the original VPN ... i saw it in a guy's house (he worked for nsc) ... it was for a secure link to work ... he then introduced it at the ietf router working group and it is now called VPN.

random past rfc 1044 posts:
https://www.garlic.com/~lynn/93.html#28 Log Structured filesystems -- think twice
https://www.garlic.com/~lynn/96.html#14 mainframe tcp/ip
https://www.garlic.com/~lynn/96.html#15 tcp/ip
https://www.garlic.com/~lynn/96.html#17 middle layer
https://www.garlic.com/~lynn/98.html#34 ... cics ... from posting from another list
https://www.garlic.com/~lynn/98.html#49 Edsger Dijkstra: the blackest week of his professional life
https://www.garlic.com/~lynn/98.html#50 Edsger Dijkstra: the blackest week of his professional life
https://www.garlic.com/~lynn/99.html#36 why is there an "@" key?
https://www.garlic.com/~lynn/99.html#123 Speaking of USB ( was Re: ASR 33 Typing Element)
https://www.garlic.com/~lynn/2000.html#90 Ux's good points.
https://www.garlic.com/~lynn/2000c.html#59 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000f.html#30 OT?
https://www.garlic.com/~lynn/2001d.html#63 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001d.html#65 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001e.html#52 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001g.html#33 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001h.html#44 Wired News :The Grid: The Next-Gen Internet?
https://www.garlic.com/~lynn/2001j.html#20 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2002.html#11 The demise of compaq
https://www.garlic.com/~lynn/2002i.html#43 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#45 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002j.html#67 Total Computing Power
https://www.garlic.com/~lynn/2002k.html#31 general networking is: DEC eNet: was Vnet : Unbelievable
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002o.html#51 E-mail from the OS-390 ????
https://www.garlic.com/~lynn/2002q.html#27 Beyond 8+3
https://www.garlic.com/~lynn/2003.html#67 3745 & NCP Withdrawl?
https://www.garlic.com/~lynn/2003b.html#29 360/370 disk drives
https://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
https://www.garlic.com/~lynn/2003c.html#28 difference between itanium and alpha
https://www.garlic.com/~lynn/2003c.html#77 COMTEN- IBM networking boxes
https://www.garlic.com/~lynn/2003c.html#79 COMTEN- IBM networking boxes
https://www.garlic.com/~lynn/2003d.html#33 Why only 24 bits on S/360?
https://www.garlic.com/~lynn/2003d.html#35 Why only 24 bits on S/360?
https://www.garlic.com/~lynn/2003d.html#37 Why only 24 bits on S/360?
https://www.garlic.com/~lynn/2003d.html#59 unix
https://www.garlic.com/~lynn/2003i.html#43 A Dark Day
https://www.garlic.com/~lynn/2003j.html#2 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2003k.html#26 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2003n.html#40 Cray to commercialize Red Storm
https://www.garlic.com/~lynn/2003p.html#2 History of Computer Network Industry
https://www.garlic.com/~lynn/2004g.html#37 network history

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Internet turns 35, still work in progress

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Internet turns 35, still work in progress
Newsgroups: alt.folklore.computers
Date: Sun, 29 Aug 2004 13:17:13 -0600
Internet turns 35, still work in progress
http://seattlepi.nwsource.com/business/aptech_story.asp?category=1700&slug=Internet%27s%20Birthday

and it was only last year that we had the stuff about internet turning 20 ... with the switch over from arpanet protocol to internetworking protocol on 1/1/83.

and i've repeatedly claimed that the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

was larger than the arpanet/internet for most of the period; possibly in large part because most of the internal net nodes had gateway-like function ... which didn't show up in the arpanet/internet until the 1/1/83 great switch-over ... and helped contribute to the internet passing the internal network in number of nodes .... sometime in '85.

misc. past references to 1/1/83 ...
https://www.garlic.com/~lynn/2001c.html#4 what makes a cpu fast
https://www.garlic.com/~lynn/2001e.html#16 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001l.html#35 Processor Modes
https://www.garlic.com/~lynn/2001m.html#48 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001m.html#54 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#5 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#6 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#87 A new forum is up! Q: what means nntp
https://www.garlic.com/~lynn/2002b.html#53 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#58 ibm vnet : Computer Naming Conventions
https://www.garlic.com/~lynn/2002g.html#19 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#30 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#35 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#40 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#71 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002h.html#5 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002h.html#11 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#48 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#79 Al Gore and the Internet
https://www.garlic.com/~lynn/2002.html#32 Buffer overflow
https://www.garlic.com/~lynn/2002j.html#64 vm marketing (cross post)
https://www.garlic.com/~lynn/2002k.html#19 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002l.html#48 10 choices that were critical to the Net's success
https://www.garlic.com/~lynn/2002o.html#17 PLX
https://www.garlic.com/~lynn/2002q.html#4 Vector display systems
https://www.garlic.com/~lynn/2002q.html#35 HASP:
https://www.garlic.com/~lynn/2003c.html#47 difference between itanium and alpha
https://www.garlic.com/~lynn/2003d.html#59 unix
https://www.garlic.com/~lynn/2003e.html#36 Use of SSL as a VPN
https://www.garlic.com/~lynn/2003f.html#0 early vnet & exploit
https://www.garlic.com/~lynn/2003g.html#18 Multiple layers of virtual address translation
https://www.garlic.com/~lynn/2003g.html#44 Rewrite TCP/IP
https://www.garlic.com/~lynn/2003g.html#51 vnet 1000th node anniversary 6/10
https://www.garlic.com/~lynn/2003h.html#7 Why did TCP become popular ?
https://www.garlic.com/~lynn/2003h.html#16 Why did TCP become popular ?
https://www.garlic.com/~lynn/2003h.html#17 Why did TCP become popular ?
https://www.garlic.com/~lynn/2003i.html#32 A Dark Day
https://www.garlic.com/~lynn/2003j.html#1 FAST - Shame On You Caltech!!!
https://www.garlic.com/~lynn/2003l.html#0 One big box vs. many little boxes
https://www.garlic.com/~lynn/2003m.html#25 Microsoft Internet Patch
https://www.garlic.com/~lynn/2003n.html#44 IEN 45 and TCP checksum offload
https://www.garlic.com/~lynn/2003o.html#68 History of Computer Network Industry
https://www.garlic.com/~lynn/2003p.html#38 Mainframe Emulation Solutions
https://www.garlic.com/~lynn/2004d.html#13 JSX 328x printing (portrait)
https://www.garlic.com/~lynn/2004e.html#12 Pre-relational, post-relational, 1968 CODASYL "Survey of Data Base Systems"
https://www.garlic.com/~lynn/2004e.html#30 The attack of the killer mainframes
https://www.garlic.com/~lynn/2004f.html#30 vm
https://www.garlic.com/~lynn/2004f.html#32 Usenet invented 30 years ago by a Swede?
https://www.garlic.com/~lynn/2004f.html#35 Questions of IP
https://www.garlic.com/~lynn/2004g.html#7 Text Adventures (which computer was first?)
https://www.garlic.com/~lynn/2004g.html#8 network history
https://www.garlic.com/~lynn/2004g.html#12 network history
https://www.garlic.com/~lynn/2004g.html#26 network history
https://www.garlic.com/~lynn/2004g.html#30 network history
https://www.garlic.com/~lynn/2004g.html#31 network history
https://www.garlic.com/~lynn/2004g.html#32 network history
https://www.garlic.com/~lynn/2004g.html#33 network history
https://www.garlic.com/~lynn/2004.html#44 OT The First Mouse

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

capacity planning: art, science or magic?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: capacity planning: art, science or magic?
Newsgroups: alt.folklore.computers
Date: Sun, 29 Aug 2004 14:00:58 -0600
Capacity Planning: Art, Science or Magic?
http://itmanagement.earthweb.com/erp/article.php/3400851
recent thread x-posted to ibm-main & a.f.c: history books on the development of capacity planning (SMF and RMF)
https://www.garlic.com/~lynn/2004j.html#53
history books on the development of capacity planning (SMF and RMF)
https://www.garlic.com/~lynn/2004j.html#55

and slightly related: ... using the mathematical technique of regression to predict ...
http://abcnews.go.com/sections/SciTech/WhosCounting/bush_victory_whoscounting_paulos_math_040829-1.html

in the early 70s ... there were sort of three techniques being used at the science center for performance work:
1) modeling/simulation
2) instruction sampling
3) multiple regression analysis


very early in cp/67 development ... a cms application was created that was originally called DUSETIMR (delta use time) ... which woke up every 5-10 minutes and went digging around the cp kernel for all sorts of counters, statistics, times, etc and logged the information. many shops ran it and/or something similar and eventually years of operational information was accumulated.
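a loose modern analogy to that style of delta logging (python sketch; it reads linux /proc/stat rather than cp kernel counters, and the interval, filename and field names are just assumptions):

# wake up periodically, snapshot cumulative counters, log the deltas
import time, json

def read_counters():
    with open("/proc/stat") as f:        # first line: "cpu user nice system idle ..."
        fields = f.readline().split()
    names = ["user", "nice", "system", "idle", "iowait", "irq", "softirq"]
    return dict(zip(names, map(int, fields[1:8])))

def delta_logger(interval_seconds=300, logfile="usage.log"):
    previous = read_counters()
    while True:
        time.sleep(interval_seconds)
        current = read_counters()
        delta = {k: current[k] - previous[k] for k in current}
        with open(logfile, "a") as log:
            log.write(json.dumps({"time": time.time(), "delta": delta}) + "\n")
        previous = current

# delta_logger()    # run as a long-lived background job, analyze the log later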

the huge amount of information accumulated across years of operation of a large number of different systems was used both in calibrating the "1" modeling work and as "3" input for the regression calculations.

The logged information was also used for developing workload and system profiles ... which were used in the definition of synthetic workloads and benchmarks (which then were also calibrated against the huge amount of observed data). The combination of the huge amount of observational data, models, synthetic workloads and benchmarks also contributed to the emerging definition of capacity planning and the performance predictor. as mentioned before, the benchmark and modeling stuff
https://www.garlic.com/~lynn/submain.html#bench

that were part of the extensive calibration of the resource manager
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock

and the predictor was made available on HONE as a marketing tool
https://www.garlic.com/~lynn/subtopic.html#hone

the instruction sampling and the regression work were used to identify "hotspots" that would be worthwhile targets for performance work ... the instruction sampling for direct instruction hotspots and the regression for functional hotspots.

the regression work was also useful in identifying long term trends (across the years & years of accumulated operation statistics).
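a minimal sketch of what that sort of regression looks like (python with numpy; the activity names, counts and cpu numbers are all made up for illustration):

# fit cpu use against per-interval functional activity counts; a big
# coefficient times a big average count flags a "functional hotspot"
import numpy as np

names = ["pagein", "spool_io", "checkpoint"]
X = np.array([[120, 40,  5],          # made-up logged intervals, one row each
              [200, 35,  8],
              [ 90, 60,  4],
              [300, 50, 12],
              [150, 45,  6]], dtype=float)
cpu_seconds = np.array([26.0, 39.0, 23.0, 58.0, 31.0])

coef, *_ = np.linalg.lstsq(X, cpu_seconds, rcond=None)   # cpu ~= X @ coef
contribution = coef * X.mean(axis=0)     # avg cpu attributed to each activity
for name, c in sorted(zip(names, contribution), key=lambda t: -t[1]):
    print(f"{name:12s} ~{c:5.1f} cpu seconds per interval")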

the instruction sampling was also used in identifying kernel features to be migrated to microcode in virgil/tully (aka 370 138/148). Endicott came to cambridge with a comment that the machines would have 6k bytes of microcode space for kernel assists. instruction sampling was used to sample instruction locations and then identify the highest-used 6k bytes of kernel code ... which was then translated approximately byte-for-byte into microcode
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
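the selection problem itself is roughly a sliding-window pass over the sampled instruction addresses ... a much-simplified sketch (python; the addresses are made up, and the real exercise was of course against 370 kernel code, not this toy):

# find the contiguous window of a given size (6k bytes) covering the most samples
import bisect

def hottest_window(sample_addresses, window_size=6 * 1024):
    addrs = sorted(sample_addresses)
    best_start, best_count = addrs[0], 0
    for i, start in enumerate(addrs):
        # index of first sample at or beyond the end of a window starting here
        end = bisect.bisect_left(addrs, start + window_size, lo=i)
        if end - i > best_count:
            best_start, best_count = start, end - i
    return best_start, best_count

samples = [0x21000, 0x21080, 0x21100, 0x22f00, 0x23000, 0x40000, 0x40004]
start, covered = hottest_window(samples)
print(hex(start), covered)    # window base and number of samples it covers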

misc. other mcode posts
https://www.garlic.com/~lynn/submain.html#mcode

a few years ago ... i had an opportunity to use regression to pick out a functional hotspot in an extremely large and heavily used application (scores of dedicated large mainframes) that had been studied and optimized to death with instruction sampling ... and regression turned up a functional hotspot accounting for nearly 30% of total utilization ... that wasn't showing up in instruction hotspot analysis.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Frontiernet insists on being my firewall

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Frontiernet insists on being my firewall...
Newsgroups: comp.security.firewalls
Date: Sun, 29 Aug 2004 14:50:18 -0600
ibuprofin@painkiller.example.tld (Moe Trin) writes:
As to "why block pings" - I can think of two reasons. First, it has been abused and there _used_ to be a simple way to kill a windoze box with a single ping (I'm relatively sure that few people are still using versions that were vulnerable). But at least one resent worm/trojan targeting windoze boxes this Spirng was using a ping as a precursor of the attack, and hosts that ignored pings were not being attacked by that _particular_ worm/trojan. Remember, the Internet is not the same place that it was in the early 1990's. When microsoft invented the telephone (or whatever) in August 1995, they introduced 87 Bazzillion people to networking, and 99.999% of those people shouldn't be trying to use something as complicated as a digital watch, nevermind a VCR (which is _still_ blinking '12:00') or a computer.

one might claim that a home computer is more complex than driving a car and therefore should require more training and licensing; the internet of the 80s was more like the wild west wagon trails before speed limits, traffic signals, traffic laws, etc.

when we were doing this thing for server payment transactions and a payment gateway with a small client/server startup
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

we could dictate a number of things like multiple a-record support (for server to gateway interactions) ... recent related post
https://www.garlic.com/~lynn/2004k.html#20

however, the people doing the client were another matter, claiming that things like multiple a-record support was way too advanced.
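multiple a-record support just means the client resolves all the addresses registered for a name and falls back through them, instead of giving up when the first one doesn't answer ... a minimal sketch (python, stdlib only; the host/port are placeholders):

# resolve every address for the host and try each in turn
import socket

def connect_any(host, port, timeout=5.0):
    last_error = None
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.settimeout(timeout)
            sock.connect(sockaddr)
            return sock                 # first address that answers wins
        except OSError as err:
            last_error = err            # unreachable ... try the next a-record
    raise last_error or OSError("no addresses returned for %s" % host)

# sock = connect_any("www.example.com", 443)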

on the other hand, at the m'soft developers conference, jan. 1996 at moscone, the tcp/ip developers from redmond that i tracked down all readily understood multiple a-record support (and said of course their browser supported it). however ... the constant repeated refrain/theme at the conference was "protect your investment" (i.e. all the vs/basic programmers weren't going to have to learn something different).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of
ASCII,Invento
Newsgroups: alt.folklore.computers
Date: Sun, 29 Aug 2004 18:39:13 -0600
Anne & Lynn Wheeler writes:
i've constantly attributed the big rise in minicomputers during (at least) the late 70s and early 80s ... being the cost of computing drop below some threshold, opening up a huge new market .... and that market started moving to large workstation servers and large PCs in the mid-80s. since that market segment appeared to have opened up in the late 70s because of price sensitivity ... it wouldn't be a surprise that it continued to be somewhat price sensitive. this was possibly the start of customers ordering systems in multiple hundreds at a time.

another indication was that there were at least a half dozen to a dozen companies making 4341 clones in that era .... able to run off-the-shelf ibm operating systems.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

August 23, 1957

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: August 23, 1957
Newsgroups: alt.sys.pdp10,alt.folklore.computers
Date: Mon, 30 Aug 2004 09:55:50 -0600
iddw@hotmail.com (Dave Hansen) writes:
Last I heard, ISO Pascal still requires all source code to reside in one file. If true, I would venture to say the Pascal is not suitable for programming anything but student homework problems. For which I concede is quite good.

If not true, I think I still would prefer Modula-2 for real-world (including, but not limited to, machine OS) programming.


long ago and far away i did a port of a 50k-60k statement vs/pascal application (multiple source modules) to other platforms. i don't think those other platforms had ever seen a 50k-60k statement pascal program. complicating it, one of the vendors had outsourced their pascal to an organization on the other side of the world.

early mainframe tcp/ip product implementation was also done in vs/pascal ... which i had to deal with when doing rfc1044 support ... recent topic drift:
https://www.garlic.com/~lynn/2004k.html#29 CDC STAR-100

random past posts mentioning vs/pascal:
https://www.garlic.com/~lynn/2000.html#15 Computer of the century
https://www.garlic.com/~lynn/2001b.html#30 perceived forced conversion from cp/m to ms-dos in late 80's
https://www.garlic.com/~lynn/2002n.html#66 Mainframe Spreadsheets - 1980's History
https://www.garlic.com/~lynn/2002q.html#19 Beyond 8+3
https://www.garlic.com/~lynn/2002q.html#27 Beyond 8+3
https://www.garlic.com/~lynn/2003b.html#33 dasd full cylinder transfer (long post warning)
https://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
https://www.garlic.com/~lynn/2003c.html#77 COMTEN- IBM networking boxes
https://www.garlic.com/~lynn/2003h.html#52 Question about Unix "heritage"
https://www.garlic.com/~lynn/2003k.html#63 SPXTAPE status from REXX
https://www.garlic.com/~lynn/2003o.html#16 When nerds were nerds
https://www.garlic.com/~lynn/2003o.html#21 TSO alternative
https://www.garlic.com/~lynn/2004c.html#25 More complex operations now a better choice?
https://www.garlic.com/~lynn/2004f.html#42 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#19 HERCULES
https://www.garlic.com/~lynn/2004g.html#27 Infiniband - practicalities for small clusters

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Big Bertha Thing blogs

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Big Bertha Thing blogs
Newsgroups: alt.folklore.computers
Date: Mon, 30 Aug 2004 15:46:15 -0600
dgriffi writes:
"Big Bertha" was originally a huge artillery piece produced by Germany during WW2. Since then, the term "Big Bertha" has been applied to other things. What readily pops into mind is a particular model rocket kit made by Estes.

earlier ...

http://www.firstworldwar.com/atoz/bigbertha.htm
Although the name was commonly applied to a whole variety of large-calibre German artillery guns the "Big Bertha" ('Dicke Berta') actually referred to a single siege gun, at that time the world's largest and most powerful.

Produced by the German firm of Krupp the Big Bertha was a 42cm howitzer, model L/14 designed in the aftermath of the Russo-Japanese War of 1904 on behalf of the German Army. It was initially used as a means of (successfully) demolishing the fortress towns of Liege and Namur in August 1914, the war's first month (and subsequently as Antwerp). It was thereafter used to similarly reduce other enemy strong-points as the need arose.


....

http://www.worldwar1.com/heritage/bbertha.htm
Big Bertha was the 420mm (16.5-in.) howitzer used by German forces advancing through Belgium in 1914. They were nicknamed for the Krupp arms works matriarch Bertha Krupp von Bohlen. Transported in pieces, moved by rail and assembled in place, they proved devastating in destroying Belgian forts. They were somewhat less effective against French Forts of sturdier design. The howitzers were also used as siege weapons on the eastern front. By 1917, less accurate due to wear on the barrels and extremely vulnerable to counter battery fire once located, they were phased out of operation. The term "Big Bertha" is sometimes applied to the Krupp manufactured artillery piece of completely different design that shelled Paris in 1918 from the phenomenal range of 75 miles. This later weapon, however, is more commonly known as the "Paris Gun".

....

also
http://www.firstworldwar.com/battles/antwerp.htm
http://www.greatwar.co.uk/westfront/ypsalient/secondypres/prelude/ypbombbertha.htm
http://www.worldwar1.com/arm002.htm

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Vintage computers are better than modern crap !

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Vintage computers are better than modern crap !
Newsgroups: alt.folklore.computers
Date: Tue, 31 Aug 2004 08:06:37 -0600
K Williams writes:
Sure, moons ago I had a PL/I-like construct (IF/THEN/ELSE/ENDIF, WHILE, UNTIL, SELECT/WHEN,...) macro library for M$ MASM. When I took my first C course (20ish years ago) I thought, "wow another macro assembler". Apparently I wasn't the only one to see it. ;-)

in the early 70s, i wrote a pli program that took an assembler listing as input and did code flow construction and register use analysis, and attempted to spit out a high-level representation using an if/then/else/do-while/etc pseudo language ... somewhat trying to be goto-less ... slightly related mention of the harlan mills presentation i attended in the early 70s (recent comp.arch thread):
https://www.garlic.com/~lynn/2004k.html#24 Timeless Classics of Software Engineering
https://www.garlic.com/~lynn/2004k.html#25 Timeless Classics of Software Engineering
https://www.garlic.com/~lynn/2004k.html#26 Timeless Classics of Software Engineering

one of the problems was that some relatively straight-forward branch constructs from modest assembler source would nest 10 or more levels deep. i finally put in a limit to not nest more than 5-6 levels deep.
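a crude illustration of that kind of depth cap (python; a real pass worked over the flow graph of an assembler listing, this just caps recursion over a toy nested structure and falls back to a flat goto-style construct at the limit):

MAX_DEPTH = 5

def emit(node, depth=0, indent=""):
    # node is either ("code", text) or ("if", condition, then_branch, else_branch)
    if node[0] == "code":
        print(indent + node[1])
        return
    _, cond, then_branch, else_branch = node
    if depth >= MAX_DEPTH:
        # past the cap: stop nesting, emit a flat goto-style construct instead
        print(indent + f"IF NOT ({cond}) GOTO ELSE{depth}")
        emit(then_branch, depth + 1, indent)
        print(indent + f"GOTO END{depth}")
        print(indent + f"ELSE{depth}:")
        emit(else_branch, depth + 1, indent)
        print(indent + f"END{depth}:")
        return
    print(indent + f"IF ({cond}) THEN")
    emit(then_branch, depth + 1, indent + "  ")
    print(indent + "ELSE")
    emit(else_branch, depth + 1, indent + "  ")
    print(indent + "ENDIF")

# a toy chain nested 8 deep, so the cap visibly kicks in at level 5
node = ("code", "innermost")
for i in range(8, 0, -1):
    node = ("if", f"cond{i}", node, ("code", f"fallthrough{i}"))
emit(node)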

i have a vague recollection that the slac mods to the h-assembler (late 70s, early 80s) included a macro package for if/then/else code (as opposed to if/then/else constructs for conditional code generation) ... but quick search engine use only turns up references to the actual assembler enhancements ... not a related macro package.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Wars against bad things

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Wars against bad things
Newsgroups: alt.folklore.computers
Date: Tue, 31 Aug 2004 08:54:30 -0600
Peter Flass writes:
But of course these 40K virtual penguins weren't actually *doing* anything.

but they got thru boot .... part of the issue was showing that you could aggregate a lot of relatively low-usage "machines" into a single hardware complex and share resources ... while still keeping separation and partitioning. it has since somewhat been used for web-hosting, where you can use the aggregation to help with capacity planning.

the customer installation issue was then the number & mix of machines with regard to the cpu cycles they needed.

there was some talk at a recent share meeting that ibm started to really take notice of the linux phenomenon after somebody did a report on the number of customer mainframes running it (which appeared to be all new mainframe business ... apparently displacing otherwise non-ibm, non-mainframe business) ... aka it was a customer-driven reaction.

at the time, there was some reference to the test being done with vm running in a "test lpar" ... i.e. the vm system doing the 40k linux test was running in a subset of the machine.

some time ago ... a subset of the vm function was migrated into the hardware and referred to as logical partitions (or LPARs). it is similar to the function provided by a virtual machine operating system ... except with some optimizations ... like partitioning real memory so that LPAR memory is dedicated ... with no support for moving pages back & forth to disk. lpar configuration is done in the "microcode" or service processor ... specifying the amount of real memory, which i/o devices, and how many &/or what percentage of processors.

on the real "bare" machine &/or in an partition of a LPAR configurated machine it is then possible to boot a vm (virtual machine) operating system ... which then provides more granular resource allocation (than is nominal used in LPAR operation).

lots of LPAR references:
https://www.garlic.com/~lynn/98.html#45 Why can't more CPUs virtualize themselves?
https://www.garlic.com/~lynn/98.html#57 Reliability and SMPs
https://www.garlic.com/~lynn/99.html#191 Merced Processor Support at it again
https://www.garlic.com/~lynn/2000.html#8 Computer of the century
https://www.garlic.com/~lynn/2000.html#63 Mainframe operating systems
https://www.garlic.com/~lynn/2000.html#86 Ux's good points.
https://www.garlic.com/~lynn/2000b.html#50 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#51 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#52 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#62 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000c.html#8 IBM Linux
https://www.garlic.com/~lynn/2000c.html#50 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#68 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#76 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#3 virtualizable 360, was TSS ancient history
https://www.garlic.com/~lynn/2001.html#34 Competitors to SABRE?
https://www.garlic.com/~lynn/2001b.html#72 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001d.html#67 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001e.html#5 SIMTICS
https://www.garlic.com/~lynn/2001e.html#61 Estimate JCL overhead
https://www.garlic.com/~lynn/2001f.html#17 Accounting systems ... still in use? (Do we still share?)
https://www.garlic.com/~lynn/2001f.html#23 MERT Operating System & Microkernels
https://www.garlic.com/~lynn/2001h.html#2 Alpha: an invitation to communicate
https://www.garlic.com/~lynn/2001h.html#33 D
https://www.garlic.com/~lynn/2001l.html#24 mainframe question
https://www.garlic.com/~lynn/2001m.html#38 CMS under MVS
https://www.garlic.com/~lynn/2001n.html#26 Open Architectures ?
https://www.garlic.com/~lynn/2001n.html#31 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2001n.html#32 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002c.html#53 VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
https://www.garlic.com/~lynn/2002d.html#31 2 questions: diag 68 and calling convention
https://www.garlic.com/~lynn/2002e.html#25 Crazy idea: has it been done?
https://www.garlic.com/~lynn/2002e.html#75 Computers in Science Fiction
https://www.garlic.com/~lynn/2002f.html#6 Blade architectures
https://www.garlic.com/~lynn/2002f.html#57 IBM competes with Sun w/new Chips
https://www.garlic.com/~lynn/2002n.html#6 Tweaking old computers?
https://www.garlic.com/~lynn/2002n.html#27 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#28 why does wait state exist?
https://www.garlic.com/~lynn/2002o.html#0 Home mainframes
https://www.garlic.com/~lynn/2002o.html#15 Home mainframes
https://www.garlic.com/~lynn/2002o.html#16 Home mainframes
https://www.garlic.com/~lynn/2002o.html#18 Everything you wanted to know about z900 from IBM
https://www.garlic.com/~lynn/2002p.html#4 Running z/VM 4.3 in LPAR & guest v-r or v=f
https://www.garlic.com/~lynn/2002p.html#40 Linux paging
https://www.garlic.com/~lynn/2002p.html#44 Linux paging
https://www.garlic.com/~lynn/2002p.html#54 Newbie: Two quesions about mainframes
https://www.garlic.com/~lynn/2002p.html#55 Running z/VM 4.3 in LPAR & guest v-r or v=f
https://www.garlic.com/~lynn/2002q.html#26 LISTSERV Discussion List For USS Questions?
https://www.garlic.com/~lynn/2003.html#9 Mainframe System Programmer/Administrator market demand?
https://www.garlic.com/~lynn/2003.html#14 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#15 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#56 Wild hardware idea
https://www.garlic.com/~lynn/2003c.html#41 How much overhead is "running another MVS LPAR" ?
https://www.garlic.com/~lynn/2003f.html#56 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003k.html#9 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003l.html#12 Why are there few viruses for UNIX/Linux systems?
https://www.garlic.com/~lynn/2003l.html#41 Secure OS Thoughts
https://www.garlic.com/~lynn/2003m.html#32 SR 15,15 was: IEFBR14 Problems
https://www.garlic.com/~lynn/2003m.html#37 S/360 undocumented instructions?
https://www.garlic.com/~lynn/2003n.html#13 CPUs with microcode ?
https://www.garlic.com/~lynn/2003n.html#29 Architect Mainframe system - books/guidenance
https://www.garlic.com/~lynn/2003o.html#52 Virtual Machine Concept
https://www.garlic.com/~lynn/2004b.html#58 Oldest running code
https://www.garlic.com/~lynn/2004c.html#4 OS Partitioning and security
https://www.garlic.com/~lynn/2004c.html#5 PSW Sampling
https://www.garlic.com/~lynn/2004d.html#6 Memory Affinity
https://www.garlic.com/~lynn/2004e.html#26 The attack of the killer mainframes
https://www.garlic.com/~lynn/2004e.html#28 The attack of the killer mainframes
https://www.garlic.com/~lynn/2004f.html#47 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#15 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004j.html#45 A quote from Crypto-Gram

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Adventure

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Adventure
Newsgroups: alt.os.multics
Date: Tue, 31 Aug 2004 10:00:40 -0600
David Spencer writes:
Over in Bath you may have used one that came from Cardiff - IIRC Stuart France ported a decadent 500-odd-pointer that had been doing the rounds on SERCnet, and, in a spirit of aesthetic purity and disdain for all things Stuartly, I ported the *proper* 350-point one which I'd got at UEA in the late 70s. However neither of these could be considered the canonical Multics port.

tymshare was sort of down the street from sail ... and they had a fortran version running on their pdp ... and then ported the fortran to cms and it was available on their main vm time sharing service. i was going to drop by and pick up a copy as part of setting up a procedure for them to ship me a monthly tape of everything on the tymshare vmshare computer conferencing system. however (before i got around to stopping by) on 4/11/79 ... somebody at peterlee (uk) got a copy from tymshare via another route and sent it to me via the internal network.

this was 300 points ... with an extra bonus point.

over the next couple of months several people made enhancements ... including a PL/I port with >500 points.

unfortunately i no longer have copies of any of them.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

August 23, 1957

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: August 23, 1957
Newsgroups: alt.folklore.computers
Date: Tue, 31 Aug 2004 15:11:48 -0600
Charles Shannon Hendrix writes:
Did any CPU architectures ever try to solve atomicity problems by using a transaction system?

similar ... but different ... see this write up
http://www-106.ibm.com/developerworks/eserver/articles/power4_mem.html
on POWER4 and shared memory synchronization

from above:
The PowerPC processor architecture provides several instructions specifically to control the order in which stores perform their changes to memory and, thus, to control the order in which another processor observes the stores; to control the order in which instructions are executed; and for accessing shared storage. These instructions are:

sync     Synchronise
lwsync   Lightweight Sync (a new instruction in POWER4)
eieio    Enforce In-Order Execution of I/O
lwarx    Load Word And Reserve
ldarx    Load Doubleword And Reserve
stwcx    Store Word Conditional
stdcx    Store Doubleword Conditional
isync    Instruction Synchronise


....

the write-up then has more detailed descriptions of the instructions
http://www-106.ibm.com/developerworks/eserver/articles/power4_mem.html#4

long ago and far away, at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

charlie had invented compare and swap (the mnemonic CAS was used because those are charlie's initials). got it into 370 architecture ... but one of the conditions was to come up with a non-smp justification for the instruction (since the majority of the 370s at the time were uniprocessors). the result was the original programming notes showing how a multi-threaded, non-privileged application, running enabled for interrupts (on either a uniprocessor or multiprocessor machine), could coordinate its operation.
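
a minimal present-day sketch of the kind of multi-threaded, non-privileged serialization those programming notes describe (updating a shared counter without disabling interrupts or calling the supervisor). this is not taken from the original programming notes; the C11 atomic calls are just a stand-in for the compare-and-swap loop (on s/390 they compile down to CS/CDS, on powerpc to lwarx/stwcx):

/* update a shared counter with a compare-and-swap retry loop; if another
 * thread changed the counter between the load and the compare-exchange,
 * the exchange fails, "old" is refreshed, and the loop retries ...
 * no disabling of interrupts required */
#include <stdatomic.h>
#include <stdio.h>

static _Atomic unsigned long shared_counter = 0;

void add_to_counter(unsigned long amount)
{
    unsigned long old = atomic_load(&shared_counter);
    while (!atomic_compare_exchange_weak(&shared_counter, &old, old + amount))
        ;   /* "old" now holds the current value; try again */
}

int main(void)
{
    add_to_counter(5);
    printf("%lu\n", (unsigned long)atomic_load(&shared_counter));
    return 0;
}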

the original romp/rios/power wasn't targeted for a unix-like environment .... it was going to be uniprocessor only, with protection and privilege checking enforced by compiler and loader. since there were no privilege levels and no smp, all multi-threaded applications could disable for interrupts when they needed to serialize.

this assumption was invalidated when the line was re-targeted to a unix environment that required a privilege/non-privilege domain .... however the original power/rios was still targeted as a uniprocessor-only environment. w/o a specific serialization instruction, but needing serialization for high-performance, multi-threaded applications, a compare and swap macro was defined ... that actually generated an interrupt into the supervisor call handler. There were some tightly coded instructions in the interrupt handler that simulated the compare and swap instruction operation (while running disabled for interrupts) ... and then immediately returned to the application.

somerset then started the power/pc ... which was almost a totally different chip line (from rios/power) that was going to support shared memory multiprocessing. this reference has an alpha/power4 comparison on a number of things
http://www.csm.ornl.gov/~dunigan/sp4/
including shared-memory thread fork/join
http://www.csm.ornl.gov/~dunigan/sp4/#shared

this is the evolution of the original programming notes for compare & swap for multi-threading ... in theory, both multiprocessor and uniprocessor operation (in esa/390); multiprogramming (aka multithreaded) and multiprocessor examples:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/A.6?SHELF=EZ2HW125&DT=19970613131822

esa/390 compare and swap
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/7.5.22?SHELF=EZ2HW125&DT=19970613131822
esa/390 compare double and swap
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/7.5.23?SHELF=EZ2HW125&DT=19970613131822

for 64-bit z/architecture, compare&swap has been augmented with perform locked operation ... perform locked operation definition
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/7.5.107?SHELF=DZ9ZBK03&DT=20040504121320

three compare and swap instructions for 64-bit z/architecture
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/7.5.28?SHELF=DZ9ZBK03&DT=20040504121320
three compare double and swap instructions for 64-bit z/architecture
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/7.5.29?SHELF=DZ9ZBK03&DT=20040504121320

the multiprogramming and multiprocessing examples from the z/architecture principles of operation
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/A.6?SHELF=DZ9ZBK03&DT=20040504121320

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Vintage computers are better than modern crap !

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Vintage computers are better than modern crap !
Newsgroups: alt.folklore.computers
Date: Tue, 31 Aug 2004 20:12:52 -0600
"Jack Peacock" writes:
Not just "these days". I had a CDC 6000 Compass assembler deck that expanded from <2000 cards to 9000 lines after macros. The extensive macro use added at least 2-3 seconds to the compile time. It was a mark of bad programming if a CDC made a user wait a perceptible amount of time.

At the university I attended (CDC 6400) it was common practice to store all production COBOL accounting programs in source code, compile and link every night for the batch, run the executable, then delete the binaries at the end of the batch. Compile time was so quick it did not add appreciable delay, but the cost savings in disk space (more precious than gold on a Cyber system) was significant.

It was quite a shock when I did a little bit of COBOL work on a then new 370 (been a while, 135?, in 1972). The printer did not warm up immediately after loading the card deck. I thought the machine was broke. Jack Peacock


my first student job was to effectively port the 1401 MPIO program, a unit record<->tape front-end for the 709, to the 360/30 (rewritten in 360 assembler). I got to invent/design everything from scratch: interrupt handlers, device drivers, storage allocation, task switch, etc. this gave the university practice in using the 360/30 as a 360 rather than in 1401 hardware emulation mode (in preparation for replacing the 709 with a 360/67).

it eventually grew to about a box of cards.

i then modified it so there were effectively two versions with conditional assembly ... one was the complete stand-alone monitor that took over and ran the whole machine .... the other used os/360 system facilities. the os/360 version required the use of five DCB macros (which just defined storage fields for i/o devices). You could watch the lights on the 360/30 when it hit a DCB macro ... each taking 5-6 minutes elapsed time to "assemble".
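
not the original MPIO assembler, just a sketch of the same one source/two builds pattern in C preprocessor terms (the function name and messages are invented for illustration; the preprocessor flag stands in for the assembler conditional-assembly switch):

#include <stdio.h>

#ifdef STANDALONE
/* stand-alone build: the program supplies its own device handling */
static void write_record(const char *rec)
{
    printf("[direct device I/O] %s\n", rec);
}
#else
/* hosted build: hand the record to the host OS i/o services
   (analogous to going through the os/360 DCB/access-method path) */
static void write_record(const char *rec)
{
    printf("[OS access-method I/O] %s\n", rec);
}
#endif

int main(void)
{
    write_record("unit record to tape");   /* same mainline either way */
    return 0;
}

compiling with -DSTANDALONE selects one version and compiling without it selects the other ... roughly what the conditional-assembly switch did for the two MPIO versions.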

some 370 announce/fcs dates
https://www.garlic.com/~lynn/2001.html#63 Are the L1 and L2 caches flushed on a page fault ?
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone

from the above references: IBM S/370-135 ... announced 71-03, FCS 72-05 (14 months), INTERMED. S/370 CPU

370/135 was announced march 71 ... and first customer ship (aka FCS ... first box shipped to a customer) was May 72.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Vintage computers are better than modern crap !

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Vintage computers are better than modern crap !
Newsgroups: alt.folklore.computers
Date: Wed, 01 Sep 2004 08:49:08 -0600
Brian Inglis writes:
I never found macro assemblers for programs slow; only when doing things that really abuse the facilities, like OS sysgens/builds.

the folklore on the original os/360 assembler was that the implementor was told that he only had 256 bytes to implement lookup (code plus data) ... so all tables were on disk ... nothing cached in memory. a dcb macro had thousands of lines ... which generated maybe 100 lines of storage definitions ... depending on the parameters given to the macro.

sometime in the late 60s the assembler was improved.

os/360 sysgens/builds frequently took a shift or two ... a "stage1" sysgen was maybe a 100-200 card assembler program; all macros with parameters. the macros didn't expand to code ... they expanded to punch statements which generated the "stage2" os/360 jobstream that was mostly utility commands (the stage2 punched-card deck was frequently around 2000-3000 cards). "stage1" might run 10-30 minutes. "stage2" took the majority of the time ... but there was little or no assembler involved in "stage2" ... mostly iehmove & iebcopy utility operations.

the state-of-the-art was to take dedicated machine time for two shifts ... stage2, if run w/o problems, would typically finish in under eight hours ... but there were sporadic glitches that showed up that frequently sent things back to repeat the process from the very start (the stage1/stage2 sequence was somewhat viewed as a magical black-box operation). the process was frequently baby-sat (at least before june 23rd, 1969) by one or two ibm system engineers along with support staff from the customer.

when i had my first student programming job ... recent post
https://www.garlic.com/~lynn/2004k.html#40 Vintage computers are better than modern crap!

I frequently got the 360/30 from 8am sat. until 8am monday (pulled non-stop 48hr shift). one weekend, i got pre-empted for a couple shifts by a "sysgen". I stuck around writing code ... and watched them restart a couple times from the very beginning. This struck me as odd ... so when they took a break ... i did a print-out of the stage2 card deck ... to see what all the fuss was about.

Eventually i developed a process with two objectives: 1) i could run the stage2 jobstream in a standard working system and 2) i could re-arrange all the statements in the stage2 process ... so that the order in which files were copied to the new system packs would optimize operational disk arm seeks. I got to give some share presentations on this mythical ability to run stage2 in a standard job stream and on the optimization of the resulting system.
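
a minimal sketch of the kind of stage2 re-ordering involved (the dataset names are ordinary os/360 names, but the reference counts and the sort-by-use rule are invented for illustration): copy the most heavily used datasets first, so they end up allocated next to each other on the new system pack and the average arm travel drops:

#include <stdio.h>
#include <stdlib.h>

struct step { const char *dataset; long refs; };   /* one stage2 copy step */

static int by_refs_desc(const void *a, const void *b)
{
    const struct step *x = a, *y = b;
    return (y->refs > x->refs) - (y->refs < x->refs);
}

int main(void)
{
    struct step steps[] = {
        { "SYS1.SAMPLIB",   50 },
        { "SYS1.SVCLIB",  9000 },
        { "SYS1.MACLIB",   400 },
        { "SYS1.LINKLIB", 7000 },
    };
    size_t n = sizeof steps / sizeof steps[0];

    /* re-order the copy steps by expected use before punching stage2 */
    qsort(steps, n, sizeof steps[0], by_refs_desc);

    for (size_t i = 0; i < n; i++)
        printf("copy %s\n", steps[i].dataset);
    return 0;
}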

random past posts:
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
https://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14
https://www.garlic.com/~lynn/97.html#22 Pre S/360 IBM Operating Systems?
https://www.garlic.com/~lynn/97.html#28 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/98.html#21 Reviving the OS/360 thread (Questions about OS/360)
https://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/99.html#135 sysprog shortage - what questions would you ask?
https://www.garlic.com/~lynn/2000c.html#10 IBM 1460
https://www.garlic.com/~lynn/2000d.html#48 Navy orders supercomputer
https://www.garlic.com/~lynn/2000d.html#50 Navy orders supercomputer
https://www.garlic.com/~lynn/2001.html#26 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001d.html#48 VTOC position
https://www.garlic.com/~lynn/2001f.html#26 Price of core memory
https://www.garlic.com/~lynn/2001h.html#12 checking some myths.
https://www.garlic.com/~lynn/2001k.html#37 Is anybody out there still writting BAL 370.
https://www.garlic.com/~lynn/2001k.html#54 DEC midnight requisition system
https://www.garlic.com/~lynn/2001l.html#39 is this correct ? OS/360 became MVS and MVS >> OS/390
https://www.garlic.com/~lynn/2001l.html#42 is this correct ? OS/360 became MVS and MVS >> OS/390
https://www.garlic.com/~lynn/2002b.html#24 Infiniband's impact was Re: Intel's 64-bit strategy
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#51 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002d.html#30 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002g.html#1 WATFOR's Silver Anniversary
https://www.garlic.com/~lynn/2002l.html#29 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002m.html#3 The problem with installable operating systems
https://www.garlic.com/~lynn/2002n.html#29 why does wait state exist?
https://www.garlic.com/~lynn/2002q.html#32 Collating on the S/360-2540 card reader?
https://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
https://www.garlic.com/~lynn/2003c.html#51 HASP assembly: What the heck is an MVT ABEND 422?
https://www.garlic.com/~lynn/2003c.html#53 HASP assembly: What the heck is an MVT ABEND 422?
https://www.garlic.com/~lynn/2003d.html#72 cp/67 35th anniversary
https://www.garlic.com/~lynn/2004.html#35 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004.html#48 AMD/Linux vs Intel/Microsoft
https://www.garlic.com/~lynn/2004b.html#53 origin of the UNIX dd command
https://www.garlic.com/~lynn/2004c.html#59 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004f.html#6 Infiniband - practicalities for small clusters

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

65nm news from Intel

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 65nm news from Intel
Newsgroups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel
Date: Wed, 01 Sep 2004 08:54:09 -0600
gaf1234567890@hotmail.com (G) writes:
Every version of Windows based on NT (NT, 2000, XP, Server 2k3, Longhorn, etc) has gotten progressively better at utilizing multiple CPU's. MS keeps tweaking things to a finer level of granularity. So minimally, a single threaded application could still hog 1 CPU, but at least the OS underneath will do it's best to make use of the other CPU.

long ago and far away i was told that the people in beaverton had done quite a bit of the NT smp work ... since all they had was smp (while redmond concentrated on their primary customer base ... which was mostly all non-smp).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Vintage computers are better than modern crap !

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Vintage computers are better than modern crap !
Newsgroups: alt.folklore.computers
Date: Wed, 01 Sep 2004 09:32:52 -0600
Alan Balmer writes:
But the plant manager will thank you for shutting down and going to failsafe instead of just shutting down temperature control on one reactor.

story about fast crash and recovery ... see the ref about 27 crashes in one day (about half way down):
http://www.multicians.org/thvv/360-67.html

in the late 70s there were jokes about some efforts in operating system failure isolation and recovery ... the joke was that with the recovery code ... by the time the system actually got around to failing and taking a dump ... it was impossible to diagnose the original problem (recovery taken as a euphemism for covering up the problem repeatedly before getting around to determining that the system really had to fail).

that was somewhat mitigated by lpars in the late '80s ... since systems were really isolated from each other.

of course virtual machine systems have tended to provide failure, integrity, and security isolation for a long time.

we got into some more of failure fencing and isolation when we were doing ha/cmp:
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Wars against bad things

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Wars against bad things
Newsgroups: alt.folklore.computers
Date: Wed, 01 Sep 2004 12:44:04 -0600
Brian Inglis writes:
VM creation and Linux startup is CPU intensive; the fact they got booted says that there's enough capacity to do a decent amount of work, like run 4,000 active web servers completely isolated in their own Linux instances on one machine. Wonder how Linuxy types will feel about logging on to a VM to log on their system console?

some number have the independent virtual machines with their own range of ip-addresses ... so users x-term or ssh directly into the target ... with the surrounding vm envelope appearing more like a network intermediate node

recent x-post from comp.arch
---------------------------------------------------------------------------
SIGMICRO TO PROVIDE FREE ACCESS TO AN S/390 MAINFRAME AND AN IA-64 FOR RESEARCHERS

An important emerging industry trend is that IT resources will increasingly be provided over the internet, possibly replacing in-house IT shops in the end. Along the same trend on the side of academic research, ACM SIGMICRO will be providing to microarchitecture researchers free access to machines with some uniquely interesting architectures. You can do any research you want with these machines, or you can choose to just get a hands-on experience to learn about the architectures, to plan your research. Thanks to generous donations from IBM and HP, an IBM S/390 (zSeries) mainframe (running z/VM which lets you have your own virtual multiprocessor machine(s) with Linux or other guest OS), as well as an HP Itanium system (running Linux), will be available in the second half of this year to SIGMICRO researchers worldwide, over the internet. We encourage you to ask for accounts on these machines on-line on our Web site (described below). Because of the limited number of available accounts, the requests will be subject to a review. There is a benefit to entering your preliminary requests early.

The long-term goal of SIGMICRO is to freely provide hardware and software research resources to deserving microarchitecture researchers worldwide, with the help of the industry.

---------------------------------------------------------------------------
NEW SIGMICRO WEB SITE

SIGMICRO is happy to announce its new Web site on microarchitecture research and education, which was developed through a year-long effort. The goal of our Web site is to advance the state of the art in microprocessor design by:
- Disseminating information from academic and industrial sources,
- Providing access to computing resources for microarchitecture research,
- Encouraging interaction between researchers, educators, and students.

Some of the methods for achieving this include:
- A dynamic database that allows members to submit content to the site via Web forms,
- An annotated Web bibliography,
- A portal system to computing resources the site actively recruits from leading-edge computer companies, and
- A set of bulletin boards targeted to specific communities.

Notable among the features in the new Web site is the Purdue University NETCARE system, which gives researchers access to a comprehensive set of architecture research and simulation tools, as well as an opportunity to run these tools over the Web. The SIGMICRO Web site also includes a database of graduating students. Graduate students in microarchitecture- related fields, including advanced compiler areas, are encouraged to add an entry to this database about their own theses, to make the community of their peers (and potential employers) aware of their work.

The new SIGMICRO Web site has been launched at the following URL. We cordially invite you to take a look:

http://www.acm.org/sigmicro

=======================
ABOUT ACM SIGMICRO (http://www.acm.org/sigmicro)

The ACM Special Interest Group on Microarchitectural Research, SIGMICRO, specializes in computer microarchitecture, and especially in features permitting instruction-level parallelism and their related implications on compiler design. For the past 33 years, the annual MICRO conference (co-sponsored by SIGMICRO) has been a key forum for presenting major breakthroughs in computing architecture, and has established itself as the premier conference on instruction level parallelism. The SIGMICRO newsletter is published once a year as the conference proceedings and is included as a benefit of membership in the SIG.

SIGMICRO's long term goals toward furthering the state of the art in the field include:

- Continuing quality improvements to the MICRO conference series
- Becoming a Web resource, for teaching and research fields related to microarchitecture
- Establishing new student awards, to foster interest in leading edge microarchitecture research
- Providing computing research resources to microarchitecture researchers worldwide, with the help of the industry.

ACM SIGMICRO is the professional organization to belong to if you are a microarchitect, microprogrammer, advanced compiler designer, or a researcher/developer of superscalar, pipelined, or fine-grain parallel computer architectures, or if you are interested in learning more about such microarchitecture topics.


Officers
--------
Chair: Kemal Ebcioglu, IBM
Vice-Chair: Steve Beaty, Metro State College of Denver
Secretary/Treasurer: Ed Gehringer, North Carolina State U.

SIGMICRO Web Site Steering Committee
------------------------------------
Renato Figueiredo, Purdue U.
Jose Fortes, Purdue U.
Ed Gehringer, North Carolina State U.
Augustus Uht, U. of Rhode Island

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

August 23, 1957

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: August 23, 1957
Newsgroups: alt.sys.pdp10,alt.folklore.computers
Date: Wed, 01 Sep 2004 13:00:16 -0600
Brian Inglis writes:
Economies of scale: IBM was the second biggest publisher in the world, which was why they got heavily into DCF, Script, GML, SGML, et. al. to automate the ink flow. They were very slow getting onto CDs and providing bits instead of paper, but that may have been because they didn't have CD writers/ readers for mainframes, and had to get them specced, built to spec, interfaced, and supported.

early on, other than the cp/67 and cms documentation ... the really mainstream ibm manuals were the architecture redbook and the principles of operation. the principles of operation was a subset of the sections in the architecture redbook, and an option when invoking cms script decided whether the full redbook got printed or just the principles of operation.

this was before "G", "M", and "L" invented GML and it was added to the cms script command. DCF later was a specific set of script GML libraries.

almost all totally from the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

i noted recently that the ML stuff had sort of come back home .... the original science center, where all the work originated, was in the bldg in the SE corner of tech sq, facing the railroad tracks
http://www.mapquest.com/maps/map.adp?ovi=1&zoom=9&mapdata=dWsdtzB2EBRz%2fJL34Dzoi3T3AGJalTD%2fp7u5auhsVX5Ggb%2bzS7U9dAnpzyA0%2fJhKdfcLzkIE0j8pW6pYyHLe1%2fZeXuhq1j0%2bv3aHkifl85u9BAog1XxOlDCyvKHJPvS0s54XuEAPlxIJPlBQGKls1TZttZOIJwBqoEAjy1kasEdOD1bE06T48OvqKYwqLUpcDazyfyDpEAqThUJUzo1vmFDIx8GlM4YS1pvqOL9LsiIHUHmQYluDmPWMeI0589A%2b897Iy6CSKnqn1pnPLQ2RyyIncm5G2GLiSyZM0DX8CCQ5OU3cBvjTG7OxIV7Q7FAvwSmb9bhAu8V8dmwTe%2fm4DyQ80O02DvI6e8jenDqYTm%2b2BTfQxDTYGGtp8xZAwaK7WKnr8tZY1ljojroMl4PTfoo2GFA0x2FrQc8cpJeUfIaso3Vfl99eGLeEHNe%2fwkmJH4TRHCI45rHw5H454pcWF%2bE%2bU%2bMhZBeI

and now w3c is at 32 vassar
http://www.mapquest.com/maps/map.adp?ovi=1&zoom=9&mapdata=dWsdtzB2EBTiLIjju0xwraK4g14KjoCgewytaYSTIA6xsh9n4dIIQE3tk4x0PSSRDy3h23YrC%2f1wcZmyNOIke3A4SidHgACk2SAyUMTYe%2bWcWcGfrpeWbCaEMapWJcUXwf1YmGoWnULxKVWLeLpJbbw9R0UqphigmFEJM20scQboXIkrlUpma%2f4J8qYQR5%2fI8Nz6bhIFCrAx8cPbrp6i2BNsBbnxz7lBZeGYtfyWkm9rQsXhGA7KS1p9DDWr8YLNz4w0OZ%2bi6SW%2beS7TjJy2vQyGuwyW5UIpFsUddqlQ1WmQkOsbLM%2fNyL2Pl%2by8Wj1C5fPlqxYqpdZyMqaYHU2fqRLzxBSxLf55GL1yDECMDu5zraLMWcGN3BAKdiFUNn8hogjm3e%2fpVL0nQLywAGGIeJtf8TBupbfQgfM8R4rq6qR3HVY8QZAIKQ%3d%3d

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Xah Lee's Unixism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Newsgroups: comp.lang.perl.misc,alt.folklore.computers,comp.lang.lisp,comp.lang.python,comp.unix.programmer
Date: Thu, 02 Sep 2004 07:12:21 -0600
rpw3@rpw3.org (Rob Warnock) writes:
Those were also PDP-10 Monitor commands, and probably PDP-6 Monitor before that.

there may have been a little bit of commonality tracing back to the ctss days ... however, see the recent posting about him using cp/cms at npg when he was writing pl/m
https://www.garlic.com/~lynn/2004h.html#40 Which Monitor Would You Pick????

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Xah Lee's Unixism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Newsgroups: comp.lang.perl.misc,alt.folklore.computers,comp.lang.lisp,comp.lang.python,comp.unix.programmer
Date: Thu, 02 Sep 2004 07:14:21 -0600
Brian Inglis writes:
MS has been borrowing code from Unix to create a real OS: TCP/IP; NTFS<-ffs; memory mapped files<-mmap. Shame they keep trying to add their own ideas in too: that must be what causes the crashes!

and unix goes back to multics ... which was on 5th floor, 545 tech sq. while cp/cms was at the science center on 4th floor, 545 tech sq ...
https://www.garlic.com/~lynn/subtopic.html#545tech
and they both go back to ctss

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Xah Lee's Unixism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Newsgroups: comp.lang.perl.misc,alt.folklore.computers,comp.lang.lisp,comp.lang.python,comp.unix.programmer
Date: Thu, 02 Sep 2004 09:43:59 -0600
joe writes:

http://www.zippythepinhead.com/

If you're running emacs, you can get a quote from him with M-X yow

Not exactly a typical editor function, agreed. I was feeling a little whimsical at the time.


i once did a random email/usenet signature with zippy/yow ... but i added two other files to it ... and then i had to fix a feature in yow. yow uses a 16-bit random number to index into a yow file ... which was ok as long as your sayings file was less than 64kbytes. i had to modify yow to handle files larger than 64kbytes ... the "sayings" file used for 6670 separator pages was 167k bytes and the jargon file was 413k bytes ... while a current zippy yow file is 52,800 bytes.
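
a minimal sketch (not the actual yow code; the file name, the '\0' entry delimiter and the helper are assumptions) of picking a random entry using a full-width file offset instead of a 16-bit index, so sayings files well over 64kbytes work:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* pick a random byte offset in the file, skip to the end of the entry we
   landed in, and return the next entry (wrapping to the start at EOF) */
char *random_saying(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;

    fseek(f, 0, SEEK_END);
    long size = ftell(f);                      /* long offset, not 16 bits */
    long offset = (long)(((double)rand() / ((double)RAND_MAX + 1)) * size);
    fseek(f, offset, SEEK_SET);

    int c;
    while ((c = fgetc(f)) != EOF && c != '\0')
        ;                                      /* finish the current entry */
    if (c == EOF)
        fseek(f, 0, SEEK_SET);

    char *buf = malloc(4096);
    size_t i = 0;
    while (buf && i < 4095 && (c = fgetc(f)) != EOF && c != '\0')
        buf[i++] = (char)c;
    if (buf) buf[i] = '\0';

    fclose(f);
    return buf;
}

int main(void)
{
    srand((unsigned)time(NULL));
    char *s = random_saying("yow.lines");      /* hypothetical file name */
    if (s) { puts(s); free(s); }
    return 0;
}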

recent reference
https://www.garlic.com/~lynn/2004f.html#48 Random signatures

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Xah Lee's Unixism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Newsgroups: comp.lang.perl.misc,alt.folklore.computers,comp.lang.lisp,comp.lang.python,comp.unix.programmer
Date: Thu, 02 Sep 2004 09:55:45 -0600
jmfbahciv writes:
And everybody seems to think that those people never talked to each other. Even boasting about whose is bigger, faster, and longer would transmit new ideas among the bit setters.

and some of them worked jointly/together on ctss ... before some of them went to multics on the 5th floor and others went to the science center on the 4th floor. also the north half of the 1st floor of 545 tech sq had a lunch room on the east side and a lounge on the west side ... and if nothing else ... people ran into each other there.

then there is melinda's vm history which has a lot of the ctss, multics, cp/cms early lore .... current copy at:
http://www.leeandmelindavarian.com/Melinda#VMHist

a much earlier version was posted to vmshare computer conference in eight parts and can be found at the vmshare archive site:
http://vm.marist.edu/~vmshare/browse.cgi?fn=VMHIST01&ft=NOTE
http://vm.marist.edu/~vmshare/browse.cgi?fn=VMHIST02&ft=NOTE
http://vm.marist.edu/~vmshare/browse.cgi?fn=VMHIST03&ft=NOTE
http://vm.marist.edu/~vmshare/browse.cgi?fn=VMHIST04&ft=NOTE
http://vm.marist.edu/~vmshare/browse.cgi?fn=VMHIST05&ft=NOTE
http://vm.marist.edu/~vmshare/browse.cgi?fn=VMHIST06&ft=NOTE
http://vm.marist.edu/~vmshare/browse.cgi?fn=VMHIST07&ft=NOTE
http://vm.marist.edu/~vmshare/browse.cgi?fn=VMHIST08&ft=NOTE

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Xah Lee's Unixism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Newsgroups: comp.lang.perl.misc,alt.folklore.computers,comp.lang.lisp,comp.lang.python,comp.unix.programmer
Date: Thu, 02 Sep 2004 10:13:28 -0600
Rupert Pigott writes:
Remember NeXTStep ?

> As for following standards thats just plain sense. Note the Mac OS 10
> / Darwin uses a unix kernel because of all the problems with
> interoperabillity OS 9 had with talking to Windows and Unix boxes.

Which I believe is derived from a Mach uKernel... The "UNIX" bits are the FreeBSD userland utilities that surround it.


a cmu effort along with various andrew activities and camelot ... minor recent ref:
https://www.garlic.com/~lynn/2004h.html#42 Interesting read about upcoming K9 processors

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Xah Lee's Unixism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Newsgroups: comp.lang.perl.misc,alt.folklore.computers,comp.lang.lisp,comp.lang.python,comp.unix.programmer
Date: Thu, 02 Sep 2004 10:19:23 -0600
jmfbahciv writes:
And everybody seems to think that those people never talked to each other. Even boasting about whose is bigger, faster, and longer would transmit new ideas among the bit setters.

some number were co-workers on ctss before some went to 5th floor and multics and others went to science center on the 4th floor. north side of 545 tech sq 1st floor had lunch room on the east side and lounge on west side; besides running into people in the elevator ... there were coffee breaks and lunch in the lunch room and after work in the lounge.

melinda, on her site has historical write up with some early ctss, multics, cp/cms lore:
http://www.leeandmelindavarian.com/Melinda#VMHist

an earlier version was posted in eight parts to vmshare computer conferencing ... vmshare archive:
http://vm.marist.edu/~vmshare/browse.cgi?fn=VMHIST01&ft=NOTE
http://vm.marist.edu/~vmshare/browse.cgi?fn=VMHIST02&ft=NOTE
http://vm.marist.edu/~vmshare/browse.cgi?fn=VMHIST03&ft=NOTE
http://vm.marist.edu/~vmshare/browse.cgi?fn=VMHIST04&ft=NOTE
http://vm.marist.edu/~vmshare/browse.cgi?fn=VMHIST05&ft=NOTE
http://vm.marist.edu/~vmshare/browse.cgi?fn=VMHIST06&ft=NOTE
http://vm.marist.edu/~vmshare/browse.cgi?fn=VMHIST07&ft=NOTE
http://vm.marist.edu/~vmshare/browse.cgi?fn=VMHIST08&ft=NOTE

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

slashdot drift on 360 clone by rca

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: slashdot drift on 360 clone by rca
Newsgroups: alt.folklore.computers
Date: Thu, 02 Sep 2004 12:04:55 -0600
Build Your Own Blade Server (a little different about clones)
http://it.slashdot.org/it/04/09/02/1424203.shtml?tid=137&tid=136
System/360 - Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/System/360

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of
ASCII,Invento
Newsgroups: alt.folklore.computers
Date: Thu, 02 Sep 2004 15:15:08 -0600
Greg Menke <gregm-news@toadmail.com> writes:
- In 2001, the 9 companies that make the 50 most often prescribed drugs for seniors spent 45 billion on marketing and 19 billion on R&D. I think there is considerable room for savings in drug costs- if people start getting motivated to put the screws to the drug companies.

the traditional marketing spin is that it generates the demand that, given the costs (including the marketing), lowers the per-unit costs. this is frequently a situation where the up-front R&D & infrastructure costs may dominate the per-unit production costs.

you see it in microprocessors and automobiles ... where there is not only a significant R&D budget ... but heavy up-front production facility investment. the fabrication plants for each newer generation of popular microprocessors seem to be doubling in cost while their expected lifetimes seem to be decreasing (larger and larger fixed infrastructure costs and increasing R&D costs having to be amortized over a shorter and shorter period of time).

some parts of the computer industry may actually have the ratio of marketing budget to R&D budget more like 3:1 or 5:1 ... in theory generating the demand that can cover the infrastructure fixed costs.

it is somewhat like the insurance analogy .... you have to spread the premiums across a large enuf base to cover the pay-outs .... aka i have some recollection of reading that half of fed. fema flood insurance payouts year after year go to the same state ... which wouldn't likely be possible if you didn't have the other 49 states contributing to the fund (there was some footnote that it would have been cheaper to pay people in that state to stop building on these things called flood plains ... where it repeatedly floods nearly every year).

so one possible scenario is that the marketing budget triples these costs (2/3rds marketing, 1/3rd R&D) ... but it may increase the demand by a factor of ten. if the price of the drug is heavily weighted to fixed upfront costs, rather than production & distribution .... then cutting the marketing may cut the demand by a factor of ten ... which could increase the price for specific drugs by a factor of 3-4.
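
a quick back-of-the-envelope version of that arithmetic (all numbers are hypothetical, chosen only to match the ratios in the paragraph above: R&D of 1 cost unit, marketing of 2, marketing assumed to raise demand 10x, and per-unit production cost taken as negligible):

#include <stdio.h>

int main(void)
{
    double rd = 1.0, marketing = 2.0;        /* up-front costs, arbitrary units */
    double demand_with = 10.0, demand_without = 1.0;

    double price_with    = (rd + marketing) / demand_with;   /* 0.30 */
    double price_without = rd / demand_without;              /* 1.00 */

    printf("per-unit price with marketing:    %.2f\n", price_with);
    printf("per-unit price without marketing: %.2f\n", price_without);
    printf("ratio: %.1fx\n", price_without / price_with);    /* ~3.3x */
    return 0;
}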

one question would be whether it is possible to get the upfront investment from the money people if there isn't some follow-through plan to generate the demand.

the scenario in the us automobile market in the 60s ... i don't remember anybody complaining about the marketing budget ... it was more like enuf money wasn't being spent in other places in the process (possibly going to dividends, salaries, bonuses, etc) ... which seemed to make them more susceptible to foreign competition.

in the drug market ... can two companies go head-to-head ... and the one with a large marketing budget actually take market share away from the company with no marketing at all ... and as a result charge less per unit (because of increased volume and manufacturing scale) ... and be more likely to be viable long term?

so one might claim that large marketing budgets might be indicative of industry with large amount of competition ... and an industry with little or nothing spent on marketing ... might be indicative of little or no competition. so is lot of competition good or bad ... and/or does most marketing have anything to do with competition.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Xah Lee's Unixism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Newsgroups: comp.lang.perl.misc,alt.folklore.computers,comp.lang.lisp,comp.lang.python,comp.unix.programmer
Date: Thu, 02 Sep 2004 15:57:38 -0600
Alan Balmer writes:
The shuttle boosters are 3.7m diameter. Quite a bit larger than the gage of any railroad I've ever seen.

but they did have to be transported from utah to florida ... so while the gauge may not have been an issue ... there were things like bridges, tunnels, etc. My understanding was the sectioning was specifically because of length transportation issues.

i have some recollection of competing bids building single unit assemblies at sea coast sites allowing them to be barged to florida. supposedly the shuttle boosters were sectioned specifically because they were being fabricated in utah and there were transportation constraints.

shortly after the disaster ... some magazine carried a story spoof about columbus being told that his ships had to be built in the mountains where the trees grew ... and because of the difficulty of dragging them down to the sea ... they were to be built in sections ... and then tar would be used to hold them together when they were put to sea.

earlier thread on this subject
https://www.garlic.com/~lynn/2001c.html#83 CNN reports...

this has them as 149 feet long and 12 feet in diameter, in four sections, from utah
http://www.analytictech.com/mb021/shuttle1.htm

... making each section about 40 feet long. 12 feet high and wide on a flatbed .... 15-16 feet high (on a flatbed) clears bridges and overpasses, and 12 feet wide should hopefully be within bridge width restrictions.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Losing colonies

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Losing colonies
Newsgroups: alt.folklore.computers
Date: Thu, 02 Sep 2004 20:55:12 -0600
different aspect about language and brain interaction

Brain scans highlight how Chinese language and learning pathways differ from alphabet-based languages like English
http://www.signonsandiego.com/news/science/20040902-0727-chinesedyslexia.html

Dyslexia not the same in every culture
http://www.usatoday.com/news/health/2004-09-02-dyslexia_x.htm

so the article talks about how different parts of the brain are used for learning chinese and english ... but ...
It does not mean Chinese dyslexics might be able to use different portions of their brain and learn to read English signs and instructions more easily. Once a person learns to read they tend to use the same circuitry regardless of the second language and its alphabet, Eden said.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Xah Lee's Unixism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Newsgroups: comp.lang.perl.misc,alt.folklore.computers,comp.lang.lisp,comp.lang.python,comp.unix.programmer
Date: Fri, 03 Sep 2004 10:45:24 -0600
SM Ryan writes:
It's nice to know people still have time to work on really important things.

was also responsible for adventure inside the company ... recent post in a.o.m
https://www.garlic.com/~lynn/2004k.html#38 Adventure

slight reference to the internal network in above
https://www.garlic.com/~lynn/subnetwork.html#internalnet

one of the arguments we used when security sweeps to find all copies were proposed ... was that a public entertainment area would be less of a problem than attempting to outright & totally outlaw such activities. for one thing, a single (trusted?) entertainment area would use less disk space than lots of individual copies disguised (to evade the security sweep).

random past adventure threads
https://www.garlic.com/~lynn/98.html#56 Earliest memories of "Adventure" & "Trek"
https://www.garlic.com/~lynn/99.html#169 Crowther (pre-Woods) "Colossal Cave"
https://www.garlic.com/~lynn/2000b.html#72 Microsoft boss warns breakup could worsen virus problem
https://www.garlic.com/~lynn/2000d.html#33 Adventure Games (Was: Navy orders supercomputer)
https://www.garlic.com/~lynn/2001m.html#14 adventure ... nearly 20 years
https://www.garlic.com/~lynn/2001m.html#17 3270 protocol
https://www.garlic.com/~lynn/2001m.html#44 Call for folklore - was Re: So it's cyclical.
https://www.garlic.com/~lynn/2003f.html#46 Any DEC 340 Display System Doco ?
https://www.garlic.com/~lynn/2003i.html#69 IBM system 370
https://www.garlic.com/~lynn/2003l.html#40 The real history of computer architecture: the short form
https://www.garlic.com/~lynn/2004c.html#34 Playing games in mainframe
https://www.garlic.com/~lynn/2004g.html#2 Text Adventures (which computer was first?)
https://www.garlic.com/~lynn/2004g.html#7 Text Adventures (which computer was first?)
https://www.garlic.com/~lynn/2004g.html#49 Adventure game (was:PL/? History (was Hercules))
https://www.garlic.com/~lynn/2004g.html#57 Adventure game (was:PL/? History (was Hercules))

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of
 ASCII,Invento
Newsgroups: alt.folklore.computers
Date: Fri, 03 Sep 2004 11:48:15 -0600
Anne & Lynn Wheeler writes:
so one might claim that large marketing budgets might be indicative of industry with large amount of competition ... and an industry with little or nothing spent on marketing ... might be indicative of little or no competition. so is lot of competition good or bad ... and/or does most marketing have anything to do with competition.

somewhat more in boyd's terms
https://www.garlic.com/~lynn/subboyd.html#boyd

a static, homogeneous, well-studied environment can have a single central authority making uniform decisions for the whole infrastructure more efficiently ... since it presumably would understand the global optimization issues; aka you eliminate (internal) competitive forces (and marketing) as superfluous and inefficient.

the issue is that a changing &/or non-homogeneous environment isn't going to be well-studied and well-understood enough for a single central authority to make globally uniform decisions that are either globally or individually more efficient. so in a changing &/or non-homogeneous environment ... there will tend to be a larger degree of variability as well as competition (and potentially things like marketing). the theory is that for a large infrastructure to be agile and adaptable across a wide range of conditions ... the operation will tend to be much more distributed, with a greater degree of local optimization.

So part of the issue is whether a more changeable, adaptable, non-uniform operation can be more efficient than a uniform, globally centralized operation. Part of the trade-off would seem to be whether the environment is relatively static and uniform .... or whether there is a high degree of change and local variability.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Xah Lee's Unixism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Newsgroups: comp.lang.perl.misc,alt.folklore.computers,comp.lang.lisp,comp.lang.python,comp.unix.programmer
Date: Fri, 03 Sep 2004 12:09:30 -0600
Alan Balmer writes:
The first disaster was due to (possibly inferior) gaskets and inferior judgment on launch day. The second was falling foam, and inferior realization of the gravity of the problem. I'm not clear on what either had to do with Utah.

at the time of the 1st disaster ... the claim was that the utah bid was the only solution that required manufacturing the boosters in sections for transportation and subsequent re-assembly in florida with gaskets. the assertion was that none of the other solutions could have had a failure because of gaskets ... because they didn't have gaskets (having been manufactured as a single unit).

so the failure cause scenario went (compared to solutions that didn't require gaskets and manufacturing in sections)

disaster because of inferior(?) gaskets
inferior(?) gaskets because of gaskets
gaskets because of the transportation sectioning requirement
transportation sectioning requirement because the sections were manufactured in utah

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 360

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 360
Newsgroups: bit.listserv.ibm-main
Date: Fri, 03 Sep 2004 13:49:37 -0600
aw288@ibm-main.lst (William Donzelli) writes:
Getting an S/360 running would be a very major project, but not completely unreasonable. There are quite a few folks that have decent sized machines running (mostly DECs), and sometimes, they work with little fiddling around. I don't know of any running S/360s in the United States, but I think this is due to their rarity, power requirements, and generally unpopularity* amongst the ancient computer crowd.

there is the issue of memory technology ... stories of a small group of ladies in &/or around kingston who were periodically brought back to thread ferrite cores (long after they had retired) ... because of some critical 360s somewhere.

some quick search engine use looking for ferrite core articles and threading them by hand (mostly passing references)
http://www.eet.com/special/special_issues/millennium/milestones/bobeck.html
http://pages.sbcglobal.net/couperusj/Memory.html
http://www-901.ibm.com/servers/eserver/tw/zseries/download/360revolution_040704.pdf
http://store.cookienest.com/reviews/memories-that-shaped-an-industry-decisions-leading-to-ibm-system-360-id0262661675.php
http://www.ieee.org/organizations/history_center/Singapore/Pugh.html

there were footnotes (in some of the above) that the improved process efficiency that ibm developed in producing ferrite core memories was one of the reasons for 360 success (it could turn out more machines with more ferrite core).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Xah Lee's Unixism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Newsgroups: comp.lang.perl.misc,alt.folklore.computers,comp.lang.lisp,comp.lang.python,comp.unix.programmer
Date: Fri, 03 Sep 2004 14:22:55 -0600
Alan Balmer writes:
No, because they were *not* manufactured on the launch pad. Transportation would be required from any other place - in Utah or not.

Even if they were manufactured on the launch pad, there would be more than one piece.


as mentioned in the earlier post ... supposedly all the other competing bids were at sites on various shores that allowed barging of a single, completed, manufactured unit to florida w/o sectioning, and no other designs had gaskets.

supposedly utah was the *only* bid that required sectioning to meet various overland transportation requirements.

previous post
https://www.garlic.com/~lynn/2004k.html#58

earlier reply to your comment about ... "shuttle boosters are 3.7m diameter" ... with a comment about the alternative single unit assemblies being barged to florida.
https://www.garlic.com/~lynn/2004k.html#54

as repeatedly posted ... as far as i know from all the stuff from the period ... the comments were that the utah design was the *only* design that had to be built in sections (because of transportation issues) and re-assembled in florida and the only design that involved such gaskets. all other designs were built on various shores in single pieces and would be barged as single piece to florida and no gaskets were involved (because they were manufactured in single pieces and barged to florida in whole pieces).

the difference between barging and train ... was that there are significantly less length, width, height, dimensional restrictions on barged items compared to dimensional restrictions on overland train .... because of bridges, tunnels, curves, clearances from adjacent traffic, clearances involving any sort of structures near tracks.

i was under the impression that barging was fairly straightforward from the east coast, gulf coast, many major rivers, etc. i would guess that anyplace you could get a ship that was 160' or larger ... you could transport a barged assembly.

in fact, a shipyard that was accustomed to building a ship as a single assembly (w/o needing gaskets to hold it together) could probably also build a single-assembly booster rocket ... and barge it to florida.

i'm not sure how to catalog all the possible sites &/or shipyards that could build a single-section unit (things like ships that are built in a single section w/o gaskets to hold different sections together) ... some quick googling about ports
http://www.aapadirectory.com/cgi-bin/showportprofile.cgi?id=3709&region=US

turns up corpus christi ... they handle ships built in single sections (w/o gaskets to hold them together) up to 1000 ft long and 45 ft depth. they also mention some docks that are barge use only and only handle 260 ft length and 16 ft depth (course there probably isn't much of a height or width restriction with overhanging adjacent structures).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Xah Lee's Unixism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Newsgroups: comp.lang.perl.misc,alt.folklore.computers,comp.lang.lisp,comp.lang.python,comp.unix.programmer
Date: Fri, 03 Sep 2004 15:24:46 -0600
Alan Balmer writes:
No, because they were *not* manufactured on the launch pad. Transportation would be required from any other place - in Utah or not.

Even if they were manufactured on the launch pad, there would be more than one piece.


i have a vague recollection of a picture of the saturn v 1st stage being barged to florida ... having been built someplace as a single assembly ... and not requiring re-assembly in florida with gaskets.

can you imagine it being built in sections that had to meet overland train transportation restrictions? ... not only would it have to be sectioned into 40ft long pieces .... but probably each 40ft section would have to be cut into slivers since it would otherwise be too big/wide .... and then assembled with huge amounts of gaskets in florida ... not only around the circumference but huge amounts of gaskets up and down its length.

let's see what a search engine comes up with for a saturn v 1st stage reference ... aha ... it turns out that wikipedia is your friend
https://en.wikipedia.org/wiki/Saturn_V#Stages

first stage:
https://en.wikipedia.org/wiki/S-IC

is 138ft ... about the same length as the assembled shuttle booster rocket ... but 33ft in diameter. can you imagine the saturn v first stage being built someplace in 40ft sections ... and also split down its length ... sort of like a pie ... say into 8ths ... what is the straight line between the end points of a 1/8th arc of a 33ft diameter circle ...

the circumference is a little over 103ft so 1/8th of that is about a 13ft arc ... which would make the straight line between the end-points of the arc about 12ft .... which might just about fit overland train transportation restrictions. so the saturn v first stage could be manufactured in 32 sections ... transported to florida by train and re-assembled with gaskets.
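
the same arithmetic spelled out (the split into eighths is purely hypothetical, as above):

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double pi = 3.14159265358979;
    double diameter = 33.0;                    /* S-IC diameter in feet        */
    double circumference = pi * diameter;      /* ~103.7 ft                    */
    double arc   = circumference / 8.0;        /* ~13 ft of arc per eighth     */
    double chord = diameter * sin(pi / 8.0);   /* ~12.6 ft straight-line width */

    printf("circumference %.1f ft, 1/8 arc %.1f ft, chord %.1f ft\n",
           circumference, arc, chord);
    return 0;
}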

saturn v second stage
https://en.wikipedia.org/wiki/S-II
doesn't give the dimensions ... picture seems to imply about the same circumference but not as long.

saturn v third stage
https://en.wikipedia.org/wiki/S-IVB

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Xah Lee's Unixism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Newsgroups: comp.lang.perl.misc,alt.folklore.computers,comp.lang.lisp,comp.lang.python,comp.unix.programmer
Date: Fri, 03 Sep 2004 15:34:39 -0600
Alan Balmer writes:
I'm not a fan of Mr Hatch, but blaming him for the shuttle disaster(s) is somewhat over the top. Why not blame President Bush? That's the popular thing nowadays.

i never made any reference to people or personalities ... somebody else did.

i just repeated the claims made after the disaster that the majority of the other launch components were single-section and barged to the launch site (as well as the claims about the alternative booster proposals).

the issue of the gaskets is pretty well established as being required because of the sectional manufacturing ... predicated on the dimensional restrictions of overland train transportation ... which was perceived to have been pretty unique ... when other major deliverables have been built in a single section and barged to the launch site.

the leap from a purely functional standpoint to assertions about personalities ... is somebody else's doing.

i would say that any argument about the personality issues ... shouldn't creep into the purely straightforward issue of whether all manufactured assemblies require sectioning because of transportation restrictions. lots of assemblies are made in single sections and barged to florida.

i can see taking issue with somebody (else) over their possible personality assertions ... but that shouldn't also result in comments about whether sectioning is required for all possible modes of transportation.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Xah Lee's Unixism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Newsgroups: comp.lang.perl.misc,alt.folklore.computers,comp.lang.lisp,comp.lang.python,comp.unix.programmer
Date: Fri, 03 Sep 2004 16:54:32 -0600
"John Thingstad" writes:
Morton Thiokol also built the Titan solid rocket booster along similar lines. It has a reasonably good record. An extra gasket was added since it was supposed to be used for human flight. From an engineering standpoint I can't see how you are supposed to mold solid rocket fuel for the booster in one piece. But then I am not a rocket scientist. Anyhow space flight is a risky endeavor. If it wasn't the booster then it would have been something else. One in every 50 or so launches will fail. Saying it was as good as murder is preposterous. The people who launched knew the risks. Sitting atop 10000 liters of fuel undergoing a controlled explosion will probably never be entirely safe.

the two spoof stories in the aftermath

1) one about sectioning the boats for columbus because they had to be built in the mountains where the trees grew and then used tar to stick the sections together for the trip across the atlantic. lots of ships were lost at sea for all sorts of reasons ... but hopefully none because the ship was built in sections and tar was used to stick them together.

2) way back when, because a wagon slipped off the trail and down the side of the mountain and people died ... congress decreed that there would be no more travel across the appalachian trail ... hardly consistent with the tens of thousands of traffic deaths each year.

... however, i think your reply is intended possibly for somebody else's post ... not mine.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Xah Lee's Unixism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Newsgroups: comp.lang.perl.misc,alt.folklore.computers,comp.lang.lisp,comp.lang.python,comp.unix.programmer
Date: Fri, 03 Sep 2004 17:43:12 -0600
Anne & Lynn Wheeler writes:
if you choose your road routes carefully enuf ... you can miss a lot of the problems that you would run into moving by train. we had one route where i was on the peak of the house and had to grab wires over the side .... lift the wires up to clear the peak and walk the wires back as the house moved under.

oh ... and people have died doing that ...

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Xah Lee's Unixism

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Newsgroups: comp.lang.perl.misc,alt.folklore.computers,comp.lang.lisp,comp.lang.python,comp.unix.programmer
Date: Sat, 04 Sep 2004 08:58:06 -0600
jmfbahciv writes:
That was my next question :-). How did you manage?

the first i remember was a two-story house with a steep roof. i got to demolish the brick chimney in the middle of the house ... and remove the bricks ... since once the house was lifted for the timbers to go under and moved, there wouldn't be anything to support the chimney. when the house came to the wires, i went up thru the hole in the roof where the chimney had been; walked out to the edge, gathered the wires and lifted them above the peak ... and walked with them as the house moved under the wires. i was 11-12. several years later, one of my uncles fell off the roof of a house being moved and died.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Question About VM List

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Question About VM List
Newsgroups: bit.listserv.ibm-main,bit.listserv.vmesa-l,alt.folklore.computers
Date: Sat, 04 Sep 2004 09:15:29 -0600
sergiomvs@ibm-main.lst (Sergio Lima) writes:
Hello. Listers, What are the name of VM list ? Thanks.

Sergio Lima Costa System Consultant Caixa Economica Federal Sao Paulo - Brasil


VM/ESA and z/VM Discussions <VMESA-L@LISTSERV.UARK.EDU>

appears to be a relatively standard listserv processor ... an evolution of the one developed for bitnet ... which was sort of a clone of the internal corporate list server originally supporting vmtools (and later pctools when there were pcs) ... which was partially a fallout of something called tandem memos ... minor reference from old jargon
http://www.212.net/business/jargont.htm

history of listserv, bitnet, misc. other
http://www.lsoft.com/products/listserv-history.asp

other bitnet/earn refs:
https://www.garlic.com/~lynn/subnetwork.html#bitnet

the usenet bit.listserv hierarchy is typically listserv/usenet gateways.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/


