List of Archived Posts

2001 Newsgroup Postings (04/22 - 05/24)

April Fools Day
Blame it all on Microsoft
Block oriented I/O over IP
"Bootstrap"
Block oriented I/O over IP
SIMTICS
Blame it all on Microsoft
Blame it all on Microsoft
Blame it all on Microsoft
MIP rating on old S/370s
SIMTICS
High Level Language Systems was Re: computer books/authors (Re: FA:
Blame it all on Microsoft
High Level Language Systems was Re: computer books/authors (Re: FA:
Climate, US, Japan & supers query
Blame it all on Microsoft
Pre ARPAnet email?
Pre ARPAnet email?
Pre ARPAnet email?
SIMTICS
Pre ARPAnet email?
High Level Language Systems was Re: computer books/authors (Re: FA:
High Level Language Systems was Re: computer books/authors (Re: FA:
Pre ARPAnet email?
Pre ARPAnet email?
Pre ARPAnet email?
Can I create my own SSL key?
Can I create my own SSL key?
Pre ARPAnet email?
IBM Reference cards.
Pre ARPAnet email?
High Level Language Systems was Re: computer books/authors (Re: FA:
Blame it all on Microsoft
Can I create my own SSL key?
Blame it all on Microsoft
Can I create my own SSL key?
Can I create my own SSL key?
Can I create my own SSL key?
IBM Dress Code, was DEC dress code
Can I create my own SSL key?
Can I create my own SSL key?
Where are IBM z390 SPECint2000 results?
OT: Ever hear of RFC 1149? A geek silliness taken wing
Can I create my own SSL key?
Where are IBM z390 SPECint2000 results?
VM/370 Resource Manager
Can I create my own SSL key?
Where are IBM z390 SPECint2000 results?
Where are IBM z390 SPECint2000 results?
Can I create my own SSL key?
"IP Datagrams on Avian Carriers" tested successfully
OT: Ever hear of RFC 1149? A geek silliness taken wing
Pre ARPAnet email?
Pre ARPAnet email?
line length (was Re: Babble from "JD" <dyson@jdyson.com>)
Pre ARPAnet email?
Need explaination of PKI and Kerberos
line length (was Re: Babble from "JD" <dyson@jdyson.com>)
Wireless Interference
Design (Was Re: Server found behind drywall)
Estimate JCL overhead
Estimate JCL overhead
Modem "mating calls"
Modem "mating calls"
Design (Was Re: Server found behind drywall)
Stoopidest Hardware Repair Call?
line length (was Re: Babble from "JD" <dyson@jdyson.com>)
line length (was Re: Babble from "JD" <dyson@jdyson.com>)
Estimate JCL overhead
line length (was Re: Babble from "JD" <dyson@jdyson.com>)
Modem "mating calls"
line length (was Re: Babble from "JD" <dyson@jdyson.com>)
Stoopidest Hardware Repair Call?
CS instruction, when introducted ?
CS instruction, when introducted ?
Apology to Cloakware (open letter)
Stoopidest Hardware Repair Call?
Apology to Cloakware (open letter)
digital signature and certificates in xml
digital signature and certificates in xml
The Mind of War: John Boyd and American Security
Passwords
The Mind of War: John Boyd and American Security
The Mind of War: John Boyd and American Security

April Fools Day

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: April Fools Day
Newsgroups: alt.folklore.computers
Date: Sun, 22 Apr 2001 20:32:44 GMT
jata@aepiax.net (Julian Thomas) writes:
Yes, I remember this well. Let's see - xxx = HAL <gg>?

at least H worked for them ... and then formed H&L (aka HAL) with a lot of funding from somebody before being totally taken over by them. supposedly one of the reasons for American's non-stop MD11 out of San Jose was that HAL had standing seat reservations on the plane every week (on the other hand, gates were tight at sfo).

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Blame it all on Microsoft

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Blame it all on Microsoft
Newsgroups: comp.os.linux.advocacy,comp.theory,comp.arch,comp.object
Date: Sun, 22 Apr 2001 20:41:35 GMT
Toon Moene writes:
The problem with VMS is not that it's VMS - it's that you spoil your eyes on the microfiche trying to read the source.

there is the joke/story about MVS from the late '70s where somebody wanted to get the exact (microfiche) listings that corresponded to the binaries that they were executing. after the company spent a couple million looking into the opportunity, they finally concluded that there was no way of absolutely guaranteeing that an exact set of microfiche listings could be created that exactly corresponded to the binaries being executed.

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Block oriented I/O over IP

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Block oriented I/O over IP
Newsgroups: comp.arch
Date: Sun, 22 Apr 2001 20:58:22 GMT
"Stephen Fuld" writes:
IBM's MVS can use a system inherited from older IBM operating systems where a single disk is shared. Unfortunately, locking is at the volume level.

IBM's TPF (formerly called ACP) supports locking at a record level and is very high performance. It uses locking commands in the disk controller.


TPF/ACP started out using the standard IBM disk controller reserve/release commands (from the 360 days). In the early-to-mid '70s, the 3830 disk controller was enhanced (originally called the ACP-RPQ) to support fine-grain logical locks (i.e. software can use nearly any naming convention for the locks ... including a convention to achieve record-level locking).

For a while, my wife was in POK responsible for "loosely-coupled" architecture and created the Peer-Coupled Shared Data architecture, which became the basis for IMS hot-standby and sysplex.

When she and I ran the skunk-works that developed HA/CMP ... we supported shared SCSI in "mode1" (mostly 1+1, but possibly N+1, N+M), "mode2" (independent operation with either side able to fail over to the other, with degradation), and "mode3" (concurrent activity).

I got to do the design and the first prototype implementation for the distributed lock manager. One of the benefits was working with some of the DBMS vendors that had versions running on VAX-clusters; at least two of them provided us with "top ten things done wrong in VAX-clusters" lists ... and since I had the opportunity to start from scratch (plus a long history with 360 & 370 loosely-coupled configurations), they figured that I shouldn't make the same mistakes.

I believe those semantics are still used ... even in the case where at least one of the DBMS vendors has done a generic version that allows it to run on other platforms.

I've even run across relatively recent marketing blurbs by other vendors that read nearly word-for-word the same as some stuff I wrote in the late '80s for HA/CMP.

random refs:
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

"Bootstrap"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "Bootstrap"
Newsgroups: alt.folklore.computers
Date: Mon, 23 Apr 2001 18:21:13 GMT
jcmorris@jmorris-pc.MITRE.ORG (Joe Morris) writes:
2914?

No, later than that - 3xxx series.

3914. (Or maybe 3814? I don't recall if the S/370 flavor of the switch was considered an RPQ (x9xx) or a control unit (x8xx).)


the 2914(?) version was typically a blue box with rotary switches on the back panel ... something like 8-10(?) rotary switches with possibly 8?-16? positions ... effectively switching channel cables to processors.

i remember the 3914(?) version was beige, with displays and programmatic input.

the disk engineering labs used a number of them for switching test cells between different processors.

i'm on a trip ... so don't have references at the moment to cross-check.

random refs:
https://www.garlic.com/~lynn/subtopic.html#disk

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Block oriented I/O over IP

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Block oriented I/O over IP
Newsgroups: comp.arch
Date: Mon, 23 Apr 2001 18:44:18 GMT
"Bill Todd" writes:
W.r.t. the HA/CMP DLM, my impression had been that it was developed by CLAM Associates around 1994 (and at least at the interface level was almost a dead-exact clone of the VMS DLM - likely to make OPS happy), so was the DLM you're referring to above a prototype that was its precursor or is my understanding just incorrect?

my wife and I subcontracted out a lot of HA/CMP development work to CLaM starting in '89 when they were just a three-person shop of former IBM'ers in Cambridge (it is actually "C", "L", & "M" ... where "C" at one time had been my wife's manager in G'burg when she worked on FS (future system) advanced I/O).

Some of the semantics were similar to the VMS DLM (in part because at least two of the DBMS vendors were adapting their cluster products from VMS to ha/cmp). However, other pieces of the semantics and various internal pieces were different, specifically because the VMS versions were on their lists of things done "wrong".
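For readers unfamiliar with the VMS-style semantics being referenced: a lock request names a mode, and it is granted only if that mode is compatible with every mode already held on the resource. A minimal sketch in Python (the six mode names and the compatibility matrix here are the classic published VMS DLM ones; this is purely an illustration, not HA/CMP or CLaM code):

```python
# Classic six lock modes from a VMS-style distributed lock manager:
# NL=null, CR=concurrent read, CW=concurrent write,
# PR=protected read, PW=protected write, EX=exclusive.
MODES = ["NL", "CR", "CW", "PR", "PW", "EX"]

# Standard compatibility matrix: COMPAT[held] is the set of request
# modes that can be granted while a lock in mode `held` is held.
COMPAT = {
    "NL": {"NL", "CR", "CW", "PR", "PW", "EX"},
    "CR": {"NL", "CR", "CW", "PR", "PW"},
    "CW": {"NL", "CR", "CW"},
    "PR": {"NL", "CR", "PR"},
    "PW": {"NL", "CR"},
    "EX": {"NL"},
}

def grantable(held_modes, requested):
    """True if `requested` is compatible with every currently held mode."""
    return all(requested in COMPAT[held] for held in held_modes)
```

For example, a protected-read request is grantable alongside other PR and CR holders, but not alongside a PW or EX holder.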

I got to do the original design and prototype ... but CLaM did the majority of the actual product.

as per
https://www.garlic.com/~lynn/95.html#13

we had concurrent access running in '91 (we had non-concurrent access with fail-over in '89) ... and were looking at scaling the support during '92.

random refs:
https://www.garlic.com/~lynn/2000c.html#9
https://www.garlic.com/~lynn/2000c.html#56
https://www.garlic.com/~lynn/2000c.html#77

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

SIMTICS

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: SIMTICS
Newsgroups: alt.os.multics
Date: Tue, 24 Apr 2001 01:13:04 GMT
ehrice@his.com (Edward Rice) writes:
Off-topic for alt.os.multics but on-topic (sort of) in alt.folklore.computers -- Lynn Wheeler has written in some detail about the many-Linux-instances efforts, and if you're interested in that you really should go back and find what he's written. If you grep for his name and "cluster" and "S/390" you'll come up with at least part of it, but the topic has surfaced at least twice.

note that this wasn't a cluster but VM (virtual machine) operating in a single LPAR (an LPAR is sort of a stripped-down flavor of VM running in the microcode of the real hardware). So it was the real processor ... with the microcode set to partition the real hardware into multiple "LPAR" logical machines ... and the VM operating system running in one of the LPARs providing multiple virtual machines ... in this case, 41,400 virtual machines, each running a different copy of Linux.

copied from a posting
https://www.garlic.com/~lynn/2000b.html#8

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Blame it all on Microsoft

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Blame it all on Microsoft
Newsgroups: comp.os.linux.advocacy,comp.theory,comp.arch,comp.object
Date: Wed, 25 Apr 2001 18:30:22 GMT
Kilgallen@eisner.decus.org.nospam (Larry Kilgallen) writes:
Mr Stallman did not invent the concept, he just advocates something patterned after the operating procedures used with the PDP-1 at MIT about 1962 or so. Quite a bit before anything called "XMODEM" I would say.

while not '62 ... I worked with both HASP (starting mid '67) and CP/67 (starting 1/68), both of which had distributed source. I think that it was between the june 23rd, 1969 unbundling announcement (where everything became separately priced ... presumably due, at least in part, to various gov. activities) and 370 that a lot more attention was paid to software ownership (copyright statements started showing up in source, and then later in the '70s came the big furor over "OCO" ... object-code-only ... a debate that crops up in some newsgroups & mailing lists to this day).

i believe early machines in the '50s may have had freely available source ... but there would have been much less of it. One could contend that the gov. activities to make everything separately priced had as much to do with the situation as anything else.

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Blame it all on Microsoft

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Blame it all on Microsoft
Newsgroups: comp.os.linux.advocacy,comp.theory,comp.arch,comp.object
Date: Wed, 25 Apr 2001 21:37:11 GMT
Kilgallen@eisner.decus.org.nospam (Larry Kilgallen) writes:
When I entered into this discussion thread, it was not merely about the availability of source but about the "hacker" ethic of each person putting their changes back into a common source pool, all the time. At the MIT PDP-1 this was accomplished with physical access to the paper tape tray that held popular programs. That closely-knit community was what I was relating, not the mere availability of source.

both HASP and CP/67 had extremely strong user communities of people putting source back into the product to be redistributed.

for instance ... one of the things I did as an undergraduate on CP/67 was implement all the TTY/ASCII terminal support, which was incorporated back into the source and distributed. Tom Van Vleck ... somewhere on the multics "site" ... has a story about modifying the ASCII support on one of the MIT machines running a production service (national urban planning something or other, i believe) and having it crash and re-ipl/boot 26 times in a single day; some comment that one of the drivers behind doing the (new) multics filesystem was so that a crash & re-ipl didn't take much of 1st shift.

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Blame it all on Microsoft

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Blame it all on Microsoft
Newsgroups: comp.os.linux.advocacy,comp.theory,comp.arch,comp.object
Date: Wed, 25 Apr 2001 21:40:30 GMT
Michael Lyle writes:
Wow. I would never claim that SNA was superior to TCP/IP, or even NCP. If I recall correctly, SNA was a strict tree topology, with no peer to peer communication possible. A graph (as used by TCP/IP and its ancestors) seems much more robust, scalable and manageable than a tree for communications. I never read much about the specifics of DECnet, so I can't comment, but I certainly felt that it was proprietary and far too complex.

there is strong conjecture that the (at least original) structure of SNA ... VTAM (the software monitor that ran in the mainframe) and NCP (not the arpanet NCP, but ibm's NCP, which ran in the terminal/line mainframe control unit) ... was largely precipitated by a project that I worked on as an undergraduate that originated the 360 PCM (plug-compatible manufacturer) mainframe control unit.

random ref:
https://www.garlic.com/~lynn/submain.html#360pcm

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

MIP rating on old S/370s

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: MIP rating on old S/370s
Newsgroups: bit.listserv.ibm-main
Date: Wed, 25 Apr 2001 21:55:16 GMT
jmaynard@CONMICRO.CX (Jay Maynard) writes:
370/168, or perhaps a small 4381...dunno about anything newer. It's definitely faster than a 370/158 and a 4341. (So's my PIII-500 laptop...I can walk around with enough machine to run MVS 3.8 faster than the first 370 I ever worked on for a living. Amazing, these computers...)

some 370 floating-point numbers:
https://www.garlic.com/~lynn/2000d.html#0

note that the 158 & the 3031 supposedly used the same processor engine ... the difference was that the 158 shared a single engine between the 370 microcode and the channel microcode. The 3031 had two dedicated "158" engines ... one exclusively for 370 microcode and one exclusively for channel I/O microcode (there was even a difference on benchmarks doing dedicated processing and no i/o).

the later 168-3 supposedly got over 3mips (as opposed to the earlier 168-1).

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

SIMTICS

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: SIMTICS
Newsgroups: alt.os.multics
Date: Wed, 25 Apr 2001 23:27:41 GMT
Anne & Lynn Wheeler writes:
copied from a posting
https://www.garlic.com/~lynn/2000b.html#8


oops, finger slip

https://www.garlic.com/~lynn/2000b.html#62

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

High Level Language Systems was Re: computer books/authors (Re: FA:

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: High Level Language Systems was Re: computer books/authors (Re: FA:
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 26 Apr 2001 17:50:31 GMT
Charles Richmond writes:
Assembly macros can get very complex...the set of macros available from IBM for our IBM 370 was a convoluted mass!!! IMHO, no one really understood what all of it did...

The IBM operating systems like MVS had a very large body of system & library services ... each with its own invocation and most with a corresponding "macro" to access the service. It wasn't so much that there were macros for the sake of having macros ... it was more a case that every system service and library service had one or more macros.

GET/PUT required at least a DCB (data control block) macro that initialized a lot of the I/O parameters, explicit OPEN & CLOSE macros that invoked the file open/close functions, and the GET/PUT macros that actually performed the data transfer. READ/WRITE was similar, but in addition would have an ECB (event control block) macro that generated the data structure for serialization, and at least a WAIT macro for serialization (and possibly POST).
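As an illustration of how those macros fit together, a minimal QSAM-style read loop might look something like the following (an illustrative fragment only, not a complete program; the DDNAME, labels, and buffer name are made up):

```
* sketch of the GET-path macros described above (QSAM, move mode)
         OPEN  (INDCB,(INPUT))      invoke file open for the dataset
LOOP     GET   INDCB,RECAREA        transfer the next 80-byte record
*        ... process the record ...
         B     LOOP
ATEOF    CLOSE (INDCB)              end-of-data exit; invoke file close
*
INDCB    DCB   DDNAME=SYSIN,DSORG=PS,MACRF=GM,RECFM=FB,LRECL=80,       X
               EODAD=ATEOF          I/O parameters + end-of-data exit
RECAREA  DS    CL80                 record buffer
```

READ/WRITE would be similar but with an ECB for serialization and a WAIT before the buffer is touched, as described above.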

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Blame it all on Microsoft

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Blame it all on Microsoft
Newsgroups: comp.os.linux.advocacy,comp.theory,comp.arch,comp.object
Date: Thu, 26 Apr 2001 18:19:21 GMT
Jan Vorbrueggen writes:
And MVS and VM had two different implementations of RSCS (the basis of BITNET), done by two independent groups I believe, which resulted in certain files crashing any MVS machine they happened to traverse when sent via BITNET (a store and forward network). Of course, RSCS is a resilient protocol, so when the other side came back again after the crash, the file was resent...

VM had RSCS (done originally in cambridge) ... HASP originally had some networking code that I believe was originally/partially developed by the Triangle Universities (at least some amount of the code in HASP made reference to them). That was ported over into JES2 (when HASP became JES2) ... and was generically labeled NJE (network job entry).

The JES2 NJE implementation shared some shortcomings with the ARPANET NCP ... networking, line-drivers, etc. protocols somewhat all mixed together ... no real "internetworking" support yet. The mixture of information corresponding to different levels of a "protocol stack" in the NJE headers caused them a lot of problems. Different versions of JES2/NJE would frequently not inter-operate ... and there were more than a few instances where one version of JES2/NJE attempting to process a network file from a different level of JES2/NJE would bring the whole system crashing down.

CPREMOTE/RSCS/VNET (i.e. VM's system, which went thru various naming iterations) had a much cleaner layered implementation with effectively internetworking & gateway support from just about its origins (a major reason that the internal network was larger than the whole arpanet/internet up thru about '85). A typical RSCS/VNET node might have some number of "native" drivers as well as JES2/NJE drivers ... as well as special drivers.

Because of the shortcomings in the NJE implementation (frequently mixing all sorts of stuff in their headers) ... it was not uncommon to find an RSCS node sitting between two JES2/NJE nodes ... where the RSCS node had drivers for all the different versions of NJE ... and would provide the appropriate header conversions in order to keep different JES2/NJEs from crashing each other.

Because of various corporate issues, the RSCS/VNET "native" drivers eventually fell into disuse in favor of the standardized NJE protocol (in part because RSCS/VNET operation was relatively protocol neutral while NJE was tightly wedded to its protocol ... even though the RSCS/VNET "native" drivers might have significantly higher thruput). But it still continued to be common to find RSCS/VNET nodes sitting between NJE nodes ... minimizing their ability to corrupt and crash each other with version and/or maintenance activity.

random refs:
https://www.garlic.com/~lynn/95.html#7 Who built the Internet? (was: Linux/AXP.. Reliable?)
https://www.garlic.com/~lynn/99.html#33 why is there an "@" key?
https://www.garlic.com/~lynn/99.html#34 why is there an "@" key?
https://www.garlic.com/~lynn/99.html#212 GEOPLEX
https://www.garlic.com/~lynn/2000b.html#29 20th March 2000
https://www.garlic.com/~lynn/2000b.html#43 Migrating pages from a paging device (was Re: removal of paging device)
https://www.garlic.com/~lynn/2000b.html#67 oddly portable machines
https://www.garlic.com/~lynn/2000c.html#29 The first "internet" companies?
https://www.garlic.com/~lynn/2000d.html#72 When the Internet went private
https://www.garlic.com/~lynn/2000e.html#14 internet preceeds Gore in office.
https://www.garlic.com/~lynn/2000e.html#15 internet preceeds Gore in office.
https://www.garlic.com/~lynn/2000g.html#24 A question for you old guys -- IBM 1130 information
https://www.garlic.com/~lynn/2001b.html#71 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001b.html#85 what makes a cpu fast
https://www.garlic.com/~lynn/2001c.html#4 what makes a cpu fast
https://www.garlic.com/~lynn/2001c.html#5 what makes a cpu fast

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

High Level Language Systems was Re: computer books/authors (Re: FA:

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: High Level Language Systems was Re: computer books/authors (Re: FA:
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 27 Apr 2001 22:46:17 GMT
"Charlie Gibbs" writes:
To be fair, there were times such a practice could be justified - or at least rationalized. Programmers had rock-bottom priority on our (single-tasking) Univac 9300, which was kept busy all day long running production jobs. In the case of one large program which took 20 minutes to assemble, it was often easier to patch the executable, especially if the fix was needed RIGHT NOW. This would go through several iterations over time, until eventually the program would collapse under the weight of 30 or 40 REP cards. Only then would I edit the source deck with all of the changes I had been marking up the listing with, and try to scrounge enough machine time for a re-assembly.

before I found out about REP cards ... I would repunch the binary original. This was in my undergraduate days ... I would have a 48hr shift of dedicated machine time on the weekend and didn't want to waste it on a 30-50 minute re-assembly. I got pretty good at being able to read card punch holes in binary.

random refs:
https://www.garlic.com/~lynn/93.html#15 unit record & other controllers
https://www.garlic.com/~lynn/93.html#17 unit record & other controllers
https://www.garlic.com/~lynn/93.html#23 MTS & LLMPS?
https://www.garlic.com/~lynn/94.html#53 How Do the Old Mainframes
https://www.garlic.com/~lynn/95.html#4 1401 overlap instructions
https://www.garlic.com/~lynn/96.html#24 old manuals
https://www.garlic.com/~lynn/97.html#21 IBM 1401's claim to fame
https://www.garlic.com/~lynn/98.html#9 Old Vintage Operating Systems
https://www.garlic.com/~lynn/98.html#15 S/360 operating systems geneaology
https://www.garlic.com/~lynn/99.html#59 Living legends
https://www.garlic.com/~lynn/99.html#100 Why won't the AS/400 die? Or, It's 1999 why do I have to learn how to use
https://www.garlic.com/~lynn/99.html#102 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
https://www.garlic.com/~lynn/99.html#130 early hardware
https://www.garlic.com/~lynn/2000.html#79 Mainframe operating systems
https://www.garlic.com/~lynn/2000.html#83 Ux's good points.
https://www.garlic.com/~lynn/2000b.html#77 write rings
https://www.garlic.com/~lynn/2000c.html#11 IBM 1460
https://www.garlic.com/~lynn/2000c.html#75 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#34 Assembly language formatting on IBM systems
https://www.garlic.com/~lynn/2000d.html#37 S/360 development burnout?
https://www.garlic.com/~lynn/2000d.html#46 Charging for time-share CPU time
https://www.garlic.com/~lynn/2000d.html#48 Navy orders supercomputer
https://www.garlic.com/~lynn/2001.html#3 First video terminal?
https://www.garlic.com/~lynn/2001.html#11 IBM 1142 reader/punch (Re: First video terminal?)
https://www.garlic.com/~lynn/2001b.html#0 Java as a first programming language for cs students
https://www.garlic.com/~lynn/2001b.html#22 HELP
https://www.garlic.com/~lynn/2001b.html#26 HELP
https://www.garlic.com/~lynn/2001b.html#27 HELP
https://www.garlic.com/~lynn/2001b.html#36 [OT] Currency controls (was: First OS?)
https://www.garlic.com/~lynn/2001b.html#71 Z/90, S/390, 370/ESA (slightly off topic)

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Climate, US, Japan & supers query

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Climate, US, Japan & supers query
Newsgroups: comp.sys.super,comp.arch
Date: Fri, 27 Apr 2001 22:58:14 GMT
David Brower writes:
Program trading makes money, and the analysis that drives it can consume as much cpu as you choose to throw at it. I'm given to believe that such applications are happily looking at the economics of beowulf-like intel clusters nowadays. Classical supercomputers really are dinosaurs whose carcasses are being eaten by the killer micros.

there was a story about a particular system in a large metropolitan skyscraper (50+ stories) ... the claim was that in 24hrs the system accounted for more profit than a 1-year lease on the whole building plus 1 year's salary for all the people working in the building (conversely, if the system was down for 24hrs ... it didn't make that).

totally random ref:
https://www.garlic.com/~lynn/95.html#13
https://www.garlic.com/~lynn/2001e.html#4

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Blame it all on Microsoft

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Blame it all on Microsoft
Newsgroups: comp.os.linux.advocacy,comp.theory,comp.arch,comp.object
Date: Sun, 29 Apr 2001 17:59:23 GMT
Toon Moene writes:
It's probably because I've been spoiled by CDC. Not only did we get all of the sources with our operating system (and the build jobs - which we threw away and re-did better), but also we learned that it was a good thing (technically, not just socially) to return our fixes for everyone to share - that way they would be incorporated in the standard distribution, which would save us a headache next time round ...

CP/67 and then VM/370 were both distributed in source. I believe that at one time there was some analysis showing there was twice as much kernel code (modifications) on the SHARE Waterloo tape as code in the distributed kernel.

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Pre ARPAnet email?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Pre ARPAnet email?
Newsgroups: alt.folklore.computers
Date: Mon, 30 Apr 2001 23:58:23 GMT
echomko@polaris.umuc.edu (Eric Chomko) writes:
Yes, they did somehow. So the standardization of that internetworking became TCP/IP, which back between 1969 and 1978(?) did not exist under the name TCP/IP.

Your point is well taken, however, in that the OSI Reference model is more of a design construct than an implementation construct. But we all know that sometimes you have to implement it first and then design it later! :)


there are RFCs for TCP on arpanet/NCP predating the IP work.

The thing that IP brought was the internetworking layer ... along with gateways ... something that really wasn't in OSI either ... OSI was a much more traditional hierarchical, homogeneous protocol stack ... rather than allowing for a lot of heterogeneous networking.

the great cut-over from arpanet/NCP to IP on 1/1/83 was one of the enabling things allowing the big growth spurt in the size of the internet ... so that by sometime in possibly '85 the "internet" was finally larger than the internal network (which effectively had heterogeneous gateway support in its nodes from just about the beginning).

random refs:
https://www.garlic.com/~lynn/99.html#33 why is there an "@" key?
https://www.garlic.com/~lynn/99.html#34 why is there an "@" key?
https://www.garlic.com/~lynn/99.html#37b Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#39 Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#44 Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#112 OS/360 names and error codes (was: Humorous and/or Interesting Opcodes)
https://www.garlic.com/~lynn/99.html#114 What is the use of OSI Reference Model?
https://www.garlic.com/~lynn/99.html#115 What is the use of OSI Reference Model?
https://www.garlic.com/~lynn/99.html#212 GEOPLEX
https://www.garlic.com/~lynn/2000.html#51 APPC vs TCP/IP
https://www.garlic.com/~lynn/2000b.html#0 "Mainframe" Usage
https://www.garlic.com/~lynn/2000b.html#1 "Mainframe" Usage
https://www.garlic.com/~lynn/2000b.html#4 "Mainframe" Usage
https://www.garlic.com/~lynn/2000b.html#8 "Mainframe" Usage
https://www.garlic.com/~lynn/2000b.html#10 "Mainframe" Usage
https://www.garlic.com/~lynn/2000b.html#59 7 layers to a program
https://www.garlic.com/~lynn/2000b.html#67 oddly portable machines
https://www.garlic.com/~lynn/2000b.html#79 "Database" term ok for plain files?
https://www.garlic.com/~lynn/2000d.html#63 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#67 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000d.html#70 When the Internet went private
https://www.garlic.com/~lynn/2000d.html#72 When the Internet went private
https://www.garlic.com/~lynn/2000e.html#15 internet preceeds Gore in office.
https://www.garlic.com/~lynn/2000e.html#18 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#19 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000g.html#24 A question for you old guys -- IBM 1130 information
https://www.garlic.com/~lynn/2001b.html#81 36-bit MIME types, PDP-10 FTP
https://www.garlic.com/~lynn/2001b.html#85 what makes a cpu fast
https://www.garlic.com/~lynn/2001c.html#4 what makes a cpu fast
https://www.garlic.com/~lynn/2001e.html#12 Blame it all on Microsoft

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Pre ARPAnet email?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Pre ARPAnet email?
Newsgroups: alt.folklore.computers
Date: Wed, 02 May 2001 00:17:08 GMT
Ric Werme writes:
Europe was supposed to embrace OSI thanks to the greater power of the telephony people, but the infusion of free, working TCP/IP code kicked out OSI's foundation. I still have a memo I wrote at DEC soon after the demise of Alliant where I commented on an industry analyst's remarks about OSI deploying in significantly smaller numbers than previously expected. Like 10% of the old estimate. My claim was that it would be much less.

us gov. w/gosip was dictating adoption of osi also.

misc. gosip refs
https://www.garlic.com/~lynn/99.html#114 What is the use of OSI Reference Model?
https://www.garlic.com/~lynn/99.html#115 What is the use of OSI Reference Model?
https://www.garlic.com/~lynn/2000b.html#0 "Mainframe" Usage
https://www.garlic.com/~lynn/2000b.html#59 7 layers to a program
https://www.garlic.com/~lynn/2000b.html#79 "Database" term ok for plain files?
https://www.garlic.com/~lynn/2000d.html#16 The author Ronda Hauben fights for our freedom.
https://www.garlic.com/~lynn/2000d.html#43 Al Gore: Inventing the Internet...
https://www.garlic.com/~lynn/2000d.html#63 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#70 When the Internet went private

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Pre ARPAnet email?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Pre ARPAnet email?
Newsgroups: alt.folklore.computers
Date: Wed, 02 May 2001 17:40:35 GMT
Lars Poulsen writes:
Since the directory did not exist, it could be theoretically perfect. The fact that email did not actually work was just a temporary setback. The actual mail protocol was good, "and just as soon as directory is completed, it will start working, and be much better than the internet stuff, which requires the user to memorize these arcane codewords such as hostnames and usernames".

i remember being at some DBMS conference circa '90/'91 and listening to a talk about these x.500 networking types busily re-inventing 1960s database technology.

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

SIMTICS

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: SIMTICS
Newsgroups: alt.os.multics
Date: Thu, 03 May 2001 08:41:59 GMT
Barry Margolin writes:
There was a Unix emulator that ran on top of Multics. I don't think there was a port that ran directly on the iron, but I don't think it would have been infeasible. Unix was ported to IBM mainframes, and I imagine the level of difficulty would be similar (mostly writing all the device drivers).

AT&T had a port of Unix on top of TSS/370 that I believe had fairly wide deployment inside AT&T (i.e. the tss/370 supervisor/kernel provided all the device drivers and a lot of page-mapped file stuff ... along with misc. services).

There was a port to 360 at Princeton. That work was possibly picked up by Amdahl (corporation, ibm mainframe clones) and evolved into UTS with native 370 device drivers. It saw deployment most frequently in VM virtual machines (analogous to the previous posting about linux)

random refs:
http://www.albion.com/security/intro-3.html
http://www.Amdahl.com:80/cgi-bin/press-index/20000516-001.htm
http://www-sor.inria.fr/mirrors/usenix98/brochure/FREENIXprogram2.html
https://web.archive.org/web/20020304232915/http://www-sor.inria.fr/mirrors/usenix98/brochure/FREENIXprogram2.html
http://www.byte.com/art/9410/sec8/art3.htm

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Pre ARPAnet email?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Pre ARPAnet email?
Newsgroups: alt.folklore.computers
Date: Thu, 03 May 2001 18:13:48 GMT
Anne & Lynn Wheeler writes:
there are RFCs for TCP on arpanet/NCP predating the IP work.

as an aside, last week 2822 & 2821 came out obsoleting 822 & 821 (presumably took some planning to get 2822 & 2821 reserved for that purpose).

ref:
https://www.garlic.com/~lynn/rfcidx9.htm#2822
https://www.garlic.com/~lynn/rfcidx9.htm#2821
https://www.garlic.com/~lynn/rfcietff.htm

there was use of cp/67 (single machine) in the late '60s to transfer information files between users.

sometime in the early to mid-70s, i remember being on a business trip in europe and going thru some gyrations to be able to read my email back in the states.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

High Level Language Systems was Re: computer books/authors (Re: FA:

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: High Level Language Systems was Re: computer books/authors (Re: FA:
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 03 May 2001 20:55:07 GMT
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Yes, it did. If I remember correctly, BXH and BXLE were NOT part of the original instruction set, though I cannot remember if they came in with the 370 extensions. They were also very slow on some of the models, and it could be quite a lot faster not to use them. This then changed, of course.

IBM system/360 reference data greencard (gx20-1703)

branch on index high      BXH       RS         86       R1,R3,D2(B2)
branch on index low
         or equal         BXLE      RS         87       R1,R3,D2(B2)

I don't remember them being that slow (at least on 65/67).

Translate and Translate & Test instructions were slow ... it was relatively easy to show loops (with bxh/bxle) as being faster than TR & TRT.
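
As a rough illustration of what the BXLE instruction does (not IBM documentation — a simplified Python model of the semantics, ignoring the signed 32-bit comparison the real hardware performs):

```python
def bxle(regs, r1, r3):
    # Simplified model of S/360 BXLE: add the increment in R3 to R1.
    # The comparand is R3 itself if R3 is odd, otherwise the odd
    # register R3+1 of the even-odd pair.  "Branch" (return True) if
    # the new R1 is low or equal to the comparand.
    comparand = regs[r3] if r3 % 2 else regs[r3 + 1]
    regs[r1] = (regs[r1] + regs[r3]) & 0xFFFFFFFF
    return regs[r1] <= comparand

# Typical use: step an index over 4-byte elements at offsets 0..96.
regs = [0] * 16
regs[1] = 0    # R1: index
regs[2] = 4    # R2: increment (even register of the 2/3 pair)
regs[3] = 96   # R3: limit (odd register, the comparand)
iterations = 0
while True:
    iterations += 1        # loop body would process element at regs[1]
    if not bxle(regs, 1, 2):
        break
print(iterations)  # 25 (offsets 0, 4, ..., 96)
```

A single BXLE does the add, the compare, and the branch that would otherwise take an LA/AR plus a compare plus a conditional branch.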

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

High Level Language Systems was Re: computer books/authors (Re: FA:

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: High Level Language Systems was Re: computer books/authors (Re: FA:
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 03 May 2001 23:34:53 GMT
johnl@iecc.com (John R Levine) writes:
Hmmn, let me check my 360/67 Functional Characteristics (which you can still order from IBM's web site, by the way.) On a Model 67-1, a BCT took 1.15us, while BXH or BXLE took 1.6us for no branch, 1.4us for branch taken. Considering that you'd typically need a separate add or subtract with BCT to update the control variable, which would take another 0.65us if the addend were in a register or 1.4us if not, they don't look slow to me.

i couldn't get a listing for the 360/67 ... but they did list the s360 model 30 as still available

they also have

s/360 s/370 s/390 i/o interface channel to control unit OEMI (ga22-6974-10) and it's available in softcopy (aka other equipment manufacturer interface or pcm ... plug compatible manufacturer)
http://www.elink.ibmlink.ibm.com/public/applications/publications/cgibin/pbi.cgi?SSN=01ECW0004681095661
This publication provides a functional description of the interface lines between Sys/360, Sys/370, Sys/390 channels and control units designed by any manufacturer to operate with this I/O interface, said to be the "parallel I/O" interface. It does not cover the interface between the control unit and the I/O device, nor does it cover the Sys/390 ESCON I/O interface. This publication is intended for designers of programs and equipment associated with the parallel-I/O interface and for service personnel maintaining that equipment, but anyone concerned with the functional details of this interface will find it useful.

we had it a little harder when we did it the first time ... random ref

https://www.garlic.com/~lynn/submain.html#360pcm

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Pre ARPAnet email?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Pre ARPAnet email?
Newsgroups: alt.folklore.computers
Date: Fri, 04 May 2001 05:38:15 GMT
Ric Werme writes:
Here's that memo, somewhat sanitized, probably less than should be since not all words are mine. I'll compensate by not changing the subject line above. ;-)

and a 4/1 memo on the same subject
From: meese@kremvax.arpa
Newsgroups: comp.protocols.tcp-ip,comp.protocols.iso
Message-ID: <880401@kremvax.arpa>
Date: 1 Apr 88 00:00:01 GMT
Posted: Fri Apr 1 00:00:01 1988

WASHINGTON -- In a simultaneous announcement that took the computer industry by surprise, OSI leaders today said that they were abandoning their effort to promote the OSI Protocol Suite in favor of the existing US Department of Defense (DoD) ARPANET Protocol Suite.

The official reason cited for the decision was a new report from the Office of Technology Assessment stating that the manpower required to fully implement and test even the few OSI protocols that are now defined would consume the entire output of American university computer science programs for the rest of the century, and that printing and distributing the necessary protocol specifications would consume the entire American and Canadian paper supplies for the next five years.

However, one high-placed source speaking on condition of anonymity said, ''The whole OSI thing was a practical joke one of the guys cooked up a few years ago. Nobody ever expected anybody to take it seriously. I mean, who would believe an organization supposedly dedicated to tearing down barriers to free and open communications between computers when it's run by a former director of the National Security Agency? I guess computer people are a lot more gullible than we thought. We kept dropping hints, making the whole thing more and more ridiculous. We hoped that people would eventually catch on, but it didn't work. Finally, our consciences got to us.''

In related news, officials at the Mitre Corporation in Bedford, Massachusetts reported that one of their employees, as yet publicly unidentified, froze ''as solid as stone'' when he heard the announcement. Medical experts have as yet been unable to communicate with the victim or get him to relax his facial muscles, which are reportedly locked into what was described as an ''enormous grin''.

AP-NR-04-01-88 0001EST


Pre ARPAnet email?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Pre ARPAnet email?
Newsgroups: alt.folklore.computers
Date: Fri, 04 May 2001 05:49:54 GMT
Ric Werme writes:
Here's that memo, somewhat sanitized, probably less than should be since not all words are mine. I'll compensate by not changing the subject line above. ;-)

and a 4/20/89 "trip report" regarding ansi x3s3.3 meeting on submission of HSP (high speed protocol) proposal

A "high speed networking & transport protocol" proposal was submitted by the xtp people at the x3s3.3 meeting. After various discussions it was decided to submit a "study proposal for high speed protocols" to the x3 committee ... the work product of which will be some number of protocol proposals.

Problems with the original protocol proposal were numerous. Many people objected to it violating the OSI reference model (and in fact it is not possible to submit a protocol proposal to X3 that violates the reference model ... although it is possible to approve an ANSI standard that does violate the reference model ... but that takes some fine work ... case in point are the LAN protocols ... especially with FDDI coming up thru level 1 and 2 well into level 3).

The other camps were those arguing that existing protocols could be modified ... and then of course the XTP camp. The existing-protocol-modification camp doesn't adequately take into account that hardware/technology (x3s3.3 is responsible for levels 3 & 4) is eating them from below (and a high-speed protocol standard will have to face that reality).

The current plan is to attempt having the work group responsible for the high-speed protocol study to co-schedule the meetings with the XTP TAB meetings.

Also during the meeting, there was mention of a recent paper from Berkeley describing some sort of enhanced performance TCP/IP that gets the pathlength down to 200 instructions (modifications to mbufs, timer handling, interrupt handling, etc).


Pre ARPAnet email?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Pre ARPAnet email?
Newsgroups: alt.folklore.computers
Date: Fri, 04 May 2001 06:13:00 GMT
Ric Werme writes:
Here's that memo, somewhat sanitized, probably less than should be since not all words are mine. I'll compensate by not changing the subject line above. ;-)

and a 1990 european work item. basically osi had a telco design point: slow-speed, high-error-rate, point-to-point copper wire. Even IEEE WANs and LANs violated the OSI model.
ESPRIT Project Nr 5341

1. Title of the project

High performance OSI protocols with multimedia support on HSLAN's and B-ISDN

2. Acronym

OSI 95

3. Origin

In the framework of the second call for proposals of ESPRIT II, the proposal for this project has been submitted to the Commission of the European Communities on January 1990, by the following consortium:

BULL SA (France), Coordinating contractor
ALCATEL-BELL TELEPHONE (Belgium)
ALCATEL-AUSTRIA ELIN (Austria)
INRIA (France)
OLIVETTI RESEARCH LIMITED (U.K.)
UNIVERSITE DE LIEGE (Belgium)
UNIVERSITY OF LANCASTER (U.K.)

After evaluation by the experts of the Commission, the proposal was shortlisted for further review in April 1990.

After discussion with the Commission, the project has been accepted for two years and with the following extended consortium:

BULL SA (France), Coordinating contractor
(with INT (Institut National des Telecommunications) as Associate Contractor)
ALCATEL-BELL TELEPHONE (Belgium)
ALCATEL-AUSTRIA ELIN (Austria)
INRIA (France)
INTRACOM (Greece)
OLIVETTI RESEARCH LIMITED (U.K.)
UNIVERSITE DE LIEGE (Belgium)
UNIVERSIDAD POLITECNICA DE MADRID, DIT (Spain)
UNIVERSITY OF LANCASTER (U.K.)

The project started officially on October 29, 1990.

4. Summary

OSI 95 will revisit the OSI Reference Model in many of its aspects from layer 2 up to the application level with only one objective: the design of high performance protocols for the new communication and application environments. This project, which lasts only two years is the first step towards this goal.

Today, the potential bandwidth offered by the new communication environments such as HSLAN's MAN's and soon by the B-ISDN is jeopardized by the existing OSI protocols of layers 3 to 7. On the other hand, new requirements are coming from the application layer which tries to accommodate the evolving computing environment, such as multimedia, ODP or new distributed systems.

The objective of OSI 95 is to integrate high-speed MAN and WAN into the OSI Reference Model and to revisit the OSI protocols of layers 3 to 7, in order to offer adequate high performance services up to the application layer.

A first major step is the design, the formal specification and the validation of a high performance transport - internet protocol, called TPX, based on the standard LLC type 1 service and offering the standard transport connection-mode service. It has been considered important that the underlying and provided services of TPX be the current OSI standards in order to guarantee an easy migration. Moreover, the design of TPX will take into account as many criteria as possible to allow later on an easy implementation on silicon.

The project will, in parallel, actively support the creation of one or several new Work Items on "High Speed Protocols and Services" in various standardization environments (National Associations, ECMA, ISO, ETSI, ...), a prerequisite to the standardization of TPX.

It is however very likely that the current LLC and transport services will be inadequate to cover all the new communication and application environments. Therefore, in preparation of the design of a variant of TPX which will take place after the first two years, the second major step of the project will be the study of the lower and upper protocols in order to propose new LLC, transport and application services.

On the LLC side, ATM-based networks will be analyzed in order to characterize and specify the provided services. This is an essential step towards the integration of these networks into the OSI world without relying on gateways which are very often performance killers.

Above the transport layer, the objective of the project is the study of the evolving computing environment in order to define more adequate upper layer services and protocols, and the new transport services they will require.

Three elements of this environment have been identified and will be studied in detail:

- multimedia : many new applications will require the handling of multimedia objects composed of voice, text, image and video, which require new facilities not currently defined in the OSI standards

- new distributed systems : the ESPRIT projects COMMANDOS and ISA illustrate that trend. The current OSI protocols do not offer them a suitable service. Therefore we will evaluate their needs in order to propose solutions which will be fully compatible with the OSI world

- ODP : The support environment for ODP must provide distribution transparencies which include access, location, concurrency, replication and migration transparency. It is clear that investigation is needed to avoid duplication of functions between the SE-ODP and the communication systems.

With respect to these three elements, the project will focus on high performance issues related to the session and presentation layers, and on the synchronization functions.

It is the intent of the project to specify as much as possible the new protocols and services in one of the ISO languages, Estelle and LOTOS. It is however very likely that the new communication and application environments will require some extensions to these languages. In particular, the need for the introduction of time in LOTOS has been anticipated and some solutions will be studied by the project.

It is envisaged that this two year project will be continued during another three years to achieve the global goal of the project : make OSI an efficient solution for the systems that we will have in 1995.

5. Contacts

Jacques Levasseur, Project Manager      Tel: +33 1 39 02 48 67
BULL SA                                 Fax: +33 1 39 02 48 18
68, route de Versailles                 Telex: 697030 F
BP 3                                    E-mail:
F-78430 Louveciennes                    Jacques.Levasseur@dprdrcg.bull.fr
France


Andre Danthine, Professor               Tel: +32 41 56 26 91
UNIVERSITE DE LIEGE                     Fax: +32 41 56 29 89
Institut d'electricite B28              Telex: 41797 saunlg b
B-4000 Liege                            E-mail: danthine@BLIULG11.bitnet
Belgium
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Can I create my own SSL key?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Can I create my own SSL key?
Newsgroups: comp.security.unix
Date: Fri, 04 May 2001 13:58:48 GMT
vek@pharmnl.ohout.pharmapartners.nl (Villy Kruse) writes:
I, as a client, need to see your server certificate so I can check you are who you claim to be. If you are not, then I can't trust your server for anything.

For a test environment, or a closed intranet server, a homemade certificate is perfectly OK, but not for a public server where the identity of the server is important.


there are a huge number of connections made in the world each day w/o SSL where the clients pretty much trust the server (w/o needing SSL, server certificates, CA certificates, CA policies, etc).

somebody may desire to have an SSL session to address the case of eavesdropping even when they pretty much already trust that they are talking to who they think they are talking to (separating the two SSL issues of 1) privacy and 2) authentication).

furthermore, there has been some amount of discussion that all of the server certificates are so much fabrication (even the highest quality certificates) with regard to whether you really trust that you are talking to who you think you are talking to.

For the most part, the internet depends on the domain name infrastructure to make sure that you are talking to who you think you are talking to. An SSL server certificate tends to have a domain name that the client cross-checks with the domain name that it is using; if they match, supposedly the client is talking to who it thinks it is talking to.

The SSL server certificate then addresses possible integrity problems in the domain name infrastructure.
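
That cross-check — the certificate's domain name against the domain name the client is using — amounts to something like the following. This is a simplified sketch; real clients follow the full RFC 2818 matching rules, which restrict where wildcards may appear:

```python
def cert_name_matches(cert_name: str, target_host: str) -> bool:
    # Simplified version of the SSL client's domain-name cross-check:
    # does the name in the server certificate match the host name the
    # client actually used?  Compares label by label, case-insensitive,
    # allowing a "*" label in the certificate name to match any single
    # label in the host name.
    cert_labels = cert_name.lower().split(".")
    host_labels = target_host.lower().split(".")
    if len(cert_labels) != len(host_labels):
        return False
    for c, h in zip(cert_labels, host_labels):
        if c != "*" and c != h:
            return False
    return True

print(cert_name_matches("www.example.com", "www.example.com"))  # True
print(cert_name_matches("*.example.com", "shop.example.com"))   # True
print(cert_name_matches("*.example.com", "example.com"))        # False
```

If the names match, the client proceeds; everything else in the certificate (company name, address, etc.) is typically never looked at.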

However, who is the authoritative agency that all the SSL server certificate issuing CAs must contact regarding domain name ownership (when an SSL server certificate is being requested)? It is the same domain name infrastructure that supposedly has integrity problems necessitating the use of SSL server certificates.

At the very core, after peeling back all the CA issuing processes, all the cryptography, all the CA practice statements, etc., CAs issuing SSL server certificates have to contact the domain name infrastructure as the authoritative agency with regard to domain name ownership. This is the very same domain name infrastructure that supposedly has the integrity problems that give rise to the necessity for issuing and using SSL server certificates for the purpose of authentication (but, in fact, is what SSL server certificates are ultimately based on).

There is some work in the domain name infrastructure that involves registering a public key at the same time a domain name is registered. This would address a lot of the integrity problems that face CAs when faced with the issue of whether they can actually trust the domain name authoritative agencies with regard to domain name ownership.

The interesting thing is that if public keys were registered with the domain name infrastructure at the same time domain names are registered (improving the integrity of the domain name infrastructure with respect to domain name ownership so that CAs can trust the information), then supposedly the domain name infrastructure could serve up those same keys at the same time they perform the domain name to ip address service. In effect, the solution that allows trusted SSL server certificates to be really depended on would also obsolete the need for the SSL server certificates.

Furthermore, having the domain name infrastructure register and serve up domain name public keys would be a much more efficient SSL implementation; it effectively serves up real-time public keys (w/o the need for even having CRLs and/or other certificate revocation/status protocols) as well as much lower overhead (compared to all the certificate traffic in the current SSL protocol).
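
Conceptually, that combined registration could look like the toy model below (entirely hypothetical names and data; the point is only that one real-time lookup returns both the address and the current public key, so no certificate, CA, or CRL is involved):

```python
# Toy model of a domain name infrastructure that registers a public key
# along with the domain name and serves both back in a single lookup.
registry = {}

def register_domain(name, ip, public_key):
    # Done once, at domain name registration time: the key is bound to
    # the name by the same authoritative agency that owns the name->ip
    # mapping.
    registry[name] = {"ip": ip, "key": public_key}

def resolve(name):
    # The same real-time query that maps name -> ip also returns the
    # current public key: nothing to revoke, no certificate to parse.
    return registry[name]

register_domain("example.com", "10.1.2.3", "pubkey-bytes-here")
print(resolve("example.com")["key"])  # the registered key, in real time
```

Because the key comes back with every resolution, a stale key simply stops being served, which is what makes separate revocation machinery unnecessary in this model.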

random refs:
https://www.garlic.com/~lynn/aadsm2.htm#inetpki A PKI for the Internet (was RE: Scale (and the SRV
https://www.garlic.com/~lynn/aadsm2.htm#integrity Scale (and the SRV record)
https://www.garlic.com/~lynn/aadsm3.htm#kiss4 KISS for PKIX. (Was: RE: ASN.1 vs XML (used to be RE: I-D ACTION :draft-ietf-pkix-scvp- 00.txt))
https://www.garlic.com/~lynn/aadsm5.htm#asrn2 Assurance, e-commerce, and some x9.59 ... fyi
https://www.garlic.com/~lynn/aadsm5.htm#asrn3 Assurance, e-commerce, and some x9.59 ... fyi
https://www.garlic.com/~lynn/aepay3.htm#sslset2 "SSL & SET Query" ... from usenet group
https://www.garlic.com/~lynn/aepay4.htm#comcert Merchant Comfort Certificates
https://www.garlic.com/~lynn/aepay4.htm#comcert3 Merchant Comfort Certificates
https://www.garlic.com/~lynn/aepay4.htm#comcert5 Merchant Comfort Certificates
https://www.garlic.com/~lynn/aepay4.htm#comcert9 Merchant Comfort Certificates
https://www.garlic.com/~lynn/aepay4.htm#comcert10 Merchant Comfort Certificates
https://www.garlic.com/~lynn/aepay4.htm#comcert11 Merchant Comfort Certificates
https://www.garlic.com/~lynn/aepay4.htm#comcert12 Merchant Comfort Certificates
https://www.garlic.com/~lynn/aepay4.htm#comcert13 Merchant Comfort Certificates
https://www.garlic.com/~lynn/aepay4.htm#comcert14 Merchant Comfort Certificates
https://www.garlic.com/~lynn/aepay4.htm#comcert16 Merchant Comfort Certificates
https://www.garlic.com/~lynn/aepay4.htm#dnsinteg2 Domain Name integrity problem
https://www.garlic.com/~lynn/aepay4.htm#3dssl VISA 3D-SSL
https://www.garlic.com/~lynn/aepay6.htm#gaopki4 GAO: Government faces obstacles in PKI security adoption
https://www.garlic.com/~lynn/2000b.html#40 general questions on SSL certificates
https://www.garlic.com/~lynn/2000b.html#93 Question regarding authentication implementation
https://www.garlic.com/~lynn/2000e.html#40 Why trust root CAs ?
https://www.garlic.com/~lynn/2000e.html#47 Why trust root CAs ?
https://www.garlic.com/~lynn/2000e.html#50 Why trust root CAs ?
https://www.garlic.com/~lynn/2000e.html#51 Why trust root CAs ?
https://www.garlic.com/~lynn/2000g.html#25 SSL as model of security
https://www.garlic.com/~lynn/2001c.html#8 Server authentication
https://www.garlic.com/~lynn/2001c.html#9 Server authentication
https://www.garlic.com/~lynn/2001c.html#62 SSL weaknesses

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Can I create my own SSL key?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Can I create my own SSL key?
Newsgroups: comp.security.unix
Date: Fri, 04 May 2001 17:18:02 GMT
alun@texis.com (Alun Jones) writes:
And we've recently discovered that one of the biggest "trustworthy third parties" doesn't even bother to check identities on a company as important as Microsoft. How do you know which third party is trustworthy?

note that in the case of SSL domain name certificates ... the company details in the certificate are almost never verified by anybody (that would require them to actually visually inspect the SSL domain name server certificate).

as a result, even if all the TTPs (trusted third parties) always applied the absolute highest standards with respect to corporate information and never made a mistake ... it is relatively pointless for SSL domain name server certificates, since nobody looks at that information.

the information that is checked in the SSL protocol is the domain name (i.e. does the domain name specified in the certificate match the one that the client is using ... so the client is probably actually talking to the server that it thinks it is talking to).

However, all CAs have to check with the domain name infrastructure as to the true owner of a domain name prior to issuing an SSL domain name server certificate. They have no other choice; the domain name infrastructure is the authoritative agency with respect to domain name ownership.

TTP CAs can do all the checking they want to with regard to corporate names and it doesn't really mean anything (in the case of SSL domain name server certificates) since effectively nobody looks at that information anyway. The information that is verified in the SSL protocol is the domain name ... and for that all TTP CAs have to rely on the domain name infrastructure as the authoritative agency as to the validity of domain name ownership.

However, it is integrity issues with that exact same domain name infrastructure that supposedly are the justification for having SSL certificates in the first place (at least as far as authentication issues are concerned).

Now, the interesting thing is that most of the fixes for the domain name infrastructure that would resolve integrity issues from the standpoint of whether a TTP CA can trust the domain name ownership authoritative agency (i.e. the domain name infrastructure) also pretty much eliminate the justification for having SSL domain name server certificates (in so far as authentication issues are concerned).

random refs:
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Pre ARPAnet email?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Pre ARPAnet email?
Newsgroups: alt.folklore.computers
Date: Fri, 04 May 2001 21:22:17 GMT
Bernie Cosell writes:
Obsoleting? have 2822 and 2821 become standards already? I thought they were just at the beginning of the standards track....

it is one of glitches in the IETF standards process.

An RFC is published and it lists the RFCs that it "obsoletes" (check the actual RFC and/or the rfc editor's announcement).

While the standards documented by 821 & 822 are not obsoleted (possibly not until the protocols documented by 2822 & 2821 become standards) ... the RFCs 821 & 822 that document the standards are obsoleted by RFCs 2822 & 2821.

I guess the "bug" in the process gets resolved by differentiating between the RFC that documents the standard ... and the standard itself.

It is one of the things that I started catching early on trying to instantiate the whole process with knowledge patterns. Postel used to include the obsolete list as section 6.10 in STD1s. The more recent generations of STD1 have eliminated that section but I still maintain the information at:
https://www.garlic.com/~lynn/rfcietff.htm

current list of Obsoleted RFCs that are standards:
https://www.garlic.com/~lynn/rfcietf.htm#obsol

verbiage taken from RFC2000, STD1.
6.10. Obsolete Protocols

Some of the protocols listed in this memo are described in RFCs that are obsoleted by newer RFCs. "Obsolete" or "obsoleted" is not an official state or status of protocols. This subsection is for information only.

While it may seem to be obviously wrong to have an obsoleted RFC in the list of standards, there may be cases when an older standard is in the process of being replaced. This process may take a year or two.

Many obsoleted protocols are of little interest and are dropped from this memo altogether. Some obsoleted protocols have received enough recognition that it seems appropriate to list them under their current status and with the following reference to their current replacement.

Thanks to Lynn Wheeler for compiling the information in this subsection.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM Reference cards.

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM Reference cards.
Newsgroups: bit.listserv.ibm-main
Date: Fri, 04 May 2001 21:30:39 GMT
Rob.Schramm@53.COM (Schramm, Rob) writes:
Ed,

I think dayglow orange. That way it would be easier to find when I lose it under the mass of paper on my desk. When it comes to manuals ... I lament the loss of paper manuals. Sure the CD's are easier to search... but they are tougher to flip pages. :)

Future: Son, give up the manual.. we have you surrounded. Never! he cried and off in the distance a tree sighed relief.


the VMSHARE Users Guide reference card (dated January 1980) comes very close to dayglow orange.

the next closest is The Virtual Machine Products Commands reference card (SX20-4401-0, sept. 1980) which is closer to a dayglow pink.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Pre ARPAnet email?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Pre ARPAnet email?
Newsgroups: alt.folklore.computers
Date: Fri, 04 May 2001 21:46:57 GMT
Bernie Cosell writes:
Obsoleting? have 2822 and 2821 become standards already? I thought they were just at the beginning of the standards track....

it is actually slightly more complicated .... i typically drive my information off the rfc editor announcements (along with the most recent STD1) and only periodically cross-check with the actual RFCs.

Turns out the rfc editor announcement for RFC2821 listed it as obsoleting 821 and 974 (and updating 1123).

821 is STD 10 (along with 1869 and 1870)
974 is STD 14
1123 is STD 3 (along with 1122)

cross checking RFC2821 just now, it lists as obsoleting 821, 974, and 1869 (in addition to updating 1123). The RFC editor announcement had missed the reference to obsoleting 1869.
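
the bookkeeping above (an RFC obsoletes/updates other RFCs, while a STD is composed of one or more RFCs) is easy to model. A hedged Python sketch, using only the relationships named in this post:

```python
# Metadata sketch: the STD number is the stable handle for a standard;
# the RFCs that document it can be obsoleted and replaced over time.
rfc_meta = {
    2821: {"obsoletes": [821, 974, 1869], "updates": [1123]},
    2822: {"obsoletes": [822]},
}
std_docs = {
    10: [821, 1869, 1870],
    14: [974],
    3:  [1122, 1123],
}

def stds_touched_by(rfc):
    # Which full standards have a constituent RFC obsoleted or updated
    # by the given RFC?
    meta = rfc_meta.get(rfc, {})
    affected = set(meta.get("obsoletes", [])) | set(meta.get("updates", []))
    return sorted(std for std, docs in std_docs.items()
                  if affected & set(docs))

print(stds_touched_by(2821))  # [3, 10, 14]
```

So a single new RFC can touch several standards at once, which is why the RFC-level obsoletes list and the STD-level status have to be tracked separately.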

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

High Level Language Systems was Re: computer books/authors (Re: FA:

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: High Level Language Systems was Re: computer books/authors (Re: FA:
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 04 May 2001 23:00:07 GMT
Toon Moene writes:
Sigh - and I thought Ed Post ("REAL PROGRAMMERS DON'T WRITE PASCAL") dreamed that one up:

... from someplace in the archives

Real Programmers Don't Eat Quiche

Real Programmers don't eat quiche. They like Twinkies, Coke and palate-scorching Szechwan food.

Real Programmers don't write application programs, they program right down on the bare metal. Application programming is for dullards who can't do systems programming.

Real Programmers don't write specs. Users should be grateful for whatever they get; they are lucky to get any programs at all.

Real Programmers don't comment their code. If it was hard to write, it should be hard to understand and harder to modify.

Real Programmers don't document. Documentation is for simpletons who can't read listings or the object code from the dump.

Real Programmers don't draw flowcharts. Flowcharts are (after all) the illiterate's form of documentation. Cavemen drew flowcharts; look how much good it did for them.

Real Programmers don't read manuals. Reliance on a reference is the hallmark of the novice and the coward.

Real Programmers don't write in COBOL. COBOL is for gum-chewing dimwits who maintain ancient payroll programs.

Real Programmers don't write in FORTRAN. FORTRAN is for wimp engineers who wear white socks. They get excited over finite state analysis and nuclear reactor simulation.

Real Programmers don't write in PL/I. PL/I is for insecure anal retentives who can't choose between COBOL and FORTRAN.

Real Programmers don't write in BASIC. Actually, no programmers write in BASIC after reaching puberty.

Real Programmers don't write in APL, unless the whole program can be written on one line.

Real Programmers don't write in LISP, because only faggot programs contain more parentheses than actual code.

Real Programmers don't write in PASCAL, BLISS, ADA, or any of those other sissy computer science languages. Strong typing is a crutch for people with weak memories.

Real Programmers' programs never work right the first time. But if you throw them on the machine they can be patched into working order in "only a few" 30-hour debugging sessions.

Real Programmers never work 9 to 5. If any Real Programmers are around at 9 AM, it's because they were up all night.

Real Programmers don't play tennis, or any other sport which requires a change of clothes. Mountain climbing is OK, and Real Programmers wear climbing boots to work in case a mountain should suddenly spring up in the middle of the machine room.

Real Programmers disdain structured programming. Structured programming is for compulsive neurotics who were prematurely toilet-trained. They wear neckties and carefully line up sharp pencils on an otherwise clear desk.

Real Programmers don't like the Team Programming concept. Unless, of course, they are the Chief Programmer.

Real Programmers have no use for managers. Managers are a necessary evil. They exist only to deal with personnel bozos, bean counters, senior planners, and other congenital defectives.

Real Programmers scorn floating point arithmetic. The decimal point was invented for pansy bedwetters who are unable to "think big".

Real Programmers don't drive clapped-out Mavericks. They prefer BMWs, Lincolns, or pickup trucks with floor shifts. Fast motorcycles are highly regarded.


=========================
Real Software Engineers Don't Read Dumps

Real software engineers don't read dumps. They never generate them, and on the rare occasions that they come across them, they are vaguely amused.

Real software engineers don't comment their code. The identifiers are so mnemonic they don't have to.

Real software engineers don't write applications programs, they implement algorithms. If someone has an application that the algorithm might help with, that's nice. Don't ask them to write the user interface, though.

Real software engineers eat quiche.

If it doesn't have recursive function calls, real software engineers don't program in it.

Real software engineers don't program in assembler. They become queasy at the very thought.

Real software engineers don't debug programs, they verify correctness. This process doesn't necessarily involve executing anything on a computer, except perhaps a Correctness Verification Aid package.

Real software engineers like C's structured constructs, but they are suspicious of it because they have heard that it lets you get "close to the machine."

Real software engineers play tennis. In general, they don't like any sport that involves getting hot and sweaty and gross when out of range of a shower. (Thus mountain climbing is Right Out.) They will occasionally wear their tennis togs to work, but only on very sunny days.

Real software engineers admire PASCAL for its discipline and Spartan purity, but they find it difficult to actually program in. They don't tell this to their friends, because they are afraid it means that they are somehow Unworthy.

Real software engineers work from 9 to 5, because that is the way the job is described in the formal spec. Working late would feel like using an undocumented external procedure.

Real software engineers write in languages that have not actually been implemented for any machine, and for which only the formal spec (in BNF) is available. This keeps them from having to take any machine dependencies into account. Machine dependencies make real software engineers very uneasy.

Real software engineers don't write in ADA, because the standards bodies have not quite decided on a formal spec yet.

Real software engineers like writing their own compilers, preferably in PROLOG (they also like writing them in unimplemented languages, but it turns out to be difficult to actually RUN these).

Real software engineers regret the existence of COBOL, FORTRAN and BASIC. PL/1 is getting there, but it is not nearly disciplined enough; far too much built in function.

Real software engineers aren't too happy about the existence of users, either. Users always seem to have the wrong idea about what the implementation and verification of algorithms is all about.

Real software engineers don't like the idea of some inexplicable and greasy hardware several aisles away that may stop working at any moment. They have a great distrust of hardware people, and wish that systems could be virtual at ALL levels. They would like personal computers (you know no one's going to trip over something and kill your DFA in mid-transit), except that they need 8 megabytes to run their Correctness Verification Aid packages.

Real software engineers think better while playing WFF 'N' PROOF.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Blame it all on Microsoft

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Blame it all on Microsoft
Newsgroups: comp.os.linux.advocacy,comp.theory,comp.arch,comp.object,alt.folklore.computers
Date: Sat, 05 May 2001 02:41:56 GMT
"Glenn C. Everhart" writes:
DECnet has some interesting features: needs no ARP, has link level access passwords. The file copy has an end to end CRC (which saved the company I worked for a few times; memory problems in routers were shown up by it). It supports a distributed file system and has done so since the late 70s at least. The major problem with Phase IV was that its addresses were too short; you had only 16 bits for node address (broken into 64 areas of 1024 addresses each). The USG insistence that it would deep-six TCP/IP and force a move to OSI caused DECnet phase V to be started maybe 1985 (or even earlier) as an OSI implementation but it wasn't finished for something like 10 years as it became clear to everyone that tcp/ip was not going away anytime soon.

there has been some OSI & gosip discussion in the "Pre ARPAnet email" thread in alt.folklore.computers

misc random refs:
https://www.garlic.com/~lynn/2001e.html#17
https://www.garlic.com/~lynn/2001e.html#18
https://www.garlic.com/~lynn/2001e.html#23
https://www.garlic.com/~lynn/2001e.html#24
https://www.garlic.com/~lynn/2001e.html#25
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

and blast from the past
Subject: "DEC details 18-month Phase V plan" Source: Network World, 9/17/90, pg 1, Tom Smith

Bud Haber, Hughes Aircraft manager of advanced network integration
o "extreme disappointment" with the 18-month rollout of DECnet Phase V
o 'I don't know how many more delays we have to go through'
o 'The vendor community really <needs to> get serious about doing what needs to be done in opening up their systems'
o 'I have formally requested of DEC over the past 6 to 8 months a rollout schedule, and they have stonewalled my request'

Hal Folts, Omnicom president
o 'All this stuff takes time'
o 'DEC has a comprehensive plan that I'm quite impressed with'

Audrey Augun, DEC open networks systems manager
o DEC X.25 Access for Ultrix V2.0 - 'a significant step toward Phase V'
o most Phase V products will be available in the 1st 9 months
  - X.500 Directory and Virtual Terminal support will be later
o Phase V will be done in logical segments
  'We feel it behooves us to make absolutely certain that the transition for those people is smooth before we announce the products'

Howard Niden, Price Waterhouse senior manager
o DEC failed to deliver key components this month as promised
o 'As late as last October...Digital was saying Phase V was on target for September'

Steve Wendler, Gartner Group VP
o Phase V was a victim of internal problems
  'I think the project has been mismanaged'

David Judson, Wright Patterson AFB, integration technology div. director
o the government has already mandated the Gov't OSI Profile
o 'The GOSIP train left in August, and that was last month'
  'I'm trying to be compliant'


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Can I create my own SSL key?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Can I create my own SSL key?
Newsgroups: comp.security.unix
Date: Sun, 06 May 2001 16:26:22 GMT
bpalmer writes:
The point to having a CA is so you can't forge a SSL signature. SSL is about privacy AND authentication. Using OpenSSL, you can get the privacy part just fine. If that's all you're looking for, then you're fine. If you want your customers to be sure it's you, you need a CA, a 'trusted' third party to sign the cert. That said, the only secure certs would then be the one that pops up for the user to look at and confirm. If you don't need to check and make sure that 1) a CA YOU as the client trusts signed the key and 2) the key points to the company you think it should, it's pointless. I would much rather have the SSL dialog box popup so I can check the cert by hand...

but who does the CA contact to validate that the entity requesting an SSL domain name server certificate really owns the domain name (i.e., who is the authoritative agency for domain names that all TTPs have to rely on as to whether somebody really owns the domain name they are requesting a certificate for)?

It is the domain name infrastructure that supposedly has the integrity problems that create the need for SSL certificates in the first place.

random refs:
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

The protocol in SSL doesn't check for whether something points to a company name that you think it should (that is something a person has to do by manually examining the SSL certificate ... something that can be assumed to effectively never happen, or happen so seldom that it is nothing to worry about).

SSL checks the domain name in the certificate against the domain name that the client is using.

The supposed reason for doing this is that the domain name infrastructure has integrity problems and the client could be mis-pointed to some other server.

However, the TTP CAs also have to check with this very same domain name infrastructure as the authoritative agency for domain name ownership for issuing SSL domain name server certificates.
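The check SSL actually performs (matching the certificate's domain name against the domain the client is contacting) can be sketched as a simple per-label comparison with wildcard support. This is a simplified illustration of the idea, not a complete implementation; real clients handle additional cases (internationalized names, IP addresses, subject alternative names).

```python
# Minimal sketch of the SSL domain-name check: compare the hostname being
# contacted against the name in the server certificate, allowing a wildcard
# label ("*") to match any single label. Illustration only.

def hostname_matches(cert_name: str, hostname: str) -> bool:
    cert_labels = cert_name.lower().split(".")
    host_labels = hostname.lower().split(".")
    if len(cert_labels) != len(host_labels):
        return False  # a wildcard matches one label, never several
    for c, h in zip(cert_labels, host_labels):
        if c != "*" and c != h:
            return False
    return True

print(hostname_matches("www.example.com", "www.example.com"))  # True
print(hostname_matches("*.example.com", "shop.example.com"))   # True
print(hostname_matches("*.example.com", "www.evil.com"))       # False
```

Note that nothing in this check involves the company name in the certificate, which is the point made above.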

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Blame it all on Microsoft

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Blame it all on Microsoft
Newsgroups: comp.os.linux.advocacy,comp.theory,comp.arch,comp.object,alt.folklore.computers
Date: Sun, 06 May 2001 19:23:16 GMT
Paul Repacholi writes:
Another good source is Radia Perlman's book on networking. I suspect it has been, ah, 'well edited' from what she originally said about some of the stuff. Also, PhIV was almost done with much larger addresses.

some misc. other stuff. in this era, there was significant institutional, governmental and public mind-set that things would shortly all be OSI implementations.

One might be tempted to lump the OSI, X.500, and X.509 efforts all together

misc. additional stuff from the era


Subject: Bill Hancock on "The Controversial DECnet Phase V Route"

A very interesting article was found in the June 25th issue of
"digital review".  Bill Hancock, as most DECUS attendees would
agree, is very animated and sometimes controversial when it comes
to networking, DECnet and the surrounding issues.

A quote from the article:

"Running the DECnet Phase V routing algorithm on anything other
than a dedicated network routing system is like asking for a
 voluntary lobotomy".

Source:   Network World, 8/6/90, pg 1, Jim Brown
Occasion: Interview with Robert McCauley, DEC's OSI migration manager

DECnet V is OSI-based and migration problems are expected
o DEC has setup a migration team to assist customers
- headed by Robert McCauley, OSI migration manager
o DEC's Easynet:  a 54,000 node internal network
- three subnets at engineering sites were selected as testbeds
    . Reading England, where routers are developed
. Littleton MA, communications and networking
    . Nashua NH, VMS engineering
. collectively, the subnets were known as TransitionNet, or T-Net
- the subnets connections to Easynet were maintained
. they talked to each other as well as the Phase IV Easynet
  - over the next two quarters, the main part of Easynet will go Phase V
o no production is being done on T-net as yet
  - probably at least 6 months away from running production applications
- DECnet/Ultrix, DEC network management, and routers are on T-net, but
VMS Phase V isn't available outside the engineering development sites

What should users be doing?
o thorough planning involving all organizations involved in the network
  - "it's imperative that people make a good business case for why they
are migrating to Phase V...because there is some cost to it"  McCauley
o the name service has to be carefully looked at
- its much more critical in Phase V than Phase IV
  - how many to use, what platforms, access control
- a hierarchic approach is planned on Easynet
    . at least two servers at any reasonably sized site (about 200 sites)
. 10 superservers at the second level, maybe as many as 20 later
. they will keep master copies of names and node addresses
. Phase V auto-configuration and auto-registration not planned yet
o the name server function is important due to name translation
- simple names are mapped into physical addresses
  - its network-wide, not the VMS commands used today
o customers may need to utilize additional hardware for the function
- as DEC has done
- "It is also not clear, and I guess this is something that has to be
     spelled out to each customer - what the incremental cost of the
hardware would be in a particular case"
  - some capacity for the name server is needed, but it may be offsetting
. in DECs case the site-servers were seen as needed function anyway
. the superservers are delta due to Phase V
- how much extra capacity is needed will be evaluated as DEC migrates
o capacity planning is needed:  Phase V has larger addresses & packets
- "in the worst case, it could be a 20% degradation in circuit
     performance"
- "We do expect some degradation in throughput, at least in the first
version of Phase V routing software"

"Some customers need the multiprotocol and multi-T1 link capabilities"
o DEC Router 2000 cannot drive more than one T1
  - "At this point I don't think we have anything that addresses that"
- "There is a lot going on <in DEC> to come up with <competitive> routers"
- "we have joint development plans with StrataCom, and have made some
of our protocols available to companies like cisco and Wellfleet"

Source:  Communications Week, 9/17/90, pg 1, P. Korzeniowski & A. Knowles

Digital won't "deliver as expected" on its 3 year-old promise on OSI
o DEC had said it would have full OSI support by this year
- DEC announced its DECnet evolutionary program September 1987
    . for DECnet Phase V, VMS, Ultrix
- DECnet Phase V will be another 18 months
    'Our customers want us to proceed more slowly'
. Audrey Augen, DEC open network software marketing manager
- industry analysts say technical problems with VMS & Phase V
. performance and functionality both unsatisfactory
  - Ultrix, DECs Unix, was demo'ed with Phase V at a recent exhibition
o DEC has concentrated heavily on TCP/IP
  - customers don't seem alarmed
o Stanley Rose, Bankers Trust VP of technical architecture
'Given the choice of an expedient product today and a reliable product
next year, I will opt for the reliable product'
   - Bankers Trust has a DECnet Phase IV network with 1,000 nodes
- they wanted to migrate directly to OSI with Phase V
     . avoiding TCP/IP
'It looks like we'll have to install TCP/IP and we want DEC to provide it'
'I believe DEC's embrace of TCP/IP is one of the reasons for the delay'

A network management tool was announced, and token-ring support wasn't
o DEC, and the Systems Center, together announced network management tool
  - a joint marketing agreement to link Net/Master and DECmcc
. a new network management tool to challenge IBMs NetView
o token-ring support won't be available as expected either
- the announcement was canceled due to lack of product development funds
    . Steve Wendler, The Gartner Group analyst

random refs:
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
https://www.garlic.com/~lynn/subpubkey.html#sslcerts
https://www.garlic.com/~lynn/subpubkey.html#radius

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Can I create my own SSL key?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Can I create my own SSL key?
Newsgroups: comp.security.unix
Date: Wed, 09 May 2001 05:17:44 GMT
JPM@lrz.fh-muenchen.de (Juergen P. Meier) writes:
Some german banks have their certificates signed by their own CA's, coupled with clear and simple instructions how to import this CA's cert into user browsers/homebanking-apps and a clear instruction/warning to be alert when their client-app suddenly mourns about wrong cert's, it's quite a good way. One particular bank does distribute the cert + the banks public key this way: The Client has to show up at the bank, and identify himself with a passport. He then receives a smartcard with the cert and the key and a card-reader for his pc. This way the client can trust the bank to be the bank (its pretty hard for an attacker to take over a bank's settlement or build up a fake agency) and the bank can trust the identity of the client (if the clerks have clear instructions to verify the passport/ID of the customer).

The SSL server domain name certificate supposedly exists because a client, after asking the domain name infrastructure for the IP-address and contacting the web-server ... doesn't trust the domain name infrastructure and needs additional verification that it is really talking to the web site it thinks it is talking to.

However, what does the SSL server domain name certificate really represent?

1) somebody applied to a CA stating they wanted a certificate specified with their domain name,

2) the CA contacts the domain name infrastructure to verify that the entity requesting the SSL domain name server certificate is really entitled to it.

3) but then, supposedly the reason that the client needs the assurance of the ssl domain name server certificate is because the domain name infrastructure isn't trusted.

Some european banks are issuing client relying-party-only certificates ... i.e. with their own self-signed CA.

The client relying-party-only certificates basically

1) contain only an account number

2) are not "identity" certificates because of the serious privacy issues associated with certificates containing names and other personal information.

3) are not enabled for acceptance by other relying-parties because of the desire not to incur the liability difficulties.

The process to create these relying-party-only certificates

1) client performs the public-key registration process for their account with the banks Registration Authority.

2) the banks Registration Authority does the standard stuff and passes off to the banks Certification Authority

3) The banks Certification Authority performs the standard certification process (validating the association of the client to the account number)

4) The banks Certification Authority generates a relying-party-only certificate for the client containing the client's account number and public key

5) The banks Certification Authority saves a copy of the relying-party-only certificate in the client's account record

6) The banks Certification Authority returns a copy of the client's relying-party-only certificate to the client.

7) At some point in the future, the client generates an electronic message or transaction that contains the account number as part of the standard information and digitally signs it with their private key.

8) The client appends the digital signature to the electronic message and then appends the copy of the relying-party-only certificate to the combination of the electronic message and digital signature

9) the client takes the combined message (original electronic message, digital signature, and copy of the relying-party-only certificate) and transmits it to the bank

10) the bank extracts the account number from the message and retrieves the account record which also contains the original of the client's relying-party-only certificate.

11) the bank discards the copy of the relying-party-only certificate transmitted to the bank by the client and uses the original of the relying-party-only certificate just read (instead of the superfluous copy transmitted by the client).

12) using the public key in the original of the client's relying-party-only certificates (resident in the client's account record), the bank verifies the client's digital signature ... validating the correctness of the message and authenticating the sender of the message.
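Steps 7-12 above can be sketched in a few lines: the bank keeps the registered public key in the account record and verifies the signature from there, regardless of any certificate the client may have appended. The account id, message format, and textbook-RSA toy keypair below are all invented for illustration; a real system would use a vetted crypto library.

```python
# Toy sketch of the relying-party-only flow: sign with a private key,
# verify against the public key stored in the bank's account record.
# Textbook RSA with tiny primes (n = 61 * 53) -- illustration only.
import hashlib

N, E, D = 3233, 17, 2753  # toy modulus, public exponent, private exponent

def digest(msg: bytes) -> int:
    # Reduce a real hash into the toy modulus range.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

def sign(msg: bytes) -> int:            # step 7: client signs with private key
    return pow(digest(msg), D, N)

def verify(msg: bytes, sig: int, e: int, n: int) -> bool:
    return pow(sig, e, n) == digest(msg) % n

# Steps 1-5: the bank's account record holds the registered public key.
accounts = {"acct-42": {"public_key": (E, N)}}

msg = b"acct-42: pay 100 to acct-77"
transmitted = (msg, sign(msg))          # steps 8-9 (certificate copy omitted)

# Steps 10-12: bank extracts the account number, reads the account record,
# and verifies using the *stored* key, not anything the client sent.
e, n = accounts["acct-42"]["public_key"]
print(verify(msg, transmitted[1], e, n))  # True
```

The certificate plays no role in the verification itself, which is the argument developed in the rest of the post.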

The assertion is that

a) the client, when sending signed messages to the bank, can improve the payload weight and transmission throughput by compressing the copy of their relying-party-only certificate to zero bytes (using a technique called field compression)

b) since it can be shown that the client will always perform field compression on the copy of the client's relying-party-only certificate prior to transmitting it to the relying-party ... the bank's (aka relying-party's) Certification Authority can improve throughput by precompressing the copy of the client's relying-party-only certificate prior to returning the copy to the client. This precompressed copy of the client's relying-party-only certificate is now only zero bytes, which makes it much more efficient than a full-sized certificate.

====================

The other explanation is that it is redundant and superfluous to transmit a copy of a relying-party-only certificate to the relying-party when it is known that 1) the relying-party possesses the original of the relying-party-only certificate and 2) the relying-party must read the record containing the original of the relying-party-only certificate as part of executing the message and/or transaction service requested by the client (remember the relying-party-only certificate contains nothing but the account number because of serious privacy issues).

aka in a relying-party-only certification environment, it is trivially shown that the digitally signed message a client sends to the bank either contains a zero byte compressed certificate (using certificate field compression) or contains no certificate at all because the bank already has the original of the certificate.

as an aside, is there a method of determining the difference between the lack of a transmitted certificate and a transmitted certificate that has been compressed to zero bytes?

==========================

certificate field compression demonstrates that all fields already in the possession of the relying party can be eliminated from the copy of the certificate transmitted to the relying party. Since the relying party is known to hold the original of the client's relying-party-only certificate, the assertion is that the original contains all the same fields that are in the copy. If it can be shown that all fields in the copy of the relying-party-only certificate are in the possession of the relying party (because it possesses the original), then all fields can be compressed out of the copy that the client transmits to the relying party. If all fields can be compressed out of the transmitted copy, the assertion is that the resulting compression yields a zero byte certificate appended to the message transmitted to the relying party.
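The rhetorical aside above can be made concrete in a few lines: a message with a "certificate compressed to zero bytes" appended is byte-for-byte identical to a message with no certificate at all, so no receiver could tell them apart. The message layout and field names here are invented for illustration.

```python
# Sketch: appending a zero-byte "field-compressed" certificate produces
# exactly the same wire bytes as appending no certificate at all.

def append_certificate(message: bytes, signature: bytes, certificate: bytes) -> bytes:
    # Hypothetical wire format: message || signature || certificate.
    return message + signature + certificate

def field_compress(certificate: bytes, fields_known_to_relying_party: bool) -> bytes:
    # If every field is already held by the relying party, all are elided.
    return b"" if fields_known_to_relying_party else certificate

msg, sig, cert = b"transaction", b"signature", b"relying-party-only-cert"

with_compressed = append_certificate(msg, sig, field_compress(cert, True))
with_no_cert    = append_certificate(msg, sig, b"")

print(with_compressed == with_no_cert)  # True
```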

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Can I create my own SSL key?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Can I create my own SSL key?
Newsgroups: comp.security.unix
Date: Wed, 09 May 2001 05:24:55 GMT
Christer Palm writes:
I'm sure about that. But it's not who _you_ can trust that counts. It's about who anyone to which you are trying to prove your identity can trust.

note again that the basic premise justifying SSL domain name server certificates is that the client can't really trust the domain name infrastructure to correctly match the client up with the correct web server.

however, the only thing that any certification authority can do when generating a SSL domain name server certificate is to contact the domain name infrastructure to validate that the entity requesting the domain name certificate is the actual entity owning that domain name.

what prevents everybody in the world from requesting a certificate for amazon.com? basically they can't demonstrate that they are the owner of the amazon.com domain registered with the domain name infrastructure. The CA has no way of proving this one way or another ... only the domain name infrastructure has the record proving who the actual owner is. The only course of action in this scenario is for the CA to contact the domain name infrastructure prior to issuing the SSL domain name certificate.

This is true independent of all other factors that might be associated with the dependability and integrity of any particular certification authority. The TTP certification authorities, at best, can contact the actual authority in possession of proof of the assertion or binding of information that goes into the certificate.

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Can I create my own SSL key?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Can I create my own SSL key?
Newsgroups: comp.security.unix
Date: Wed, 09 May 2001 05:29:32 GMT
Christer Palm writes:
Which is why certificates also carry the real name and location of the company.

but the SSL protocol totally ignores the real name and location of the company. effectively nobody reads and/or saves the ssl domain name server certificates.

it is trivially easy to register totally valid "front companies" that have nothing suspicious associated with them.

assuming that the domain name infrastructure is at risk, it is possible to hijack a domain name ... then apply for that domain using a perfectly valid "front company" ... and get a certificate issued with that domain name.

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

IBM Dress Code, was DEC dress code

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM Dress Code, was DEC dress code
Newsgroups: alt.folklore.computers
Date: Wed, 09 May 2001 16:48:59 GMT
jmfbahciv writes:
I've got an IBM coffee mug which is blue and yellow! Yes, bright yellow background. Must have been a faulty batch.

my brother used to be a regional apple rep ... he had this gimmick of fondling and admiring ibm coffee cups at customer locations ... and saying he just had to have one ... and would the customer just possibly part with the cup in return for one or five apple mugs.

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Can I create my own SSL key?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Can I create my own SSL key?
Newsgroups: comp.security.unix
Date: Wed, 09 May 2001 17:11:56 GMT
Christer Palm writes:
The thing that I guess one should worry the most about is the possibility of someone acquiring a certificate for a common name or company name that is already "owned" by someone else, allowing this person to masquerade as the valid owner (like the Microsoft/Verisign incident). Today, this would be perfectly possible - the impostor "just" has to fool a trusted CA different from the CA the valid owner uses into signing their certificate. Of course, if the impostor would go to the same CA as with whom the valid certificate is already registered, the warning bell would hopefully go off.

there have been press-releases regarding "domain name" hijacking with regard to somebody convincing a domain name infrastructure to change the ip address and company name for a "domain name".

it then becomes an issue of going to a CA and using that company name.

it is very straightforward to set up front companies that totally pass all reasonable checking done by a CA

the original posting way earlier in this thread ... was that the CAs' (& others') proposal for fixing the domain name infrastructure problem was to have somebody send in both their ip address and their public key for registration when they acquire ownership of a domain name. that doesn't need a certificate.

then anytime somebody communicates with the domain name infrastructure regarding their domain name ... they sign the message with their private key and send it. again they don't need a certificate ... which is primarily a way of distributing a public key ... since the domain name infrastructure is already in possession of the public key registered for the domain name.

the registration of that public key then provides the basis for fixing all sorts of domain name infrastructure problems. however, fixing the domain name infrastructure problems .... so that a CA can rely on the infrastructure for issuing SSL domain name server certificates ... also eliminates the reason a client needs to have an SSL domain name server certificate in order to authenticate the web server because it doesn't trust the domain name infrastructure.

If the domain name infrastructure is at risk ... not only does a client have a problem trusting it ... but CAs also have an issue ... and if CAs have an issue, then so does everybody that relies on certificates from those CAs.

If the domain name infrastructure is not at risk ... and/or has been fixed so that CAs can trust it for issuing certificates ... it turns out then that clients also can trust it ... which eliminates client trust issues leading to them wanting SSL server certificates for authenticating webservers (because they can't really trust the domain name infrastructure).

So either the domain name infrastructure is at risk ... for everybody ... or it is not at risk ... for anybody.

Finally, as part of eliminating integrity exposures in the domain name infrastructure where a domain name owner registers both their ip-address and public key (so that CAs can depend on the integrity of the domain name infrastructure for issuing SSL domain name server certificates) ... then it is possible for the domain name infrastructure to distribute a registered public key at the same time they distribute the registered ip-address as part of domain name resolution.

So fixing the domain name infrastructure so CAs can rely on the integrity ... not only eliminates the need for clients to need SSL domain name server certificates for authentication ... but it also provides a way for a much more efficient SSL setup protocol because the client can get the public key at the same time it gets the ip-address (a real time distribution w/o requiring CRLs, certificates or any of the other CA-based infrastructure gorp).
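The proposal above can be sketched as a toy model; everything here (the Registry class, the method names, the placeholder key string) is hypothetical illustration, not a real DNS or DNSSEC API:

```python
# Toy model of the proposal: the domain name infrastructure returns a
# registered public key alongside the ip-address in one real-time query,
# so no certificate is needed to distribute the key.

class Registry:
    """Hypothetical stand-in for the domain name infrastructure."""

    def __init__(self):
        self._entries = {}

    def register(self, domain, ip_address, public_key):
        # owner registers the ip-address AND the public key together
        self._entries[domain] = {"ip": ip_address, "pubkey": public_key}

    def resolve(self, domain):
        # one real-time resolution returns both pieces of information
        entry = self._entries[domain]
        return entry["ip"], entry["pubkey"]

registry = Registry()
registry.register("example.com", "10.0.0.1", "MFkwEwYH...")  # placeholder key
ip, pubkey = registry.resolve("example.com")
```

The point of the sketch is that the relying party never handles a certificate: the key arrives by the same real-time channel as the ip-address.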

random refs:
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Can I create my own SSL key?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Can I create my own SSL key?
Newsgroups: comp.security.unix
Date: Wed, 09 May 2001 17:18:39 GMT
alun@texis.com (Alun Jones) writes:
The reason the client is looking for assurance in the certificate is that it doesn't trust the DNS entry hasn't been spoofed. The certificate allows it to verify that the site providing that certificate has been verified by the trusted third party, through means _other_ than a DNS lookup, to be the owner of that domain.

there have been instances of domain name hijacking. the only thing that the CA can contact with regard to who actually owns a domain name is the domain name infrastructure.

the proposal to improve the integrity of the domain name infrastructure so that the CA can "trust" the validating of who owns the domain name as part of issuing a SSL domain name server certificate is to have the domain name owner register a public key at the same time they register the domain name.

however, improving the integrity of the domain name infrastructure (to improve its trust for use by CAs) ... also provides the mechanism for improving the integrity of the domain name infrastructure for everybody in the world ... negating the desire of clients for checking SSL domain name server certificates in order to authenticate the web server.

The CA method of improving the integrity of the domain name infrastructure so that CAs can trust the domain name infrastructure for validating the owner of a domain name (as part of issuing an SSL domain name server certificate) ... also provides the seed for real-time, trusted distribution of domain name public keys by the domain name infrastructure w/o needing any of the CA certificate, CRL, and/or other gorp. This would also significantly improve the performance of the SSL session setup w/o needing any of the associated certificate handling gorp.

random refs:
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Where are IBM z390 SPECint2000 results?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where are IBM z390 SPECint2000 results?
Newsgroups: comp.arch
Date: Fri, 11 May 2001 23:01:11 GMT
Sander Vesik writes:
Well, see, the problem here is the same as in security - if the potential loss from having less RAS capability in the Unix solution is less than the cost delta, it is pointless to buy the higher RAS system.

it is difficult to say ... having worked on both mainframe solutions and having done the original HA/CMP stuff (dlm, fall-over, etc) with my wife ... both have a lot of capability.

however, the mainframe "market" has a service that captures customer machine logs and publishes numbers for the mainframe market (including various clones, so that you get to see real live comparisons).

one of the discussions at the recent dependable computing workshop was that the "unix" (and other) vendors are somewhat resistant to giving up such information ....

random refs:
https://www.garlic.com/~lynn/subtopic.html#hacmp
https://www.garlic.com/~lynn/subtopic.html#disk
http://www.hdcc.cs.cmu.edu/
https://web.archive.org/web/20011004023230/http://www.hdcc.cs.cmu.edu/may01/index.html
https://web.archive.org/web/20011004023230/http://www.hdcc.cs.cmu.edu/may01/schedule.html

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

OT: Ever hear of RFC 1149? A geek silliness taken wing

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT:  Ever hear of RFC 1149?  A geek silliness taken wing
Newsgroups: bit.listserv.ibm-main
Date: Fri, 11 May 2001 23:33:21 GMT
Peter.Duffy@MAIL.CO.VENTURA.CA.US (Peter Duffy) writes:
For some geek humor:

http://news.bbc.co.uk/hi/english/sci/tech/newsid_1321000/1321176.stm


some other geek humor (recently posted in alt.folklore.computers but from early 80s)
https://www.garlic.com/~lynn/2001e.html#31

in general the ref'ed RFC is in the long hallowed tradition of april 1st RFCs

go to
https://www.garlic.com/~lynn/rfcietff.htm

select RFCs listed by term and scroll down to "april1" ... aka

3092 3091 2795 2551 2550 2549 2325 2324 2323 2322 2321 2100 1927 1926 1925 1924 1776 1607 1606 1605 1437 1313 1217 1149 1097 852 748

misc. other
https://www.garlic.com/~lynn/2001d.html#51
https://www.garlic.com/~lynn/2001e.html#23

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Can I create my own SSL key?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Can I create my own SSL key?
Newsgroups: comp.security.unix
Date: Sat, 12 May 2001 06:31:31 GMT
alun@texis.com (Alun Jones) writes:
You still need CRL - after all, if I register example.com, and get my private key and certificate, and then my payments lapse, and someone else buys example.com, there are now two people with certificates saying they own example.com. The first one has to be revoked.

no, the domain name infrastructure handles all that ... there is real-time, online distributed query ... with local caching that has "time-outs".

in the CA case .... it creates (effectively) replicated, R/O distributed copies (originally primarily for offline situations) with very long "time-outs" (aka a certificate is logically a R/O, replicated cached copy of the database entry at the CA ... the CA uses CRLs to distribute the equivalent of "cache" invalidation signals because the lifetimes of the cached copies are on the order of a year).

the domain name infrastructure supports real-time, online queries, with distributed cached copies ... but with typical time-outs on the order of minutes to hours. as a result, online queries return to the master database much more frequently than typical CRLs would be distributed (if CRLs for the general SSL domain name server certificate scenario were something other than somebody's dream).

For the general SSL domain name server certificate case, the issue of CRLs is purely hypothetical. Furthermore, for the general SSL domain name server certificate case, in theory (again ... these CRL things are purely hypothetical in these cases) ... all CAs issuing SSL domain name server certificates would need to have their CRLs reach all clients in the world that potentially could be remotely interested in contacting the servers in question.

Lets say there are on the order of 100 million clients in the world which could possibly wish to contact servers that make use of SSL domain name server certificates ... and lets say that there are possibly 20 CAs in the world that issue SSL domain name server certificates ... and purely for arguments sake lets assume that these (non-existent) CRLs were distributed once a day by each of these 20 CAs to each of the 100 million possible clients in the world. We are now talking about traffic on the order of some really major spamming.
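The arithmetic in the paragraph above can be checked directly; the client and CA counts are from the text, while the average CRL size is an assumption added purely for this sketch:

```python
# Back-of-the-envelope estimate of daily CRL-broadcast traffic in the
# hypothetical scenario described above.

clients = 100_000_000        # possible relying-party clients (from the text)
cas = 20                     # CAs issuing SSL domain name server certs (from the text)
crl_size_bytes = 10 * 1024   # ASSUMED average CRL size: 10 KB (illustration only)

messages_per_day = clients * cas                 # 2,000,000,000 messages/day
traffic_per_day = messages_per_day * crl_size_bytes

print(messages_per_day)
print(traffic_per_day / 2**40)                   # roughly 18-19 TB/day
```

Even at a modest assumed CRL size, two billion daily messages is indeed traffic on the scale of a major spamming operation.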

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Where are IBM z390 SPECint2000 results?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where are IBM z390 SPECint2000 results?
Newsgroups: comp.arch
Date: Sat, 12 May 2001 06:40:20 GMT
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Now, this balance is different between the HPC area and the 'transaction processing' area, but even there I have reason to believe that hardware failures have not dominated over software ones since the 1970s. At the high end, which meant mainframes until recently, of course.

a quote from one of the major financial transaction processing services a couple of years ago was that they had 100% availability for over six years, which they attributed primarily to
• IMS hot-standby
• automated operator

random ref:
https://www.garlic.com/~lynn/2001d.html#70

however, my original point about industry reporting services was that the existence of such an industry reporting service (for hardware failures, downtime, soft failures, etc) is indicative of the importance that the customers in the market segment place on availability (i.e. they need metrics in order to make informed decisions) ... conversely, the lack of such a service may indicate that the market segment is still maturing.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

VM/370 Resource Manager

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: VM/370 Resource Manager
Newsgroups: alt.folklore.computers
Date: Sat, 12 May 2001 06:57:16 GMT
25 years old.

much of the work i had done while an undergraduate in the '60s was incorporated into CP/67 Release 3.1 and 3.2. It was dropped in the CP/67 to VM/370 conversion ... and then re-introduced.

a couple of the things involved in getting it released

1) they told me that the company had yet to charge for SCP (aka kernel) software ... there had been fee licenses for application software but this was going to be the first case of fee licenses for operating system software. I got to spend six months off and on with business practices and pricing people formulating how the company would price for operating system software.

2) an automated benchmarking methodology was developed as part of the resource manager and over 2000 benchmarks were performed, taking three months elapsed time, in calibrating and verifying the resource manager.

related information:

https://www.garlic.com/~lynn/subtopic.html#wsclock
https://www.garlic.com/~lynn/subtopic.html#fairshare

following is off a "blue letter" wall plaque
VM/370 Resource Management PRPQ Is Announced

5799-ARQ (PRPQ P09006)

The VM/370 Resource Management PRPQ consists of a series of modifications to the VM/370 control program. The PRPQ objective is to improve VM/370 resource management for the larger VM/370 users (Model 155 II and above with one megabyte or more of storage). Based on the workload and options selected by the installation, problem state throughput and terminal response time for interactive transactions can be expected to improve. It is also expected that the system will be able to drive a larger number of interactive terminals.

Highlights

Enhancements are made to VM/370 in these areas:

Scheduling Algorithm. A fair share approach to distribute the resources of the system equally among the users with improved interactive performance on trivial commands.

Page Migration. Designation of preferred paging areas on DASD with migration to other devices is based on how long the pages are unused.

Swaptable Migration. Seldom used segment tables are swapped to DASD thereby freeing up main storage.

Reset Pages and Time Stamp Segments. The working set algorithm improves page selection, while time stamping facilitates page migration and swaptable migration.

Working Set Estimate. Dynamically adjusted multiprogramming levels are achieved by periodic evaluation of total system performance based on feedback control.

Fast Redispatch Extension. The number of cases where the fast redispatch implementation is used after privileged instruction simulation and I/O interruptions is increased.

Enable Window. It increases the extent to which VM/370 runs enabled and thus can accept I/O and external interruptions.

Set Favored Extension. The specification of multiple users with the set favored percent option is provided.

Indicate Command Extension. Additional performance status data is made available to the systems performance evaluation routine.

Selective Path Length Reductions.

Publications. Documentation consists of updates to the VM/370 User's Guide, a programmer and system logic guide, and an installation guide contained on the distribution tape as Conversational Monitor System (CMS) Disk Dump print files. A memo will be provided with each tape, containing a description of the contents and instructions for its use.

Marketing Information. Regional VM/370 support representatives have been provided with VM/370 Resource Management PRPQ documentation and can provide marketing assistance.

The monthly charge is $850. Programming service classification is A. Planned availability is July 30, 1976, and it will be distributed as a separately orderable tape from PID. Pre-installation test is two months.

The VM/370 Resource Management PRPQ is designed to a base of VM/370 Release 3. Program update services will be available with every third VM/370 program level change (PLC) from the initial PRPQ release.

Education. Will be made available on an RPQ basis

Ordering Information. Is on the reverse side.

J. M. Henson Vice President Market Planning

Announced via ITPS, see Memorandum to Branch Managers B76-72

Release Date: May 11, 1976

Distribution: DP managers, marketing representatives, system engineers, and administrative account specialists; FE managers and programming support representatives


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Can I create my own SSL key?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Can I create my own SSL key?
Newsgroups: comp.security.unix
Date: Sat, 12 May 2001 07:48:43 GMT
alun@texis.com (Alun Jones) writes:
You still need CRL - after all, if I register example.com, and get my private key and certificate, and then my payments lapse, and someone else buys example.com, there are now two people with certificates saying they own example.com. The first one has to be revoked.

basically in the CA case, an entity generates their public/private key and then contacts the "Registration Authority" portion of a CA to register their public key. The CA then does some other magic and sends back a certificate containing the public key. The "owner" now can forward the certificate along with any communication to relying parties (and the relying party doesn't have to contact the CA).

in the domain name case, an entity generates their public/private key, somehow obtains an IP-address and then registers the public key and the IP-address with the domain name infrastructure for their domain name. The domain name infrastructure doesn't send back anything (much).

Other relying parties wishing to communicate with the domain name ... then ask the domain name infrastructure for the IP-address for the specific domain (they can't ask the domain name owner, because they don't yet have the domain name owner's IP-address in order to talk to them). However, the domain name infrastructure is perfectly capable of (and does) returning additional information (besides the IP-address) associated with the domain name. In the projected case of registering the public key, the domain name infrastructure could return both the IP-address and the public key as part of a real-time query.

There are no certificates ... and therefore there is no requirement for a certificate invalidation protocol (the domain name infrastructure supplies ip-address information today w/o the need for certificates).

The domain name infrastructure already handles the case where somebody else obtains the domain name "example.com" and registers a different IP-address. The case of all the relying parties with the "old" IP-address for the old owner of "example.com" is pretty moot since any caching of the information has time-outs typically on the order of minutes or hours. New queries by relying parties get the new ip-address. If a public key was registered for the domain, the domain name infrastructure could also return the appropriate public key in response to a real-time query.
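The cache-timeout behavior described above can be illustrated with a toy TTL cache (all names here are hypothetical, and the TTL is shrunk to fractions of a second so the example runs instantly, where real domain name caching uses minutes to hours):

```python
# Toy illustration (not real DNS or PKI code): a TTL-bounded cache entry
# goes stale and is re-fetched automatically, with no explicit
# invalidation (CRL-style) message needed.

import time

class TTLCache:
    def __init__(self, ttl):
        self.ttl = ttl
        self.store = {}                      # name -> (value, fetch_time)

    def get(self, name, fetch):
        value, when = self.store.get(name, (None, 0.0))
        if value is None or time.monotonic() - when > self.ttl:
            value = fetch(name)              # go back to the master database
            self.store[name] = (value, time.monotonic())
        return value

authoritative = {"example.com": "10.0.0.1"}
cache = TTLCache(ttl=0.01)                   # toy TTL; real DNS: minutes-hours

ip1 = cache.get("example.com", authoritative.get)
authoritative["example.com"] = "10.9.9.9"    # domain changes owner
time.sleep(0.02)                             # TTL expires
ip2 = cache.get("example.com", authoritative.get)
print(ip1, ip2)
```

After the time-out, the relying party's next query simply picks up the new owner's address; nothing has to be broadcast to every holder of the stale value.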

The primary difference between the CA design point and the domain name infrastructure ... is that the domain name infrastructure is "online" and the CA design point is for "offline" relying parties ... i.e. relying parties that need to have something verified and they are unable to perform an online connection to the CA in order to get the real time information. Instead, for these offline relying parties, "certificates" act as a "stand-in" for being able to contact the CA directly in real-time.

By comparison, the domain name infrastructure is an "online" design point, relying parties directly contact the domain name infrastructure in real time for information. There is no need for having the (frequently stale) information packaged up into replicated offline packages (aka certificates) and sprayed all over the universe (with the CA having absolutely no idea where all the possible places the stale, replicated information packages might exist).

Furthermore, with the CA having absolutely no idea where all the possible places the stale, replicated information packages (aka certificates) might have propagated to ... in order to perform an information/cache invalidation (aka CRL), it needs to broadcast a message to all possible places. In the case that such a CRL strategy were ever really to come into existence for the general SSL domain name server certificate case, it would probably be mistaken for one of the largest spamming events ever concocted (i.e. attempting to transmit the invalidation information to every potential client relying party in the world).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Where are IBM z390 SPECint2000 results?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where are IBM z390 SPECint2000 results?
Newsgroups: comp.arch
Date: Sat, 12 May 2001 07:56:27 GMT
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Yes, but this is a bit deceptive. The lesser question is how many of the automatic operator interventions were to sort out hardware problems and how many to sort out software ones. I would guess that they were comparable.

most of the operator requests tend to be to confirm some operation and/or perform some action.

automated operator tends to have a set of heuristics that intercept operator requests and attempt to provide automated action for recognizable situations.

the automated operator operations don't tend to be of the nature of handling hardware and/or (a lot of) software failures ... but more mundane stuff. the issue with automated operator is that human mistakes performing mundane tasks were resulting in service failures (i.e. hardware could be perfect, and majority of the software could be perfect ... but service could still be interrupted because of human error).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Where are IBM z390 SPECint2000 results?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where are IBM z390 SPECint2000 results?
Newsgroups: comp.arch
Date: Sat, 12 May 2001 14:32:33 GMT
Anne & Lynn Wheeler writes:
however, my original point about industry reporting services was that the existence of such an industry reporting service (for hardware failures, downtime, soft failures, etc) is indicative of the importance that the customers in the market segment place on availability (i.e. they need metrics in order to make informed decisions) ... conversely, the lack of such a service may indicate that the market segment is still maturing.

SLAs (service level agreements) tend to be a fairly standard way of dealing with IT organizations (either in-house or out-sourced), including penalty clauses for missing targets.

An IT organization is likely to need pretty good metrics about various component & system availability before signing up for some penalty clauses (for instance, to calculate the risk/probability associated with making a profit or not .... or whether the in-house IT executives get an incentive bonus that year).
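As a hedged illustration of that kind of calculation, with every number below invented purely for the example:

```python
# Sketch: estimating expected SLA penalty exposure from availability metrics.
# All figures are ASSUMED for illustration, not from the text.

availability = 0.9995            # ASSUMED measured service availability
hours_per_year = 24 * 365
target_downtime = 4.0            # ASSUMED SLA allowance: hours/year
penalty_per_hour = 50_000        # ASSUMED penalty: $/hour beyond target

expected_downtime = hours_per_year * (1 - availability)   # about 4.38 hours
excess = max(0.0, expected_downtime - target_downtime)
expected_penalty = excess * penalty_per_hour

print(round(expected_downtime, 2), round(expected_penalty))
```

Without trustworthy availability metrics, none of these inputs can be estimated, which is the point of the paragraph above.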

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Can I create my own SSL key?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Can I create my own SSL key?
Newsgroups: comp.security.unix
Date: Sat, 12 May 2001 16:24:00 GMT
alun@texis.com (Alun Jones) writes:
You still need CRL - after all, if I register example.com, and get my private key and certificate, and then my payments lapse, and someone else buys example.com, there are now two people with certificates saying they own example.com. The first one has to be revoked.

possibly also of some interest, part of a thread on "The Fundamental Inadequacies of Conventional PKI":
https://www.garlic.com/~lynn/aadsm5.htm#conpki

and

The Shocking Truth About Digital Signatures and Internet Commerce
http://www.smu.edu/~jwinn/shocking-truth.htm
https://web.archive.org/web/20020403012718/faculty.smu.edu/jwinn/
NOTE moved to
http://www.law.washington.edu/Faculty/Winn/Publications/The%20Emperor's%20New%20Clothes.htm

Conventional Public Key Infrastructure: An Artefact Ill-Fitted to the Needs of the Information Society
http://www.anu.edu.au/people/Roger.Clarke/II/PKIMisFit.html

The Fundamental Inadequacies of Conventional Public Key Infrastructure
http://www.anu.edu.au/people/Roger.Clarke/II/ECIS2001.html

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

"IP Datagrams on Avian Carriers" tested successfully

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "IP Datagrams on Avian Carriers" tested successfully
Newsgroups: alt.folklore.computers
Date: Mon, 14 May 2001 16:19:10 GMT
ehrice@his.com (Edward Rice) writes:
"IP Datagrams on Avian Carriers," known to supporters and detractors alike as RFC 1149, has been successfully tested by a team of Norwegian networkers. For details, see:

http://www.salon.com/tech/feature/2001/05/10/pigeons/index.html

and for details on the protocol, see:

http://www.neystadt.org/john/humor/rfc-1149.htm


from "Ever hear of RFC 1149? A geek silliness taken wing" in bit.listserv.ibm-main

Peter.Duffy@MAIL.CO.VENTURA.CA.US (Peter Duffy) writes:
For some geek humor:

http://news.bbc.co.uk/hi/english/sci/tech/newsid_1321000/1321176.stm


some other geek humor (recently posted in alt.folklore.computers but from early 80s)
https://www.garlic.com/~lynn/2001e.html#31

in general the ref'ed RFC is in the long hallowed tradition of april 1st RFCs

go to
https://www.garlic.com/~lynn/rfcietff.htm

select RFCs listed by term and scroll down to "april1" ... aka

3092 3091 2795 2551 2550 2549 2325 2324 2323 2322 2321 2100 1927 1926 1925 1924 1776 1607 1606 1605 1437 1313 1217 1149 1097 852 748

misc. other
https://www.garlic.com/~lynn/2001d.html#51
https://www.garlic.com/~lynn/2001e.html#23

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

OT: Ever hear of RFC 1149? A geek silliness taken wing

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT:  Ever hear of RFC 1149?  A geek silliness taken wing
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 14 May 2001 16:44:57 GMT
gsduran@FRESNO.K12.CA.US (Gary Duran) writes:
About 15 years ago, there was a piece in ComputerWorld on master programmers. One of the programmers related a story where they needed to get plans from the south Bay Area--don't remember the city--to Santa Cruz in a fairly short time span, within an hour or so. They looked at land lines--much too slow and expensive. Satellites--not very available. Courier service--ever see traffic around San Jose?

I'm not sure how silly this is ... but i recently posted the blue letter for the VM/370 Resource Manager (25th ann. 5/11/76). One of the characteristics was that it was the first "charged-for" SCP code (i.e. there had been fee licences for application code prior to the VM/370 Resource Manager ... but not SCP code; I got to spend a lot of time figuring out stuff with various business practices people related to policy and practices for SCP fee licences).

random ref:
https://www.garlic.com/~lynn/2001e.html#45

two other characteristics

1) for people that remember muscle cars of the '60s and the slogan "The Racer's Edge" ... the module implementing the dynamic adaptive feedback control stuff was DMKSTP (all VM/370 modules carried the 3-letter prefix DMK).

2) dynamic adaptive code tuned the operation to load and configuration in real time; however some of the product people insisted that because the MVS SRM had all these tuning parameters so that system programmers could attempt to manually tune a system to the load and configuration ... the VM/370 RM needed similar tuning parameters before it could be shipped. So a couple of tuning parameters were implemented in a module called DMKSRM. Now, since it is 25 years later ... all the code for the implementation was published and all the formulas for how the parameters worked were published; however, nobody caught the joke (within the last couple years, I ran into some recent graduate claiming to have been taught the "wheeler scheduler"). in dynamic adaptive feedback control algorithms, besides the straight-forward algorithm there is something called degrees of freedom ... basically bounds on the range that values can take. In any case, the manual tuning parameters had significantly lower degrees of freedom than the dynamic adaptive parameters in the formulas (to some extent a system programmer could do no wrong since the dynamic adaptive code could compensate for most of what they might try and do).
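A minimal sketch of the general idea (this is not the actual DMKSTP/DMKSRM code, and all the constants are invented): a feedback term with a wide allowed range dominates a manual knob clamped to a narrow band, so the adaptive part can compensate for almost any manual setting:

```python
# Illustrative feedback controller adjusting a multiprogramming level (MPL)
# toward a utilization target. The manual "tuning knob" has far fewer
# degrees of freedom (a narrower clamp) than the dynamic feedback term.

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def next_mpl(mpl, utilization, target=0.9, knob=0.0):
    # manual knob confined to a narrow band: low degrees of freedom
    knob = clamp(knob, -0.05, 0.05)
    # dynamic feedback term allowed a much wider range
    feedback = clamp(target - utilization + knob, -0.5, 0.5)
    return max(1, round(mpl * (1 + feedback)))

mpl = 10
for u in (0.6, 0.7, 0.85, 0.95, 0.99):   # utilization rising toward target
    mpl = next_mpl(mpl, u, knob=0.05)    # even an extreme knob setting
print(mpl)
```

Because the knob's clamp is so much tighter than the feedback term's, its influence on the result is marginal, which is the joke the post describes.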

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Pre ARPAnet email?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Pre ARPAnet email?
Newsgroups: alt.folklore.computers
Date: Mon, 14 May 2001 17:07:28 GMT
Bernie Cosell writes:
Obsoleting? have 2822 and 2821 become standards already? I thought they were just at the beginning of the standards track....

new STD1 (2800) is out today with a new format for some sections ... note the verbiage for STD4, STD10 and a couple others.


--------   Internet Official Protocol Standards                2800  1
--------   Assigned Numbers                                    1700  2
--------   Requirements for Internet Hosts - Communication     1122  3
              Layers
--------   Requirements for Internet Hosts - Application       1123  3
              and Support
--------   [Reserved for Router Requirements.  See RFC 1812]         4
IP         Internet Protocol                                    791  5
ICMP       Internet Control Message Protocol                    792  5
--------   Broadcasting Internet Datagrams                      919  5
--------   Broadcasting Internet datagrams in the presence      922  5
              of subnets
--------   Internet Standard Subnetting Procedure               950  5
IGMP       Host extensions for IP multicasting                 1112  5
UDP        User Datagram Protocol                               768  6
TCP        Transmission Control Protocol                        793  7
TELNET     Telnet Protocol Specification                        854  8
TELNET     Telnet Option Specifications                         855  8
FTP        File Transfer Protocol                               959  9
SMTP       [Reserved for Simple Mail Transfer Protocol (SMTP).      10
              See RFC 2821.]
SMTP-SIZE  SMTP Service Extension for Message Size Declaration 1870 10
MAIL       [Reserved for Internet Message Format.  See RFC          11
              2822.]
NTP        [Reserved for Network Time Protocol (NTP).  See          12
              RFC 1305.]
DOMAIN     Domain names - concepts and facilities              1034 13
DOMAIN     Domain names - implementation and specification     1035 13
DNS-MX     [Was Mail Routing and the Domain System (RFC974).        14
              Now Historic.]
SNMP       Simple Network Management Protocol (SNMP)           1157 15
SMI        Structure and identification of management          1155 16
              information for TCP/IP-based internets
Concise-MI Concise MIB definitions                             1212 16
MIB-II     Management Information Base for Network Management  1213 17
              of TCP/IP-based internets:MIB-II
EGP        [Was Exterior Gateway Protocol (RFC904).  Now            18
              Historic.]
NETBIOS    Protocol standard for a NetBIOS service on          1001 19
              a TCP/UDP transport: Concepts and methods
NETBIOS    Protocol standard for a NetBIOS service on          1002 19
              a TCP/UDP transport: Detailed specifications
ECHO       Echo Protocol                                        862 20
DISCARD    Discard Protocol                                     863 21
CHARGEN    Character Generator Protocol                         864 22
QUOTE      Quote of the Day Protocol                            865 23
USERS      Active users                                         866 24
DAYTIME    Daytime Protocol                                     867 25
TIME       Time Protocol                                        868 26
TOPT-BIN   Telnet Binary Transmission                           856 27
TOPT-ECHO  Telnet Echo Option                                   857 28
TOPT-SUPP  Telnet Suppress Go Ahead Option                      858 29
TOPT-STAT  Telnet Status Option                                 859 30
TOPT-TIM   Telnet Timing Mark Option                            860 31
TOPT-EXTOP Telnet Extended Options: List Option                 861 32
TFTP       The TFTP Protocol (Revision 2)                      1350 33
RIP1       [Was Routing Information Protocol (RIP).  Replaced       34
              by STD 56.]
TP-TCP     ISO transport services on top of the TCP:           1006 35
              Version 3
IP-FDDI    Transmission of IP and ARP over FDDI Networks       1390 36
ARP        Ethernet Address Resolution Protocol: Or converting  826 37
              network protocol addresses to 48.bit Ethernet
              address for transmission on Ethernet hardware
RARP       Reverse Address Resolution Protocol                  903 38
--------   [Was BBN Report 1822 (IMP/Host Interface).  Now          39
              Historic.]
IP-WB      Host Access Protocol specification                   907 40
IP-E       Standard for the transmission of IP datagrams        894 41
              over Ethernet networks
IP-EE      Standard for the transmission of IP datagrams        895 42
              over experimental Ethernet networks
IP-IEEE    Standard for the transmission of IP datagrams       1042 43
              over IEEE 802 networks
IP-DC      DCN local-network protocols                          891 44
IP-HC      Internet Protocol on Network System's HYPERchannel: 1044 45
              Protocol specification
IP-ARC     Transmitting IP traffic over ARCNET networks        1201 46
IP-SLIP    Nonstandard for transmission of IP datagrams        1055 47
              over serial lines: SLIP
IP-NETBIOS Standard for the transmission of IP datagrams       1088 48
              over NetBIOS networks
IP-IPX     Standard for the transmission of 802.2 packets      1132 49
              over IPX networks
ETHER-MIB  Definitions of Managed Objects for the Ethernet-    1643 50
              like Interface Types
PPP        The Point-to-Point Protocol (PPP)                   1661 51
PPP-HDLC   PPP in HDLC-like Framing                            1662 51
IP-SMDS    Transmission of IP datagrams over the SMDS Service  1209 52
POP3       Post Office Protocol - Version 3                    1939 53
OSPF2      OSPF Version 2                                      2328 54
IP-FR      Multiprotocol Interconnect over Frame Relay         2427 55
RIP2       RIP Version 2                                       2453 56
RIP2-APP   RIP Version 2 Protocol Applicability Statement      1722 57
SMIv2      Structure of Management Information Version         2578 58
              2 (SMIv2)
CONV-MIB   Textual Conventions for SMIv2                       2579 58
CONF-MIB   Conformance Statements for SMIv2                    2580 58
RMON-MIB   Remote Network Monitoring Management Information    2819 59
              Base
SMTP-Pipe  SMTP Service Extension for Command Pipelining       2920 60
ONE-PASS   A One-Time Password System                          2289 61

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Pre ARPAnet email?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Pre ARPAnet email?
Newsgroups: alt.folklore.computers,bit.listserv.ibm-main
Date: Mon, 14 May 2001 17:40:51 GMT
Anne & Lynn Wheeler writes:
new STD1 (2800) is out today with a new format for some sections ... note the verbiage for STD4, STD10 and a couple of others.

also showing up today were a number of "old" rfcs recently converted to machine-readable form from hardcopy

rfc3, rfc5, rfc6, rfc21, rfc23, rfc24, rfc25, rfc27, rfc28, rfc29, rfc30, rfc344, rfc567, rfc593

RFC6 ... discussion about BB&N providing character code conversion. This isn't an easy problem (in many cases). While an undergraduate in '68 I had put TTY/ASCII support into CP/67 ... which was incorporated and distributed as part of the standard release. There were some codes for which it was very difficult to provide symmetric conversion ... in at least one case, I had to map an ASCII character to a valid EBCDIC character because I needed some stand-in character in ASCII. On the 2741, the "at"-sign and "cent"-sign were on the same key, and CP/67 had a convention that used the (lowercase) "at"-sign (in line editing) for character delete and the "cent"-sign for line delete. The TTY keyboard didn't have a cent-sign ... so I mapped (it's been a number of years) the "left" bracket.
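the asymmetric-mapping problem can be sketched with a pair of translate tables (made up for illustration ... not the actual CP/67 tables); the point is that once the cent-sign borrows "[", a genuine "[" can no longer round-trip:

```python
# illustrative tables only -- nothing like the real CP/67 translate tables
ebcdic_to_tty = {'@': '@', '\u00a2': '['}   # cent-sign borrows left bracket
tty_to_ebcdic = {'@': '@', '[': '\u00a2'}   # left bracket maps back to cent-sign

def to_tty(s):
    """EBCDIC-side characters going out to a TTY/ASCII terminal."""
    return ''.join(ebcdic_to_tty.get(c, c) for c in s)

def to_ebcdic(s):
    """TTY/ASCII characters coming back in."""
    return ''.join(tty_to_ebcdic.get(c, c) for c in s)

# the line-delete character survives the round trip ...
assert to_ebcdic(to_tty('\u00a2')) == '\u00a2'
# ... but a genuine left bracket cannot: it comes back as a cent-sign
assert to_ebcdic('[') == '\u00a2'
```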

Then in late '68, because of various deficiencies in the mainframe 2702 terminal controller, four of us started a project to build the first mainframe PCM control unit using Interdata3s. We had to build our own channel attach card that attached the Interdata3 to the mainframe I/O channel. An emulated line-scanner was built in the Interdata3, targeted at supporting both dynamic line-speed recognition and dynamic terminal-type recognition (as part of the original TTY support in CP/67, I had expanded the existing dynamic terminal-type recognition to TTY ... however, the 2702 had a deficiency: while the line-scanner could be changed for each line ... the hardware oscillator setting the line speed was hard-wired).

random refs:
https://www.garlic.com/~lynn/submain.html#360pcm


Network Working Note                                    Steve Crocker, UCLA
RFC-6                                                   10 April 1969

CONVERSATION WITH BOB KAHN

I talked with Bob Kahn at BB&N yesterday.  We talked about code conversion
in the IMP's, IMP-HOST communication, and HOST software.

BB&N is prepared to convert 6, 7, 8, or 9 bit character codes into 8-bit
ASCII for transmission and convert again upon assembly at the destination
IMP.  BB&N plans a one for one conversion scheme with tables unique to the
HOST.  I suggested that places with 6-bit codes may also want case shifting.
Bob said this may result in overflow if too many case shifts are necessary.
I suggested that this is rare and we could probably live with an overflow
indication instead of a guarantee.

With respect to HOST-IMP communication, we now have a five bit link field
and a bit to indicate conversion.  Also possible is a 2-bit conversion
indicator, one for converting before sending and one for converting after.
This would allow another handle for checking or controlling the system.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

line length (was Re: Babble from "JD" <dyson@jdyson.com>)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: line length (was Re: Babble from "JD" <dyson@jdyson.com>)
Newsgroups: alt.folklore.computers
Date: Tue, 15 May 2001 21:01:00 GMT
Steve O'Hara-Smith writes:
Damn, and I thought stopping at 72 was to allow room for the            0000001
sequence number                                                         0000002
                                                                        0000003
How do you quote with punched cards ?                                   0000004

73-80 would be the sequence number; 72 was the continuation col. (at least for some things); needed to stop at 71
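the card layout above can be sketched as fixed-column slicing of an 80-column record (the continuation-in-72 convention is the assembler-style one; the field names are mine):

```python
def split_card(card):
    """Slice an 80-column card image into its conventional fields."""
    card = card.ljust(80)          # pad short records out to 80 columns
    return {
        "text": card[0:71],        # columns 1-71: the statement itself
        "cont": card[71],          # column 72: continuation indicator
        "seq":  card[72:80],       # columns 73-80: sequence number
    }

card = "      X = Y + Z".ljust(71) + "*" + "00000010"
fields = split_card(card)
assert fields["cont"] == "*"        # column 72 flags a continuation
assert fields["seq"] == "00000010"  # resequencable without touching the code
```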

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Pre ARPAnet email?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Pre ARPAnet email?
Newsgroups: alt.folklore.computers,bit.listserv.ibm-main
Date: Tue, 15 May 2001 21:09:49 GMT
jcmorris@jmorris-pc.MITRE.ORG (Joe Morris) writes:
You call that a "deficiency" but lots of IBM customers called it other names that probably shouldn't be repeated in a family-oriented newsgroup. <g>

Joe Morris


it was sufficiently deficient to prompt us to build our own channel attach card and our own control unit (thereby originating the PCM control unit business). one might also be tempted to claim that originating the PCM control unit business was at least partially responsible for the terribly baroque character of the PU4/PU5 (aka ncp/vtam) interface.

random ref:
https://www.garlic.com/~lynn/submain.html#360pcm

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Need explaination of PKI and Kerberos

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Need explaination of PKI and Kerberos
Newsgroups: comp.security.misc
Date: Wed, 16 May 2001 13:46:02 GMT
"Philip Tanner" writes:
Could someone tell me what the differences are between these two security mechanisms? My understanding is Kerberos is meant to work within a single "domain", while PKI is targeted towards sending stuff to other sites across the internet. Both encrypt the data going across the wire. I'd also be interested in finding out if running either of these packages cause any problems that might be aggravating to the users. Also, what do sites that run multiple platforms (HP-UX, SunOS, Solaris, etc) do to implement Kerberos? From my investigations it seems like HP-UX 10.20 doesn't support it, SunOS doesn't support it, but Solaris does (regarding the exporting of filesystems using kerberos). Thanks in advance for any information.

Phil Tanner


both kerberos and PKI can be implemented either single-domain &/or cross-domain. Both can support authentication and secrecy (i.e. encrypted transmission ... which basically comes down to key agreement for the encryption).

issues about cross-domain operation for either kerberos or PKI come down to business process things like trust. i actually attended some sessions at MIT circa 88/89 discussing cross-domain kerberos ... and the basic problem wasn't so much technical ... but business trust issues.

many people, when they think of PKI, typically think of SSL. A "real" PKI is typically considered to have registration, certification, trusted third parties, x.509 certificates, certificate management, revocation, etc. The currently operating SSL infrastructure has only a subset of what is considered (upper-case) PKI ... aka missing revocation and other characteristics required of a business infrastructure ... reference the recent "Can I create my own SSL key" thread in the comp.security.unix ng.

random refs:
https://www.garlic.com/~lynn/2001e.html#43
https://www.garlic.com/~lynn/2001e.html#46

another well-known public key infrastructure (i.e. lower-case pki, not upper-case) is PGP, with support in many mail systems.

part of the issue is that "trust" is a somewhat ephemeral issue (especially cross-domain). For some people the existence of a tangible, defined object like an x.509 certificate may provide a degree of added comfort ... not necessarily justified ... as in some of the SSL discussions:

https://www.garlic.com/~lynn/subpubkey.html#sslcerts

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

line length (was Re: Babble from "JD" <dyson@jdyson.com>)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: line length (was Re: Babble from "JD" <dyson@jdyson.com>)
Newsgroups: alt.folklore.computers
Date: Wed, 16 May 2001 14:02:11 GMT
jmfbahciv writes:
The problem with that edit trail is you only know who the last modifier was. TOPS-10 assigned an edit number to each "fix". The edit history could have a description that was not limited to 8 characters--human and otherwise.

Funny how some times we have "things" a certain way, but don't know why.

It's not "some"times. It's most of the time. Whenever I wanted to change a procedure, I always tried to find out why/how it got established in the first place so I could determine whether the problem that had been fixed by all of that old history would exist if I eliminated or changed the procedure. A lot of times changing one teensy tiny thing was the equivalent of a butterfly flapping its lips.


some ibm infrastructure would have a 3-5 character module identifier prepended to the numeric sequence number in cols. 73-80.

circa '70 or so, a source update system for cp/67-cms was developed that would put the update id/number in cols 64-71 (i.e. the tail end of the comment area) of the source code. Starting with VM/370, official "fixes" were assigned a sequential fix-id number ... and the number would be placed in cols 64-71 of the source. I've seen some recent "fix-ids" in the 62,xxx range (some 62,000 fixes over an approx. 30-year lifetime).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Wireless Interference

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Wireless Interference
Newsgroups: alt.folklore.computers
Date: Wed, 16 May 2001 16:14:02 GMT
Alexandre Pechtchanski writes:
The scientific name for this is "modems' mating call" ;-)

the protocol negotiation used to be much, much simpler ... basically dynamic speed recognition .... where you strobed signal rise/fall edges ... what we implemented in the 1st 360 PCM control unit; recent ref in the arpanet email thread
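a rough sketch of the idea (invented numbers ... nothing like the actual Interdata line-scanner microcode): time the rise/fall edges of a known character, take the shortest edge-to-edge gap as one bit cell, and snap to the nearest standard rate:

```python
def guess_baud(edge_times, standard_rates=(110, 134.5, 300, 1200)):
    """Estimate line speed from the timestamps of signal transitions."""
    gaps = [b - a for a, b in zip(edge_times, edge_times[1:])]
    bit_time = min(gaps)                 # shortest run is a single bit cell
    estimate = 1.0 / bit_time
    # snap to the nearest standard rate rather than trust the raw estimate
    return min(standard_rates, key=lambda r: abs(r - estimate))

# edges spaced in multiples of 1/300 s, as a 300-baud line would produce
t = 1 / 300.0
assert guess_baud([0.0, t, 3 * t, 4 * t, 7 * t]) == 300
```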

https://www.garlic.com/~lynn/2001e.html#53
https://www.garlic.com/~lynn/2001e.html#55

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Design (Was Re: Server found behind drywall)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Design (Was Re: Server found behind drywall)
Newsgroups: alt.folklore.computers
Date: Wed, 16 May 2001 20:56:41 GMT
Steve O'Hara-Smith writes:
One last point, on a different note, most engineering success comes from repeating a known solution. With software when the solution is known you don't repeat it (if you have any sense) you reuse it, as a result all new software is (or should be) an attempt to solve a new problem. Even if that problem is "How do we prevent these morons from screwing up this essentially simple task".

I got to repeat the joke about (univ) computer science losing its brain every 4-6 years in a couple of talks ... one was on assurance at the intel developer's conference and the other was last week at the second HPCC workshop ... i.e. every 4-6 years there is a brain wipe and the same things get re-invented ... including the same bugs. There are a couple of bugs that I believe I've had to fix at least three different times in the last 35 years, in three totally different generations of systems.

random refs:
https://www.garlic.com/~lynn/aadsm5.htm#asrn4
https://www.garlic.com/~lynn/aepay6.htm#erictalk
https://www.garlic.com/~lynn/

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Estimate JCL overhead

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Estimate JCL overhead
Newsgroups: bit.listserv.ibm-main
Date: Thu, 17 May 2001 03:18:47 GMT
jlender@HOTMAIL.COM (jim lender) writes:
We run about 5000 jobs weekly for a business application. Each job has an average of 10 steps, most of them running the same 5 programs. We're looking at improving the application's efficiency by running these 50000 steps in about 20 jobs, using REXX instead of JCL. The application programs themselves won't run any faster (that's another project) but we hope to make the whole process a lot more efficient by removing the largely unaccounted-for system overhead created by 5000 jobs, 50,000 steps and over 400k DD statements.

Before we start setting up new jobs and REXX execs, we'd like to have a rough idea of how much system resources we can expect to save. SMF tells us how much CPU and I/O time is used by the application programs themselves, but, since this won't change much, we're not really interested. What we don't know is how to calculate the largely hidden overhead that must be generated by the large amount of JCL currently used by the application. As far as we can tell, the overhead does not appear in any of the job-related SMF records, but we suspect it is significant in the following areas:


the way i did it long, long ago as an undergraduate ... was to run some series of jobs but do something like substituting iefbr14 or a similar null operation ... and compare the resources for the normal runs against the run with all the iefbr14s.

then do something similar with REXX execs for an equivalent series of "null" steps.

long ago and far away
https://www.garlic.com/~lynn/2000d.html#50
https://www.garlic.com/~lynn/99.html#81

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Estimate JCL overhead

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Estimate JCL overhead
Newsgroups: bit.listserv.ibm-main
Date: Thu, 17 May 2001 15:00:36 GMT
bdissen@DISSENSOFTWARE.COM (Binyamin Dissen) writes:
Why would that help?

The C/I time would be charged to Jes.


the other part of the process ... if you can't get the system to give up the appropriate numbers ... is to run on an otherwise idle (single) processor (real, lpar and/or virtual machine) and record utilization before & after the run of the dummy streams. the difference is the total hardware used by the system to execute the process.

if getting the total counts before & after the run proves too difficult, there is a kludge shortcut that i've used a number of times over the years.

say the dummy job stream takes 5-10 minutes (i.e. E=10 minutes) elapsed time to run.

the test setup is similar: as before, an idle (single) processor (real, lpar and/or virtual machine).

create a tight-loop, compute-bound background job that takes E+5 minutes of cpu and runs in E+5 elapsed time (aka 5 minutes longer than the test run). start the compute-bound job in the background (worst priority you can assign); start the dummy job stream as soon as the cpu is pegged.

the increase in the elapsed time for the tight-loop compute-bound job is the cpu used by the system for processing the dummy job stream.
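the arithmetic of the shortcut can be written down directly (the numbers below are invented): on an otherwise idle processor the background loop soaks up every spare cycle, so its elapsed-time stretch is exactly the cpu the system stole from it for the dummy stream:

```python
def system_cpu_overhead(loop_cpu_needed, loop_elapsed_observed):
    """CPU the system spent on the dummy stream, in the loop's units.

    Run alone on an idle processor, a compute-bound loop finishes in
    elapsed time equal to its cpu requirement; any stretch beyond that
    is cycles taken away from it by the system processing the dummy
    job stream.
    """
    return loop_elapsed_observed - loop_cpu_needed

# loop sized for E+5 = 15 cpu-minutes; with the dummy stream it took 17.5
assert system_cpu_overhead(15.0, 17.5) == 2.5   # minutes of system overhead
```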

previous post re: above from long ago and far away
https://www.garlic.com/~lynn/2001e.html#60
https://www.garlic.com/~lynn/2000d.html#50
https://www.garlic.com/~lynn/99.html#81

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Modem "mating calls"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Modem "mating calls"
Newsgroups: alt.folklore.computers
Date: Thu, 17 May 2001 17:47:01 GMT
Lars Poulsen writes:
The first DSP modem was the TeleBit TrailBlazer. Instead of using a few tones, it split the channel into 256 narrow tone bands, divided these dynamically between the two directions and sent symbols of many bits (up to 200 or so) in parallel at a low symbol rate (something like 7 symbols per second comes to mind). For UUCP, where the protocol was essentially half-duplex, this was very effective, allowing 9600 bps over dial-up links.

i still have my telebit trailblazer t2500 ... and talked to the company at the time about the encoding ... i thot i could find the manual that included description of the encoding ... but it isn't in the box with the modem so with a little alta-vista ... i found the attached at (this is the original trailblazer you reference)
http://www.getty.net/texts/modemtxt.txt
PEP: Packetized Ensemble Protocol is a proprietary method used by Telebit in their Trailblazer modem series. Like the HST, PEP modems will only connect at high speed with other PEP modems. PEP communicates at 20600 bps, the highest speed in general use. PEP is based on a multi-carrier technique: the transmission channel is divided into 512 independent, very narrow channels. The main advantage is that no receiver adaptive equalizer is needed because each channel is very narrow compared to the overall channel bandwidth. The modulation rate in each narrow channel can be changed somewhat independently. Trailblazer is different from many other modems in that the decision to fall back to lower speeds is built into the modem protocol, rather than controlled by the user's computer port. Traditional modulation systems would have to fall back in larger steps. But there are three problems:

1. The turn-around delay is very long compared to conventional modulation techniques because data must be sent in large blocks. A typed character may take as much as half a second to be echoed back to the system that sent it. As a result, the system is not the best for interactive online sessions.

2. The Trailblazer receiver cannot track carrier phase jitter. Instead of canceling out phase jitter, PEP can only respond by lowering throughput.

3. The ability to transmit at the maximum rate when subject to some types of channel impairment is considerably less than for conventional modems. HOWEVER, the multiple channel technique offers extremely good immunity to impulse noise (the most common) because the impulse energy is distributed over narrow channels.

Due to the better overall performance of PEP, and the better turnaround time of HST, US Robotics had captured a lot of the general high speed traffic in the PC world, and Telebit captured the majority of similar high speed traffic in the Unix world prior to V.42bis.
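a back-of-envelope sketch of the multicarrier arithmetic in the quote (the per-channel figures are illustrative, not Telebit's actual tables): the aggregate rate is bits-per-symbol summed over the 512 channels times the low symbol rate, so impairment degrades throughput in small per-channel steps rather than big single-carrier fallbacks:

```python
def aggregate_bps(bits_per_symbol, symbol_rate):
    """Total rate: per-channel bits/symbol summed, times the symbol rate."""
    return sum(bits_per_symbol) * symbol_rate

clean = [6] * 512                # suppose 6 bits/symbol on every channel
impaired = [6] * 500 + [2] * 12  # impairment knocks a few channels down

assert aggregate_bps(clean, 7) == 21504  # same ballpark as the quoted 20600
# only the hit channels lose rate: a graceful, fine-grained fallback
assert aggregate_bps(impaired, 7) == 21504 - 4 * 12 * 7
```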


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Modem "mating calls"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Modem "mating calls"
Newsgroups: alt.folklore.computers
Date: Thu, 17 May 2001 17:54:37 GMT
Anne & Lynn Wheeler writes:
the box with the modem so with a little alta-vista ... i found the
attached at (this is the original trailblazer you reference)
                     ^ not

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Design (Was Re: Server found behind drywall)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Design (Was Re: Server found behind drywall)
Newsgroups: alt.folklore.computers
Date: Thu, 17 May 2001 20:57:39 GMT
swaim writes:
How about iterative development? That's what my group's been doing, with user representatives playing with our apps, and providing reasonably fast feedback. We seem to be converging on something deployable to production.

one of my approaches was never to do anything with a deliverable more than 3 months away ... i.e. in production, where the rubber met the road with the users (larger projects were broken into incremental deliverables every couple of weeks ... in the case of the resource manager i could have incremental deliverables at internal datacenters ... but the 3-month elapsed time to run all the calibration and regression tests was something of an issue).

it also meant that I would take a job in the IT support group rather than be in development or RSM ... i.e. it was a lot easier to get hands on the real iron when it came time for deploying something new.

i do joke that during one period I would work first shift in bldg 28 at my "official" job supporting the computing center, 2nd shift in bldg 14 supporting disk engineering (for the fun of it), and 3rd shift in bldg 90 doing a special project for the STL computing center in support of IMS development ... and periodically take off to drive up to palo alto to install a new operating system at HONE. In spare time I could do email and HSDT.

random refs:
https://www.garlic.com/~lynn/subtopic.html#wsclock
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#hacmp
https://www.garlic.com/~lynn/subnetwork.html#hsdt
https://www.garlic.com/~lynn/subtopic.html#disk
https://www.garlic.com/~lynn/subtopic.html#hone

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Stoopidest Hardware Repair Call?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Stoopidest Hardware Repair Call?
Newsgroups: alt.folklore.computers
Date: Thu, 17 May 2001 23:15:28 GMT
Eric Sosman writes:
IBM rousted somebody out of bed and sent him to our little backwater. Upon arrival he assessed the situation, noticed that the light bezels were disarranged, restored them to their proper places, and Lo! the printer no longer complained about SYNC CHECK. To justify the whopping bill we were going to pay for an out-of- contract-hours emergency-response Service call, he even cleared the remaining problem by putting in a fresh box of paper ...

random ref:
https://www.garlic.com/~lynn/2001.html#3

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

line length (was Re: Babble from "JD" <dyson@jdyson.com>)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: line length (was Re: Babble from "JD" <dyson@jdyson.com>)
Newsgroups: alt.folklore.computers
Date: Fri, 18 May 2001 14:38:33 GMT
jcmorris@jmorris-pc.MITRE.ORG (Joe Morris) writes:
IBM's VM system (shipped mostly in source) implemented source changes by delta decks except at major release points, all driven by a control file that specified what deltas were involved. The result was that you got not only a forward edit transaction log, but a control log as well that provided a line for each of the delta files -- and those lines were also written to the object (hex) deck, so each deck was self-documenting for its edit history.

The procedure wasn't perfect, and never came anywhere close to the programming-like features of a makefile on a Unix box, but it sure was convenient for the sysprogs who worked with VM...and gave them one more item with which to tweak the nose of the MVS sysprogs.


IBM would ship cumulative monthly service called a "PLC" (program level change) in both binary and source (i.e. all source changes since the last major release plus the corresponding binaries). "Release 3 PLC 15" would indicate the 15th (monthly) service distribution for Release 3.

The customer, to build a new kernel, would "load" all the binaries into memory, and the memory image would then be written to disk. The "loading" process produced a "load map", which consisted of all the associated source filenames as comments plus the memory location addresses of all the symbols for each module. The associated filename comments in the loadmap would include the date/time that the specific binary module was created, followed by one line for each of the associated source files (with date/time).

So in the mid-80s, I'm on a business trip to the Madrid Scientific Center. They have a project digitizing images of lots of ancient documents that were going to be made available as part of the 1492 anniversary.

While I'm there, I visit a local movie theater. The local movie theater includes a 20 min. "short" produced at the university. In several scenes in the movie there is a wall of TV screens, all scrolling the same text across the screens at 1200 baud (around 2-6 lines/sec).

Not only do I recognize that the text is a VM/370 loadmap, but also the VM/370 Release and PLC, based on the names and dates of the change files (which I was just barely able to catch at the speed the text was scrolling).

Many of the change files have an eight character filename suffix of the form letter, five digit number, three letters. For most fixes and/or many changes, the suffix had an incrementing number that started at zero and was incremented for each new fix/change. This started nearly 30 years ago; I recently noted that the new number fields were in the range of 62k. I'm not sure what they plan on doing when the number rolls over to six digits.

The same suffix also appears in the comment field of each source line for that change file
https://www.garlic.com/~lynn/2001e.html#57

random refs:
https://www.garlic.com/~lynn/99.html#9
https://www.garlic.com/~lynn/2000g.html#36

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

line length (was Re: Babble from "JD" <dyson@jdyson.com>)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: line length (was Re: Babble from "JD" <dyson@jdyson.com>)
Newsgroups: alt.folklore.computers
Date: Fri, 18 May 2001 14:46:32 GMT
Anne & Lynn Wheeler writes:
Many of the change files have a eight character filename suffix of the form letter, five digit number, three letters. For most fixes and/

finger slip

of the form: letter, five digit number, two letters ... not three.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Estimate JCL overhead

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Estimate JCL overhead
Newsgroups: bit.listserv.ibm-main
Date: Fri, 18 May 2001 15:03:23 GMT
brad@TOSCINTL.COM (Brad Taylor) writes:
Someone mentioned in a subsequent post that this would be a good candidate for batch pipes. This may be true, but it is application dependent. The savings with batch pipes comes from the parallelization of processes; however, the only time the two processes run in parallel is during the output/input processing of the writer and reader of the pipe. If most of the process is consumed by application logic with little i/o then it is not as good a candidate, whereas if the output processing can significantly overlap with the input processing of the subsequent step, then you can get even more significant reductions.

the big win for pipes is when a majority of the I/O output from one step is to a temp file that is consumed by the next step. If there is a whole series of these steps ... where the majority of the I/O is sequentially passing data from one step to the next ... then pipes can show a big win by not actually having to do the intermediate I/O (each subsequent process in the sequence consumes output as soon as it is generated). This does have some increase in dispatching overhead, which is offset by a potentially big decrease in I/O processing.

Elapsed time can be decreased by eliminating serialization in each step involved in putting/retrieving intermediate/temp data to/from disk. Elapsed time may be further decreased if there is sufficient spare resources that get consumed by having larger number of operations running concurrently.
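the step-to-step flow can be sketched with generators standing in for the job steps (the step names are invented): each record moves to the next step as soon as it is produced, so no intermediate temp dataset is ever materialized:

```python
def extract():                 # step 1: produces records
    for i in range(5):
        yield f"record-{i}"

def transform(records):        # step 2: consumes each record as produced
    for r in records:
        yield r.upper()

def load(records):             # step 3: final consumer
    return list(records)

# records stream step to step; no intermediate temp dataset is written
out = load(transform(extract()))
assert out[0] == "RECORD-0" and len(out) == 5
```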

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

line length (was Re: Babble from "JD" <dyson@jdyson.com>)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: line length (was Re: Babble from "JD" <dyson@jdyson.com>)
Newsgroups: alt.folklore.computers
Date: Fri, 18 May 2001 15:45:27 GMT
jcmorris@jmorris-pc.MITRE.ORG (Joe Morris) writes:
The procedure wasn't perfect, and never came anywhere close to the programming-like features of a makefile on a Unix box, but it sure was convenient for the sysprogs who worked with VM...and gave them one more item with which to tweak the nose of the MVS sysprogs.

there was also a large percentage of various MVS components that were developed on VM/CMS ... using the VM procedures ... which then had to be post-translated into the MVS infrastructure (which resulted in some mismatch). For instance, JES2 source development used to be a totally CMS environment (I don't know if it still is).

the original implementation was done circa 1970, all in "script" (aka EXEC) processes using basic CMS components originally developed in the 67/68 time-frame. Even when the additional features provided by the EXEC processes were later merged as features into the standard command components, there weren't any substantive functional improvements (possibly one of the characteristics of a legacy system; if it ain't broke, don't fix it).

misc. refs at Melinda's web page
http://www.leeandmelindavarian.com/Melinda/
http://www.leeandmelindavarian.com/Melinda#VMHist

VM and the VM Community: Past, Present, and Future (includes early stuff on time-sharing, CTSS, multics, etc)

Development of 360/370 Architecture: A Plain Man's View

What Mother Never Told You About VM Service (a much more detailed description of the VM process).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Modem "mating calls"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Modem "mating calls"
Newsgroups: alt.folklore.computers
Date: Sat, 19 May 2001 00:18:27 GMT
jata@aepiax.net (Julian Thomas) writes:
It's also interesting to observe that applications (credit card authorization) that do not require high bandwidth still run at 300 baud, since the modems don't go through the "mating ritual". Next time you hear a credit card terminal making a call, listen carefully!

there was a study about upgrading the terminals to 28.8 because they were looking at transferring more data. there is an objective of doing the transaction in 7 seconds. after a lot of tests they found that the "mating ritual" (for 28.8) was taking 20-30 seconds ... the amount of data will have to increase significantly to break even with the "mating ritual" overhead.
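the break-even arithmetic is straightforward (the timings are the rough figures above; 10 bits/character assumed, and the `transfer_time` helper is hypothetical ... it ignores any protocol overhead):

```python
def transfer_time(nbytes, baud, handshake_s):
    """Elapsed seconds: handshake plus transmission at baud/10 chars/sec."""
    return handshake_s + nbytes / (baud / 10.0)

def faster_wins(nbytes, extra_handshake=25.0):
    """Does 28.8 (with its ~25s extra 'mating ritual') beat 300 baud?"""
    return transfer_time(nbytes, 28800, extra_handshake) < \
           transfer_time(nbytes, 300, 0.0)

# a ~200-byte authorization fits the 7-second objective at 300 baud ...
assert transfer_time(200, 300, 0.0) < 7.0
# ... and never recovers the long 28.8 handshake
assert not faster_wins(200)
# the payload has to approach a kilobyte before 28.8 breaks even
assert faster_wins(1000)
```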

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

line length (was Re: Babble from "JD" <dyson@jdyson.com>)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: line length (was Re: Babble from "JD" <dyson@jdyson.com>)
Newsgroups: alt.folklore.computers
Date: Sat, 19 May 2001 16:03:22 GMT
glass2 writes:
Didn't you ever have to use CLEAR (Control Library Environment And Resource system)? That was THE library system for MVS, at least, internally. It would do anything and everything for (to?) you. Plus, it had an interface so that you could access it from a VM system.

Seriously, though, it's one of the better library systems that I've used. While it's a real pain to set up and administer, it allows lots of flexibility. You can use it just as a source code library, or you can have it do all of the compiles, link-edits, builds, along with grouping and packaging release materials and archive materials. Plus, one of the features that it had that most other contemporary library systems were lacking was the ability for developers to do development in parallel without tripping over each other. It had a process to "merge" a "delta" into the base.


i never had to use clear myself. I had heard tales in the '70s of trouble that the JES2 group had with mapping back&forth between CMS<->CLEAR tho.

the "original" (cp/67) script/EXEC implementation had a process to merge parallel work/updates (done by an MIT co-op student) which seemed to work pretty well. That feature got dropped in the translation to VM/370.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Stoopidest Hardware Repair Call?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Stoopidest Hardware Repair Call?
Newsgroups: alt.folklore.computers
Date: Mon, 21 May 2001 23:39:40 GMT
cbh@ieya.co.REMOVE_THIS.uk (Chris Hedley) writes:
I've heard of assorted other problems with lasers and fibre-optics; one company I worked for didn't wire up one of the offices, since they decided to use a newfangled laser link since it was just across the road. Problem was that the thing ended up out of alignment every time a lorry went past. Another one, although I remain to be convinced of this, is a company which runs its fibre-optics alongside railway lines, and apparently they get huge amounts of packet loss every time a train goes by because of the vibration (again)

we did infrared modems between the tops of two buildings (the link crossed a major highway ... this was possibly at a time when getting a permit for a laser would have been difficult ... all the stuff about shining into people's eyes). the alignment problem was the uneven heating of the buildings by the sun as it moved across the sky (resulting in expansion/contraction of different sides of the buildings). there was a lot of fine-tuning of the placement of the modems to compensate for building lean caused by the variation in thermal expansion during the course of the day.

Before they were installed it was predicted the major problem would be large packet loss during heavy rain and/or snow, which never really materialized. There were a few packets lost during a blinding, white-out snow storm ... during which people were unable to get into work, but the rest of the time things ran very smoothly (once things were worked out to compensate for uneven thermal expansion of the buildings). However, nobody had predicted the sun-induced building-leaning problem.

random refs:
https://www.garlic.com/~lynn/94.html#23
https://www.garlic.com/~lynn/2000c.html#65

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CS instruction, when introducted ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CS instruction, when introducted ?
Newsgroups: bit.listserv.ibm-main
Date: Tue, 22 May 2001 13:33:58 GMT
"GerardS" writes:
Not exactly a topical question, but when did IBM introduce the CS (compare and swap) instruction (what year)?

Were there any PRPQ instructions earlier than the CS instruction (concerning spin locks and such, but including SIGP)?

I have an IBM Reference Summary card dated November 1976 (GX20-1850-3) which has the CS and CDS instructions in it, so I assume it was earlier than that.

Gerard S.


CAS was done by C.A.S. at the cambridge science center (aka his initials) .... based on his work on fine-grain locking for multiprocessors in the late '60s and early '70s ... (a naming convention similar to GML ... precursor to SGML, HTML, XML, etc ... whose letters were the initials of three different people at CSC).

Trick was getting it adopted by 370 architecture in POK. The requirement given CSC & C.A.S. for inclusion into the 370 architecture was coming up with a uniprocessor application ... which spawned the programming notes for using the instruction to manage data in multi-threaded (but not necessarily multiprocessor) operation. In the process, word & double-word versions were generated ... changing the mnemonics to CS & CDS.

Now, the hard part ... I used to have the 370 POP where it first appeared ... possibly '73 (but it has been a long time; i vaguely remember some of the architecture meetings in POK and of course the CSC work; 158mp & 168mp were announced 3/73). I don't remember any PRPQ version ... it went directly into the 370 architecture once the multi-threaded scenario was invented. The precursor on the 360/67/65 multiprocessors was test & set ... TS.

random refs:
https://www.garlic.com/~lynn/93.html#14 S/360 addressing
https://www.garlic.com/~lynn/94.html#02 Register to Memory Swap
https://www.garlic.com/~lynn/94.html#28 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#45 SMP, Spin Locks and Serialized Access
https://www.garlic.com/~lynn/97.html#19 Why Mainframes?
https://www.garlic.com/~lynn/98.html#8 Old Vintage Operating Systems
https://www.garlic.com/~lynn/98.html#16 S/360 operating systems geneaology
https://www.garlic.com/~lynn/99.html#88 FIne-grained locking
https://www.garlic.com/~lynn/99.html#89 FIne-grained locking
https://www.garlic.com/~lynn/99.html#176 S/360 history
https://www.garlic.com/~lynn/99.html#203 Non-blocking synch
https://www.garlic.com/~lynn/2000e.html#25 Test and Set: Which architectures have indivisible instructions?
https://www.garlic.com/~lynn/2000g.html#16 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001b.html#33 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001b.html#35 John Mashey's greatest hits

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CS instruction, when introducted ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CS instruction, when introducted ?
Newsgroups: bit.listserv.ibm-main
Date: Tue, 22 May 2001 14:45:17 GMT
Anne & Lynn Wheeler writes:
CAS was done by C.A.S. at the cambridge science center (aka his initials) .... based on his work on fine grain locking for multiprocessors in the late '60s and early '70s ... (convention

... CS is the only mnemonic i know of that started out as somebody's initials, aka the original task started out with CAS's initials and then came up with a phrase that matched the initials.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Apology to Cloakware (open letter)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Apology to Cloakware (open letter)
Newsgroups: sci.crypt
Date: Tue, 22 May 2001 15:36:38 GMT
"Douglas A. Gwyn" writes:
Of course, without preventive maintenance, there will eventually be a much bigger catastrophe.

The causes of California's power crisis are explained very nicely in the past two issues of the Intellectual Activist. The story isn't simple, but in essence the problem was brought on by bad regulation and legislation. It could serve as a case study in the effects of rabid environmentalism.


i remember as a kid (out west) seeing the track maintenance crews going thru every summer ... replacing ties.

after graduation, i took a job on the east coast. there were ties you could stick your finger into. people complained that sections they remember as having train speed limits of 60-80 mph were down to 15. there was one section that was called the box-car grave yard where the freight train speed limit was 5 mph (and there were still regular derailments). It had been 20 years or more since the tracks had any PM.

i seem to remember reading an article a year or so ago in ?? (possibly SJMN) about how the california PUCC was doing a tremendous job for the consumer by preventing the electric companies from signing any long-term power supply contracts; the PUCC had forced the california companies into signing only short-term contracts (in effect forcing them into the spot market). This worked well when there was a large surplus on the spot market.

One problem was that it (at least) eliminated the incentive for power producers to build new power production facilities (in part, it is much easier to get construction loans &/or float bonds for major power plant construction projects if you have long-term contracts in hand that demonstrate demand for the new facilities).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Stoopidest Hardware Repair Call?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Stoopidest Hardware Repair Call?
Newsgroups: alt.folklore.computers
Date: Tue, 22 May 2001 18:18:38 GMT
Terry Kennedy writes:
Back in the days of the NSFnet, BBN used microwave for some of their Internet links. Being on the other end of the link from the useful parts of the Internet, we would experience high packet loss during rain/fog/ snow.

arpanet, csnet, nsfnet-I, nsfnet-II?

nsfnet-I backbone contract was merit, ibm, & mci. They used ibm PC/RTs as routers with special 440kbit/sec link adapter boards ... going into an IDNX switch that aggregated three 440kbit/sec links into a single T1 (1.544mbit) out thru MCI trunks. Most/much of the MCI backbone was terrestrial microwave (c-band), which is prone to some rain fade (but not nearly as bad as the Ku or Ka bands). The MCI relay stations were on the order of tens of miles apart (not hundreds of feet, as in the case of the infrared modem between two buildings with a highway between).

by happenstance, the buildings using the infrared modems were not all that far from one of the NSFNET-I backbone sites at NCAR. The backbone room had walls of shelves supporting the PC/RTs in pseudo "rack-mount".

For the infrared T1 modem application we had a multiplexor that devoted most of the T1 to data but split off a 56kbit side channel that could be dedicated to a pair of Fireberd bit-error-testers, which could be run constantly to monitor & record signal quality (bit error rate). I connected the Fireberd rs232 printer output port to an RS232 port on an ibm/pc and had written a turbo pascal application to log the fireberd output as well as generate various summary reports.

I don't know if it affects packet-loss ... but there is a "campus" T3 collins digital radio system in south san jose. One of the links has a relay tower on the hill above bldg. 90 that sights to roof of bldg. 12. I've noticed that a number of people get "hits" on their radar detectors when crossing close to the path of the signal on the elevated section of 85 (just north of the 85/101 intersection).

random ref:
https://www.garlic.com/~lynn/2000e.html#10
https://www.garlic.com/~lynn/internet.htm

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Apology to Cloakware (open letter)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Apology to Cloakware (open letter)
Newsgroups: sci.crypt
Date: Wed, 23 May 2001 00:31:26 GMT
jgfunj@vgrknf.arg (wtshaw) writes:
This all has to do with computer security, a requirement for good crypto; when we see less-than-inspired planning, dead-end programming makes patch after patch necessary, and the result is a shambles no one can effectively manage or depend on.

and/or at least good authentication and information security

random ref ... The thread between risk management and information security
https://www.garlic.com/~lynn/aepay3.htm#riskm
https://www.garlic.com/~lynn/aepay3.htm#riskaads

from above, the power thing still pales in comparison to the S&L.
""To date the last reported dollars I have seen for each one of us to perform our refunding of the banks and S&L's exceeds 100K per person. Whether you like it or not, in a rather benign interest rate environment you will pay over 100K in your lifetime of taxpayer dollars to pay for this bailout. The dollars are so high that they are carried as an off balance sheet number so as not to capsize the US budget or cause attention. At one point they exceeded $ 1 trillion. This is what I mean when I say that ALL of the moneys gained by individuals in the asset appreciation (real estate) of the 70' & the 80's went in one pocket and the pay-out of the costs for the S&L industry came out of the other. The result - a zero, if not negative, sum game. The horrifying part of all of this was that it happened over a very benign interest rate cycle. Institutions were toast overnight because of a short term rate spike. Today prevention and anticipation are the order of the day and the keys to good regulations.""

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

digital signature and certificates in xml

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: digital signature and certificates in xml
Newsgroups: comp.security.misc
Date: Wed, 23 May 2001 15:19:37 GMT
Andreas Krügel writes:
i am looking for information about the use of digital signatures and certificates in xml can anybody list urls

joint w3/ietf XML Digital Signatures


http://www.ietf.org/html.charters/xmldsig-charter.html
https://web.archive.org/web/20020202131149/http://www.ietf.org/html.charters/xmldsig-charter.html
http://www.w3.org/Signature/

somewhat related

http://www.w3.org/TR/NOTE-SDML/
http://www.w3.org/DSig/Overview.html

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

digital signature and certificates in xml

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: digital signature and certificates in xml
Newsgroups: comp.security.misc
Date: Wed, 23 May 2001 15:23:40 GMT
Anne & Lynn Wheeler writes:
somewhat related

http://www.w3.org/TR/NOTE-SDML/
http://www.w3.org/DSig/Overview.html


also ... attached gives an example of signed X9.59 payment object in tagged format (following somewhat the FSML/SDML model)

https://www.garlic.com/~lynn/8583flow.htm

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

The Mind of War: John Boyd and American Security

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: The Mind of War: John Boyd and American Security
Newsgroups: alt.folklore.military
Date: Wed, 23 May 2001 19:56:35 GMT
John Boyd's biography is out this month (some people have been waiting at least 20 years for this).

https://www.amazon.com/exec/obidos/ASIN/1560989416/qid%3D982672335/sr%3D1-2/ref%3Dsc%5Fb%5F2/107-1178249-9432548
http://search.borders.com/fcgi-bin/db2www/search/search.d2w/Details?&mediaType=Book&prodID=51998530
http://shop.barnesandnoble.com/booksearch/isbnInquiry.asp?userid=48AIUOE4N9&mscssid=HGJJJPL1NBL29GLAKTN2XNMDKHK59JM6&isbn=1560989416

some reviews:


http://www.belisarius.com/reviews_of_the_mind_of_war.htm
https://web.archive.org/web/20010606221622/http://www.belisarius.com/reviews_of_the_mind_of_war.htm

some of the above:
As a founder of the Military Reform caucus in Congress, I know John Boyd to be the true father of military reform in America. More than a biography, and a superb one, this book is a well deserved tribute to an extraordinary patriot who combined intellectual rigor, courage and moral purpose in remarkable degrees. Only in an age of self-promotion would this exceptionally strong, but quiet, man be so little known. Instead he deserves to be honored as a genuine hero. For that is what he was.

Former Senator Gary Hart (D, CO)

John Boyd was one of the greatest military and strategic thinkers of our time. His story and brilliance are superbly captured in Grant Hammond's excellent book. This is a must read for all those that want to understand how to think about war and to learn about a true American patriot.

Gen Anthony Zinni (USMC, Ret.)
Former CINC, USCENTCOM


random refs:
https://www.garlic.com/~lynn/subboyd.html#boyd

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Passwords

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Passwords
Newsgroups: bit.listserv.vmesa-l
Date: Wed, 23 May 2001 14:18:39 -0600
At 01:14 PM 5/23/2001 -0700, you wrote:
I've looked at several books today, and discussion in the VM Planning and Administration is high on NOLOG, AUTOONLY and LBYONLY, and high on the syntax of the userid, but the syntax of the password is pretty much missing. It says:

specifies a 1- to 8-character password that a user enters during the log on procedure.

I'd be willing to live without (space) and could be convinced on a couple other characters too, but given the detail of userid

alphabetics
A through Z
numerics
0 through 9
others
@ # $ _ (underscore) - (hyphen)

it looks to me like the docs think that it should accept lowercase alpha for password, plus lots of other characters.


of some possible interest
https://www.garlic.com/~lynn/2001d.html#52

some background
https://www.garlic.com/~lynn/2001d.html#61
https://www.garlic.com/~lynn/99.html#52

and a somewhat different solution to the authentication issue
https://www.garlic.com/~lynn/subpubkey.html#radius
https://www.garlic.com/~lynn/

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

The Mind of War: John Boyd and American Security

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Mind of War: John Boyd and American Security
Newsgroups: alt.folklore.military
Date: Thu, 24 May 2001 03:38:02 GMT
velovich@aol.com.CanDo (V-Man) writes:
Now, Hart might be a good judge of character, but heroes risk something - that is what makes them special. Boyd did NOT risk much by publishing his theories.

no, but ... small excerpt from the following ... nearly a whole lifetime of similar stories ... people attempting to court-martial him for one thing or another; getting him transferred to alaska; barred for life from the pentagon; etc. I was told a slightly different version of the following ... but it is representative.



http://radio-weblogs.com/0107127/stories/2002/12/23/genghisJohnChuckSpinneysBioOfJohnBoyd.html
The Air Force, being the Air Force, tried to court-martial Boyd for stealing the computer time but it could not come up with the evidence; in the end, investigators found only four hours of stolen time. When confronted, the Mad Major blew cigar smoke in the chief inspector's face and explained calmly how he had stolen the rest. He then showed the inspectors a thick file of letters, which documented how his requests for computer time had been refused repeatedly by the bean counters at Eglin and the autocrats at Wright Patterson. He suggested they call Headquarters, Tactical Air Command, and tell the Commanding General that Boyd was about to be hosed for uncovering better combat tactics.

with regard to the book's author ... Grant Hammond, Director Center on Strategy and Technology ...


Grant T. Hammond, Director
AU Center for Strategy and Technology
Air War College
325 Chennault Circle
Maxwell AFB
Montgomery, AL 36112
(334) 953-6996 (DSN 493-6996)
Email: Grant.Hammond@maxwell.af.mil


http://www.au.af.mil/au/awc/awcgate/awccsat.htm

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

The Mind of War: John Boyd and American Security

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Mind of War: John Boyd and American Security
Newsgroups: alt.folklore.military
Date: Thu, 24 May 2001 12:58:55 GMT
velovich@aol.com.CanDo (V-Man) writes:
Lookit... ALL I was trying to point out is that there are SOME WORDS that have, for specific reasons, a NEGATIVE IMPACT on otherwise intelligent people. "Patriot" is one of them.

well, in this case there is more than a little of it ... another ...

John Boyd was a remarkable patriot whose intense commitments to learning and teaching the lessons of history changed American military doctrine and made Desert Storm possible. This study is an invaluable contribution to a little known part of our military history.

Former Speaker of the House
Newt Gingrich (R, GA)


... i always saw him as trying to do the right thing ... and more often than one would like to think, getting condemned for it. that he accomplished as much as he did in his lifetime is a tribute not only to his tenacity but to the few people who knew what he was doing and supported him.

he sought no reward for it. i sponsored his talks a number of times and all he wanted in return was out-of-pocket expenses.

...

I worked with John Boyd, often by phone for 30 years. He was one of my mentors, and was always generous with his time. He had an enormous influence on all the services. We talked often before and after Desert Storm. He always had that fire in his gut and passion in his voice, even when he knew he was dying of cancer.

I consider him a great American and feel his loss. I often begged him to publish his briefing on war, but he said the USMC had done it for him as course notes. I am thrilled someone wrote a book about him; he deserves it.

A Toast to the Greatest Eagle.

Dr. Paul Berenson,
Former Scientific Advisor to SACEUR and the CG TRADOC


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

