List of Archived Posts

2005 Newsgroup Postings (12/04 - 12/27)

PGP Lame question
PGP Lame question
PGP Lame question
PGP Lame question
Fast action games on System/360+?
What ever happened to Tandem and NonStop OS ?
Fast action games on System/360+?
Reservation System Design
PGP Lame question
PGP Lame question
simple-minded historical "mips"
simple-minded historical "mips"
3vl 2vl and NULL
AMD to leave x86 behind?
AMD to leave x86 behind?
Fast action games on System/360+?
AMD to leave x86 behind?
What ever happened to Tandem and NonStop OS ?
XBOX 360
Identity and Access Management (IAM)
AMD to leave x86 behind?
3390-81
Channel Distances
Channel Distances
AMD to leave x86 behind?
Fast action games on System/360+?
RSA SecurID product
RSA SecurID product
Fast action games on System/360+?
AMD to leave x86 behind?
3vl 2vl and NULL
AMD to leave x86 behind?
AMD to leave x86 behind?
PGP Lame question
PGP Lame question
AMD to leave x86 behind?
Mainframe Applications and Records Keeping?
Mainframe Applications and Records Keeping?
Mainframe Applications and Records Keeping?
Mainframe Applications and Records Keeping?
POWER6 on zSeries?
Mainframe Applications and Records Keeping?
Mainframe Applications and Records Keeping?
POWER6 on zSeries?
POWER6 on zSeries?
IBM's POWER6
Channel Distances
The rise of the virtual machines
POWER6 on zSeries?
Channel Distances
Channel Distances
Channel Distances
OSI model and an interview
OSI model and an interview
OS/2 RIP
OSI model and an interview
OSI model and an interview
IPCS Standard Print Service
Command reference for VM/370 CMS Editor
Command reference for VM/370 CMS Editor
1970s data comms (UK)
DMV systems?

PGP Lame question

From: lynn@garlic.com
Subject: Re: PGP Lame question
Date: Sun, 04 Dec 2005 18:56:55 -0800
Newsgroups: sci.crypt
Ari Silverstein wrote:
You may only extract that the Sender is sending from a particular computer, or email account, but there is no authentication that says John Doe, age 50, of OnePersonTown, Vermont *was* the actual Sender who typed and released the email to the Recipient.

as per previous post
http://www.garlic.com/~lynn/2005t.html#52 PGP Lame question

from 3-factor authentication model
http://www.garlic.com/~lynn/subintegrity.html#3factor

any digital signature verification implies:

1) the message/document hasn't changed since the digital signature

2) something you have authentication, aka the sender has access to and use of the corresponding private key.

to the extent you can put some level of confidence in that authentication ... may require some certification of the integrity and protection surrounding both the private key and the digital signing operation; aka having certification that a private key is bound to a hardware token of known integrity characteristics ... and that possibly the digital signature operation involved a finread certified device (and, in fact, the finread certified device also co-signed the message)
http://www.garlic.com/~lynn/subintegrity.html#finread
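a toy sketch of those two properties (all names hypothetical; an hmac over a shared key stands in here for a real public-key digital signature, with the key playing the part of the private key):

```python
import hashlib
import hmac

# hypothetical stand-in for digital signature generation: in a real
# public-key scheme, signing uses the private key and verification
# uses the public key; the shared hmac key collapses both roles here
def sign(key: bytes, message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, signature: bytes) -> bool:
    # a match demonstrates (1) the message is unchanged since signing
    # and (2) the signer had access to the key ("something you have")
    return hmac.compare_digest(sign(key, message), signature)

key = b"hypothetical-signing-key"
msg = b"wire 100 to account 42"
sig = sign(key, msg)
```

verify(key, msg, sig) succeeds; altering even one byte of msg (or using a different key) makes it fail.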

one of the issues is to try and avoid polluting simple and straight-forward authentication operations with a horribly privacy-invasive, heavy-duty identification infrastructure. this also may stray into a completely different area from straight-forward authentication .... not only enormously heavy-duty and privacy-invasive identification operations ... aka confusing authentication and identification ... but also confusing authentication (digital signatures) with reading, understanding, agreeing, approving, and/or authorizing (human signatures). part of this may be semantic confusion ... with both the term digital signature and the term human signature containing the word signature.
http://www.garlic.com/~lynn/subpubkey.html#signature

PGP Lame question

From: lynn@garlic.com
Subject: Re: PGP Lame question
Date: Sun, 04 Dec 2005 19:32:29 -0800
Newsgroups: sci.crypt
vedaal wrote:
if the recipient trusts the keysigner's key, then there is authentication that the keysigner composed the message

what is *not* clear, is 'whom' the keysigner really sent the message to, as the first receiver can decrypt, separate, forward, and re-encrypt a verifiable signed message to 'anyone' else

the only hint that this is being done, is a discrepancy between the signing time and re-sending time, which can be minimized if the original receiver has this planned out and resends as soon as possible after receiving the original message


treat it as a form of replay attack ... i.e. the same signed contents being used in multiple different contexts. authentication isn't a countermeasure to replay attack. possibly the message being signed is a financial transaction ... say the x9.59 financial standard
http://www.garlic.com/~lynn/x959.html#x959
http://www.garlic.com/~lynn/subpubkey.html#x959

a typical countermeasure would be to include both the to-field and the date/time (or some other mechanism for differentiation) in the contents being signed.
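a rough sketch of that countermeasure (hypothetical names; an hmac again stands in for the digital signature) ... the to-field and timestamp are bound inside the signed bytes, so a copy relayed to any other recipient fails:

```python
import hashlib
import hmac
import json
import time

def make_signed_txn(key: bytes, payload: str, to_field: str):
    # bind recipient and date/time into the contents being signed
    body = {"to": to_field, "ts": int(time.time()), "payload": payload}
    raw = json.dumps(body, sort_keys=True).encode()
    return body, hmac.new(key, raw, hashlib.sha256).hexdigest()

def accept(key: bytes, body: dict, sig: str, my_identity: str,
           max_age_secs: int = 300) -> bool:
    raw = json.dumps(body, sort_keys=True).encode()
    good = hmac.new(key, raw, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(good, sig):
        return False                      # tampered or badly signed
    if body["to"] != my_identity:
        return False                      # re-sent to somebody else
    return int(time.time()) - body["ts"] <= max_age_secs
```

the first receiver can still forward a perfectly verifiable message, but the embedded to-field no longer matches the new recipient.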

note, the x9a10 working group was given the requirement that x9.59 preserve the integrity of the financial infrastructure for all retail transactions. one of the scenarios was the client originating an x9.59 transaction via email .... and having it work correctly/reliably in a single round-trip (client-to-merchant, merchant-to-financial-infrastructure ... and return).

PGP Lame question

From: lynn@garlic.com
Subject: Re: PGP Lame question
Date: Mon, 05 Dec 2005 11:34:35 -0800
Newsgroups: sci.crypt
for some slight topic drift ... thread in crypto mailing list on "broken ssl domain name trust model" where solution is to implement a webserver authentication paradigm along the lines of the pgp authentication model:
http://www.garlic.com/~lynn/aadsm21.htm#22 Broken SSL domain name trust model
http://www.garlic.com/~lynn/aadsm21.htm#23 Broken SSL domain name trust model
http://www.garlic.com/~lynn/aadsm21.htm#24 Broken SSL domain name trust model
http://www.garlic.com/~lynn/aadsm21.htm#25 Broken SSL domain name trust model

in PGP the client-side process of authentication of every message (between sender identifier and digital signature) and the process of establishing any trust between the identifier and any identification information meaningful to the user are separated out. the authentication process is done repeatedly and can be made automatic. the trust process is done much less frequently and the user is expected to be more involved.

in SSL domain name trust model ... two things have happened:

1) the authentication process and the trust process have been collapsed into a single process that happens for every SSL operation. it is cumbersome for the end-user to be involved in every such SSL operation ... and so both have been made automatic and therefore the user punts on ever being involved in the trust process

2) the identifier binding between authentication and trust is the domain name in the URL. not only has the trust operation for SSL been pushed below most individuals' awareness ... but the URLs themselves have become so ingrained in every operation ... that most URLs are disappearing from end-user awareness. as a result you can have perfectly valid URL/ssl trust & authentication activity going on ... with the URL, the domain name, and the certificate all being valid and provided by the attacker ... and the end-user still being compromised by a fraudulent site.

the issue in the original SSL trust model ... past postings describing working with a small client/server startup on applying their technology to doing server financial transactions
http://www.garlic.com/~lynn/aadsm5.htm#asrn2
http://www.garlic.com/~lynn/aadsm5.htm#asrn3

was that at least one component was provided by the end-user. however, as web and URL use evolved ... the end-user (in the vast majority of cases) is now not directly providing any of the pieces of the SSL domain name trust components ... they can all be provided by the attacker.

The PGP model applied to trusted webservers is along the line of secure bookmarks ... using the bookmark metaphor as the repository for public keys which have been associated with some identifier and for which the user has been involved in some piece of equating the identifier to some trust characteristic.

The PGP key repository, the secure bookmarks, and the existing browser repository of trusted certification authority public keys are all roughly equivalent implementation paradigms. the current browser repository of trusted certification authority public keys (the trust "root" for certificate validation) tends to be prebuilt by the browser manufacturer and users pay little or no attention to its existence.

PGP Lame question

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Date: 5 Dec 2005 13:20:32 -0800
Newsgroups: sci.crypt
Subject: Re: PGP Lame question
Will Dickson wrote:
Another consideration might be what Schneier calls the "Horton principle" - you sign what you mean (the plaintext) rather than what you say (the ciphertext). Further in many situations authenticity is more important than privacy, so better to have the encipherment protect the signature rather than vice-versa.

there is an example here in discussion about security proportional to risk
http://www.garlic.com/~lynn/2001h.html#61

one of the things recognized early by the x9a10 group (given the requirement to preserve the integrity of the financial infrastructure for all retail payments) in its work on the x9.59 standard
http://www.garlic.com/~lynn/x959.html#x959
http://www.garlic.com/~lynn/subpubkey.html#x959

was the enormous risks represented by data breaches and account fraud ... effectively since transaction logs were required by a large number of business processes, transaction fraud can be performed just by harvesting existing information (either via transaction skimming and/or transaction log data breaches).

x9.59 standard included two business rules:

1) x9.59 transactions have to be strongly authenticated
2) information skimmed/harvested from x9.59 transactions could not be used in non-authenticated/non-x9.59 transactions.
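a toy sketch of the effect of those two rules (hypothetical account data): an account flagged x9.59-only refuses anything not strongly authenticated, so account numbers harvested from transaction logs are useless by themselves:

```python
# hypothetical account records; the x959_only flag models business rule 2
ACCOUNTS = {
    "4217": {"x959_only": True},    # x9.59 account
    "9990": {"x959_only": False},   # legacy account
}

def authorize(account: str, strongly_authenticated: bool) -> bool:
    # rule 1: x9.59 transactions must be strongly authenticated
    # rule 2: info skimmed from x9.59 transactions can't be replayed
    #         in a non-authenticated transaction
    if ACCOUNTS[account]["x959_only"] and not strongly_authenticated:
        return False
    return True
```

the harvested data (the account number) still exists in the clear for business processes ... it just can no longer be used to originate a fraudulent transaction.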

this is my oft-repeated diatribe that even if the planet were buried under miles of crypto .... it still wouldn't be sufficient to stop fraudulent transactions from skimming and data breaches (in part because the transaction logs are still required to be in the clear for a large number of related business processes) .... while some simple authentication business rules ... make all the skimming and data breaches useless for performing fraudulent transactions.

misc. past postings on account fraud and skimming/harvesting
http://www.garlic.com/~lynn/subintegrity.html#harvest

Fast action games on System/360+?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fast action games on System/360+?
Newsgroups: alt.folklore.computers
Date: Tue, 06 Dec 2005 19:12:53 -0700
"WM" writes:
I programmed some interactive stuff on a 2250 vector display attached to a 360/44. About $3M worth of game machine.

I just drew a small box that you could drag with the lightpen to draw images. You could easily go too fast for the machine to keep up and the flicker as it calculated and refreshed the image was awful. Still, it was fun at 3 AM on a winter night in 1969.


2250m4 ... was a 2250+1130. spacewar was ported to the 2250m4/1130 at the science center (late 60s) ... my kids would periodically get to come in on weekends and play it. two-person ... the 2250 keyboard was divided in half and various keys controlled movement, firing, etc. functions.

the person responsible for rexx did a multi-user spacewar game (late summer, 1980) for vm/cms ... worked both locally and remotely (over the network) ... just had to have a server/demon (UMPIRE) running for connections. 3270 keys controlled various functions ... inbound to the server were individual controls ... outbound to all the players were 3270 screen updates of everything going on.

client -> server commands were relatively straight-forward. fairly early, somebody wrote a bot that would consistently beat real players (since it was reacting and sending commands faster than human players). the game was then modified so that there was a non-linear increase in energy consumption as the interval between commands dropped below a threshold (somewhat leveling the playing field between humans and bots).

a little bit of the (standard) client routine:


• PROCESS OPT(TIME),F(I),LMSG,MAR(2,72,1);
 /* MSG HANDLER MK 2 */
 MFF: PROC OPTIONS(MAIN) REORDER;
 /*********************************************************************/
    DCL RED BIT(1) STATIC INIT('0'B);     /* CONDITION RED */
    DCL YELLOW BIT(1) STATIC INIT('0'B);  /* CONDITION YELLOW */
    DCL PDIRE BIT(1)  STATIC INIT('0'B);  /* PENDING DIRECTION CHANGE */
    DCL DEAD  BIT(1)  STATIC INIT('0'B);  /* WE HAVE BEEN DESTROYED   */
    DCL NUMS CHAR(10) STATIC INIT('0123456789');
    DCL (I,J,K,L,M,N) FIXED BIN(15);      /* TEMPS  ... */
    DCL (II,JJ,KK) FIXED BIN(31);
    DCL (TI,TJ,TK) FLOAT;
    DCL (SUBSTR,ADDR,MAX,MIN,ABS,INDEX,COSD,SIND,ATAND,PLIRETV) BUILTIN;
    DCL (TIME                                                 ) BUILTIN;
    DCL C9 CHAR(9);

    DCL SMTIME   EXT ENTRY OPTIONS(ASSEMBLER INTER RETCODE);
    DCL TICK     FIXED BIN(31);                       /* CURRENT TIC */
    DCL SEVTIME  EXT FIXED BIN(31);                   /* EVENT CTR.  */
    DCL SMWAIT   EXT ENTRY OPTIONS(ASSEMBLER INTER RETCODE);
    DCL SMMSG    EXT ENTRY OPTIONS(ASSEMBLER INTER RETCODE);
    DCL SEVMSG   EXT FIXED BIN(31);    /* MESSAGE EVENT COUNTER      */

    DCL 1 PARM STATIC,                 /* PARAMETER STRUCTURE         */
         2 MSGHANDLE CHAR(2),          /* TWO CHARACTER CODE: SEND    */
                                       /*   MSGS WITH THIS CODE.      */
                                       /* ALSO, WHEN 'OPEN', ONLY     */
                                       /*   HANDLE MSGS WITH THIS.    */

         2 MSGHEADER,                  /* UNIQUELY IDENTIFIES MESSAGE */
                                       /*   SOURCE/DESTINATION        */
            3 MSGTYPE CHAR(2),         /* 'L ' OR 'R ' (LOCAL/REMOTE) */
            3 USERID  CHAR(9),         /* USERID + ' '                */
            3 NODEID  CHAR(9),         /* NODEID + ' ' IF REMOTE      */

         2 MSGLENGTH FIXED BIN(15),    /* LENGTH OF DATA              */
         2 MSGDATA CHAR(130),          /* DATA FIELD OF MESSAGE       */
         2 FENCE CHAR(1);
    FENCE=' ';

    DCL MSGVAR CHAR(130) VAR BASED(ADDR(MSGLENGTH));

 DCL SMFSD EXT ENTRY OPTIONS(ASSEMBLER,INTER);
 DCL SEVCON EXT FIXED BIN(31);
    DCL LINE(24) CHAR(80);
    DCL 1 FPARM STATIC,
        2 ADDRESS FIXED BIN(15),     /* SCREEN ADDRESS: USE -1 FOR  */
                                     /*   VIRTUAL CONSOLE           */
        2 ROW     FIXED BIN(15),     /* CURSOR ROW                  */
        2 COL     FIXED BIN(15),     /* CURSOR COLUMN               */
        2 ATTN    CHAR(4),           /* 'PF1 ' ETC                  */
        2 WRITEFLAGS,
           3 CLEAR  BIT(1),          /* CLEAR BEFORE WRITE          */
           3 ALARM  BIT(1),          /* SOUND ALARM                 */
           3 LOCK   BIT(1),          /* LEAVE KEYBOARD LOCKED       */
           3 EXWCC  BIT(1),          /* EXPLICIT WCC                */
           3 PAD    BIT(4),
        2 READFLAGS,
           3 RDATTN BIT(1),          /* READ ATTENTION ONLY         */
           3 RDCURS BIT(1),          /* READ ATTN+CURSOR ONLY       */
           3 RDIMM  BIT(1),          /* READ IMMEDIATE              */
           3 RDUC   BIT(1),          /* READ WITH UPPERCASE TR      */
           3 RDMAP  BIT(1),          /* MAP PFK'S                   */
           3 RDMOD  BIT(1),          /* USE READ MODIFIED           */
           3 PAD    BIT(2);

 ADDRESS=-1; ROW=24; COL=80; ATTN='???';
 READFLAGS ='0'B;
 WRITEFLAGS='0'B;
 CLEAR='1'B; RDATTN='1'B; RDMAP='1'B;

... snip ...

misc. past postings mentioning spacewar.
http://www.garlic.com/~lynn/2000b.html#67 oddly portable machines
http://www.garlic.com/~lynn/2000g.html#24 A question for you old guys -- IBM 1130 information
http://www.garlic.com/~lynn/2001b.html#71 Z/90, S/390, 370/ESA (slightly off topic)
http://www.garlic.com/~lynn/2001f.html#10 5-player Spacewar?
http://www.garlic.com/~lynn/2001f.html#12 5-player Spacewar?
http://www.garlic.com/~lynn/2001f.html#13 5-player Spacewar?
http://www.garlic.com/~lynn/2001f.html#14 5-player Spacewar?
http://www.garlic.com/~lynn/2001f.html#51 Logo (was Re: 5-player Spacewar?)
http://www.garlic.com/~lynn/2001h.html#8 VM: checking some myths.
http://www.garlic.com/~lynn/2001j.html#26 Help needed on conversion from VM to OS390
http://www.garlic.com/~lynn/2002i.html#20 6600 Console was Re: CDC6600 - just how powerful a machine was
http://www.garlic.com/~lynn/2002j.html#22 Computer Terminal Design Over the Years
http://www.garlic.com/~lynn/2002o.html#17 PLX
http://www.garlic.com/~lynn/2002p.html#29 Vector display systems
http://www.garlic.com/~lynn/2003c.html#0 Wanted: Weird Programming Language
http://www.garlic.com/~lynn/2003c.html#72 OT: One for the historians - 360/91
http://www.garlic.com/~lynn/2003d.html#38 The PDP-1 - games machine?
http://www.garlic.com/~lynn/2003f.html#39 1130 Games WAS Re: Any DEC 340 Display System Doco ?
http://www.garlic.com/~lynn/2003i.html#27 instant messaging
http://www.garlic.com/~lynn/2003m.html#14 Seven of Nine
http://www.garlic.com/~lynn/2003o.html#10 IS CP/M an OS?
http://www.garlic.com/~lynn/2004c.html#34 Playing games in mainframe
http://www.garlic.com/~lynn/2004d.html#45 who were the original fortran installations?
http://www.garlic.com/~lynn/2004f.html#32 Usenet invented 30 years ago by a Swede?
http://www.garlic.com/~lynn/2004m.html#20 Whatever happened to IBM's VM PC software?
http://www.garlic.com/~lynn/2004n.html#8 RISCs too close to hardware?
http://www.garlic.com/~lynn/2005e.html#64 Graphics on the IBM 2260?
http://www.garlic.com/~lynn/2005k.html#22 Where should the type information be?
http://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/97.html#2 IBM 1130 (was Re: IBM 7090--used for business or science?)

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

What ever happened to Tandem and NonStop OS ?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What ever happened to Tandem and NonStop OS ?
Newsgroups: alt.folklore.computers
Date: Wed, 07 Dec 2005 10:31:28 -0700
"Don Chiasson" writes:
In 1995, Brooks published a 20th anniversary edition of "The Mythical Man Month." Chapter 19, "The Mythical Man Month after 20 years" page 264 says, "Don't Build One to Throw One Away - The Waterfall Model is Wrong." His point is that he assumed development was a classical sequential model. In reality, software development is an iterative process.

Sometimes you may throw things away, but generally a project goes through stages of the process several times. In 1975, people believed the waterfall model was correct, and it was enshrined in Defence department specifications. It doesn't work that way any more: iterative is better.


after 360 ... the early 70s was going to be FS ... which was radically different than 360.
http://www.garlic.com/~lynn/submain.html#futuresys

I didn't endear myself to those folks by 1) drawing an analogy between the project and a cult film that had been playing continuously down in central sq. ... and 2) claiming that what i had running (for the resource manager) was already better than the resource manager that they had defined for FS. I also may have cast aspersions on how many of them had experience building, deploying, supporting, and day-to-day operation of a real-world system.

a corollary to the iterative process is that experience counts ... aka experience from prior projects but also experience from the iterations. of course this is also classical boyd
http://www.garlic.com/~lynn/subboyd.html#boyd
http://www.garlic.com/~lynn/subboyd.html#boyd2

and OODA-loops.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Fast action games on System/360+?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fast action games on System/360+?
Newsgroups: alt.folklore.computers
Date: Wed, 07 Dec 2005 10:36:03 -0700
hancock4 writes:
Part of the response time lag was communications--comm lines were slower and there were intermediate controller and multiplex boxes along the way. Most production environments had as many terminals hooked up as hardware would permit so there was contention from other transactions. Mainframes had little spare capacity since every increment in CPU speed or memory was expensive. I guess it wasn't until around 1985-1990 that mainframe hardware became cheap enough that loose extra capacity was affordable; before then one just couldn't afford to waste hardware costs.

Now mini computers and other mainframe brands--with a different architecture design--were another story and many could do fancy online things (that's why I limited this thread to S/360 architecture).


how many remember the big ge/calma machines? a couple past refs:
http://www.garlic.com/~lynn/2002g.html#55 Multics hardware (was Re: "Soul of a New Machine" Computer?)
http://www.garlic.com/~lynn/2005r.html#24 What ever happened to Tandem and NonStop OS ?

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Reservation System Design

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Reservation System Design
Newsgroups: comp.databases.theory
Date: Wed, 07 Dec 2005 10:43:00 -0700
Gary writes:
I have posted a similar request on comp.databases.

I need some ideas on a mySQL database design for a reservation system, much like a doctor's office (hopefully without the long wait).

My client has 12 work areas. She wants her clients to be able to sign up for sessions at these areas on-line. The clients will sign up for either 1.5 or 2 hours, and she only wants 4 people to be able to start every half hour.


i have recollections of somebody trying to write a calendar/scheduling application in the early 80s ... for the conference rooms in bldg. 28 (before research moved up the hill to the new almaden bldg). possibly somebody in the R-star group ... aka the original relational/sql system/r project
http://www.garlic.com/~lynn/submain.html#systemr

had evolved into r-star by that time ... then i think starburst followed r-star.

in any case, it wasn't pretty.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

PGP Lame question

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PGP Lame question
Newsgroups: sci.crypt
Date: Wed, 07 Dec 2005 12:22:12 -0700
Unruh <unruh-spam@physics.ubc.ca> writes:
He has a signed confession anyway even if she signed afterwards. He gives the press a) the signed encrypted email. and b) the session key to unencrypt the message. This is the session key, which is a non-reused random number so does NOT reveal any of his own secrets. The press can then decrypt the email and since it is signed they have her "signed confession".

so there has been this thing for various payment protocols ... being able to sign anonymously to demonstrate that it was a valid coin ... but not tell who signed it.

this is also one of the issues in some EU standards. at one point EU made the statement (in the spirit of the EU data privacy directive) that all point-of-sale, retail electronic payments should be as anonymous as cash ... while at the same time dictating that all digitally signed transactions are required to have an appended x.509 identity certificate. in x9.59, it is possible to have a digitally signed transaction, a public key on file at the consumer's financial institution, and no requirement for appending x.509 identity certificates to every point-of-sale, retail transaction.
http://www.garlic.com/~lynn/x959.html#x959
http://www.garlic.com/~lynn/subpubkey.html#x959

one could imagine that the conflicting eu directions of privacy everywhere and x.509 identity certificates for everything digitally signed might be resolved by forbidding retail point-of-sale transactions to be implemented with digital signatures (w/o having to qualify/modify any past directions).

in contrast, x9.59 standard just claims to be privacy agnostic ... the financial transaction is bound to an account ... leaving no direct identity information laying around at retail, point-of-sale. to the extent any identification may exist at all is whether or not anonymous accounts are allowed.

in some sense, public key authentication for domain names is what has broken the ssl domain name trust model. the enduser/client was supposed to provide the url/domain name ... and the server was supposed to authenticate with a digital signature and a valid digital certificate for that domain name. as URLs are pushed into the infrastructure ... away from direct enduser awareness, the attackers provide both the URL and the certificate ... and assert some binding to some construct the enduser does know about. the public key and certificate only provide the binding to the URL .... and nothing is left proving any binding between the URL and any external construct that the enduser is aware of (somewhat analogous to proving that gold is valuable and then asserting that what i'm giving you is gold ... and therefore what you are getting is valuable; w/o having to prove what you received was actually gold).

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

PGP Lame question

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PGP Lame question
Newsgroups: sci.crypt
Date: Wed, 07 Dec 2005 13:21:09 -0700
Sebastian Gottschalk writes:
Well, wrong. That's what the job of Root CAs is - to confidently declare that the certificate is bound to a certain institution. Assuming that the Root CA does proper verification, which is wrong for almost any.

ref:
http://www.garlic.com/~lynn/2005u.html#8 PGP Lame question
also
http://www.garlic.com/~lynn/aadsm21.htm#22 Broken SSL domain name trust model
http://www.garlic.com/~lynn/aadsm21.htm#23 Broken SSL domain name trust model
http://www.garlic.com/~lynn/aadsm21.htm#24 Broken SSL domain name trust model
http://www.garlic.com/~lynn/aadsm21.htm#25 Broken SSL domain name trust model
http://www.garlic.com/~lynn/2005u.html#0 PGP Lame question
http://www.garlic.com/~lynn/2005u.html#1 PGP Lame question
http://www.garlic.com/~lynn/2005u.html#2 PGP Lame question
http://www.garlic.com/~lynn/2005u.html#3 PGP Lame question

....

back when we were asked to work with this small, new client/server startup that wanted to do payments on their server ... they had this technology they were calling ssl ... the stuff has since come to be called e-commerce
http://www.garlic.com/~lynn/aadsm5.htm#asrn2
http://www.garlic.com/~lynn/aadsm5.htm#asrn3

one of the things we had to do was a detailed technical/business audit of some number of these new organizations calling themselves certification authorities ... that would be providing these things called domain name digital certificates. ... some number of past postings on ssl digital certificates, certification authorities ... and even some early postings referring to them as merchant "comfort" certificates (i.e. they provided a sense of comfort).
http://www.garlic.com/~lynn/subpubkey.html#sslcert

an oft repeated tale in numerous past postings ... including this one
http://www.garlic.com/~lynn/2005t.html#34 RSA SecurID product

basically somebody applies to a certification authority for a domain name certificate and provides some amount of identification information. the certification authority then goes to the domain name infrastructure authority and cross-checks the supplied identification information with the identification information on file with the domain name infrastructure as to the owner of the domain name. if they match, then supposedly the certification authority issues a domain name digital certificate ... which represents that they have certified having checked the supplied identification information against the domain name owner identification information on file with the domain name infrastructure.

now there are some integrity issues with the domain name infrastructure ... some of which were the original motivation for having ssl domain name certificates. somewhat with the backing of the certification authority industry ... there are some proposals to improve the integrity of the domain name infrastructure ... especially with regard to who is the owner of a domain. one of the proposals is to have domain name owners register their public key with the domain name infrastructure ... and all future communication is then digitally signed (and verified with the on-file public key).

this would eliminate some number of vulnerabilities in the domain name infrastructure, improving the integrity of the information that the certification authorities are certifying (aka the real trust root for ssl domain name certificates isn't the certification authorities ... but the source of the information that they are certifying).

it also provides an opportunity for the certification authorities to replace a time-consuming, error-prone, and expensive identification process with a much simpler, more reliable, and less expensive authentication process; they require all ssl domain name certificate requests to be digitally signed, and the certification authorities then can retrieve the on-file public key from the domain name infrastructure for verifying the digital signature (in lieu of retrieving the domain name owner identification information from the domain name infrastructure for matching with the applicant's identification information).

however, there are a couple of catch-22s for the certification authority industry. integrity issues with the domain name infrastructure were some of the original motivation for having ssl domain name certificates. improving the integrity of the domain name infrastructure eliminates those justifications for having ssl domain name certificates. the other issue is that if certification authorities can validate digital signatures with on-file public keys retrieved in realtime from the domain name infrastructure ... possibly everybody could do realtime retrieval of on-file public keys for digital signature verification ... eliminating the need for ssl domain name certificates altogether.

in fact, you could imagine a super-lightweight transaction protocol. the client does the normal domain name lookup (name-to-ipaddress) with the domain name infrastructure ... and what is returned is not only the ipaddress but, piggy-backed on the same response, any on-file public key (and possibly other on-file relevant information). the client then generates a random session key, encrypts the transaction with the random session key, encodes the session key with the retrieved public key, and sends it off ... secure transactions handled in a single round trip.
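a toy sketch of that single round trip (everything here is hypothetical: DIRECTORY stands in for the domain name infrastructure, random bytes stand in for the on-file public key, and the xor keystream is a stand-in for real session-key encryption and public-key key-wrapping ... not actual cryptography):

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # toy cipher: keystream blocks are sha256(key || counter), xor'ed in
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# one lookup returns both the ipaddress and the on-file "public key"
DIRECTORY = {"merchant.example": ("10.0.0.7", os.urandom(32))}

def client_send(name: str, txn: bytes):
    ipaddr, onfile_key = DIRECTORY[name]          # piggy-backed response
    session_key = os.urandom(32)                  # fresh random session key
    wrapped = keystream_xor(onfile_key, session_key)   # toy key-wrap
    ciphertext = keystream_xor(session_key, txn)
    return ipaddr, wrapped, ciphertext            # sent in one round trip

def server_recv(onfile_key: bytes, wrapped: bytes, ciphertext: bytes) -> bytes:
    session_key = keystream_xor(onfile_key, wrapped)
    return keystream_xor(session_key, ciphertext)
```

the shape is the point: one lookup, one message, no certificate exchange ... the on-file public key does the work a certificate would otherwise do.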

one of the current big phishing vulnerabilities is that attackers create a perfectly valid corporate entity and web site. they then apply for and get an ssl domain name certificate for the web site. the website is then created to look like an exact clone of some other operation (known to victim clients). the attacker then sends the victims an organization name to click on ... that invokes their URL. the problem is there is no proof of the binding between the organization name and the URL supplied by the attacker. However, the attacker can supply a perfectly valid ssl domain name certificate that exactly corresponds to the URL that they supplied to the victim.

this could even be done as a man-in-the-middle attack
http://www.garlic.com/~lynn/subintegrity.html#mitm

where the attacker actually has very little on their own server, except some portal code that provides a go-between, snarfing data as it impersonates both the real server and the real client.

the issue is that what is being certified is technical nuts and bolts in the guts of the web infrastructure ... something that various usability enhancements are attempting to hide completely from endusers ... that was part of what motivated my early postings on merchant comfort certificates ... what is meaningful to endusers and what is meaningful to technical gurus are not the same. the technical gurus are certifying stuff that has less and less meaning to endusers (they don't care that they are really talking to the real owner of the domain name listed in a URL ... which they never pay any attention to).

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

simple-minded historical "mips"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: simple-minded historical "mips"
Newsgroups: alt.folklore.computers
Date: Thu, 08 Dec 2005 09:43:25 -0700
paul c writes:
Not according to the mod 30 func char that I'm looking at, but that's what he said and I remember confirming it somewhere, maybe it was another model (at the moment, I can't seem to see the mod 40 manual because of some strange behaviour of the Adobe plugin that I don't understand).

360/65 had double-word fetch in 750ns ... instruction timings prorated instruction fetch as portion of the double-word 750ns. so bctr 2-byte instruction had 1/4th of 750ns added to the processing. bct 4-byte instruction had 1/2 of 750ns added to the processing.

many of the slower 360s had 2-byte fetch in a slower memory cycle ... i.e. bctr would have one memory-cycle fetch for the 2-byte instruction, while bct would have two memory-cycle fetches for the 4-byte instruction (over and above the basic instruction processing time).

bct is an rx instruction ... it has to take the base register and do an add of the 12-bit displacement to compute the target address (in addition to actually doing the register decrement and branch). bctr just has to do the register decrement and branch to the target address in the register (it doesn't have to do the register+displacement add to obtain the target address).
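
The prorated-fetch arithmetic above can be checked in a couple of lines. The 750ns double-word fetch time is from the text; the function and names are just for illustration, and the base execution time of each instruction is deliberately left out:

```python
# 360/65-style prorated instruction fetch: a double-word (8-byte) fetch
# takes 750ns, and an instruction is charged the fraction of that fetch
# corresponding to its own length in bytes.
DW_FETCH_NS = 750.0

def prorated_fetch_ns(instr_len_bytes: int) -> float:
    return DW_FETCH_NS * instr_len_bytes / 8

bctr_fetch = prorated_fetch_ns(2)   # 2-byte bctr: 1/4 of 750ns
bct_fetch = prorated_fetch_ns(4)    # 4-byte bct: 1/2 of 750ns
```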

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

simple-minded historical "mips"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: simple-minded historical "mips"
Newsgroups: alt.folklore.computers
Date: Thu, 08 Dec 2005 09:51:40 -0700
Anne & Lynn Wheeler writes:
360/65 had double-word fetch in 750ns ... instruction timings prorated instruction fetch as portion of the double-word 750ns. so bctr 2-byte instruction had 1/4th of 750ns added to the processing. bct 4-byte instruction had 1/2 of 750ns added to the processing.

if you aligned bctr & bct on double-word and the instruction stream didn't nominally use the rest of the double word (except in the fall-thru case) ... then the instruction fetch of bct & bctr in actual measurements would be identical ... same would be true of 4-byte fetch machines.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

3vl 2vl and NULL

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 3vl 2vl and  NULL
Newsgroups: comp.databases.theory
Date: Thu, 08 Dec 2005 10:45:48 -0700
"David Cressey" writes:
I've stayed away from the 3vl versus 2vl discussion over in the monster NULL thread. Mainly, I'm not sure exactly what Codd and Date have to say on the subject. I think that both of them are smarter than I am, and that they don't agree on the subject, and that they agree that they disagree. That pretty much leaves it up to us, doesn't it?

posts early in the null thread
http://www.garlic.com/~lynn/2005t.html#20 So what's null then if it's not nothing?
http://www.garlic.com/~lynn/2005t.html#23 So what's null then if it's not nothing?

referring to a post in an earlier thread on same subject
http://www.garlic.com/~lynn/2003g.html#40 How to cope with missing values - NULLS?

quoting a Date article from 1992: An Explanation of why three-valued logic is a mistake (Why Accept Wrong Answers?)

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

AMD to leave x86 behind?

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: AMD to leave x86 behind?
Newsgroups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch
Date: Thu, 08 Dec 2005 17:13:14 -0700
Scott A Crosby writes:
Both schemes are not perfect, but I think that chip&pin is probably more secure than paying for small items with a personal check, which can be stolen as you point out. However, it seems that people are treating chip&pin as if it is infallible, just as they once treated ATM's as infallible. Implementation and administrative mistakes will be made (see 'When Cryptosystems Fail') and I'd hate to be one of the people who experiences such a problem, if everyone is claiming that the computer system cannot make a mistake.

reference to slightly related aspect in a different thread:
http://www.garlic.com/~lynn/aadsm21.htm#27 X.509 / PKI, PGP, and IBE Secure Email Technologies
http://www.garlic.com/~lynn/aadsm21.htm#28 X.509 / PKI, PGP, and IBE Secure Email Technologies

slightly ...

An Invitation to Steal; The more you automate your critical business processes, the more vigilant you need to be about protecting against fraud
http://www.cio.com.au/index.php/id;1031341633;fp;4;fpid;18

from 3-factor authentication model
http://www.garlic.com/~lynn/subintegrity.html#3factor

something you have
something you know
something you are

the cards have been "something you have" authentication and presumably unique. PINs have been considered "something you know" authentication and, in conjunction with cards, are a countermeasure to lost/stolen cards. furthermore, multi-factor authentication is typically considered to have higher integrity because the factors presumably are subject to different threats/vulnerabilities.

one of the issues with old-fashion magstripe atm cards ... is that there have been advances in technology where skimming attacks record both the contents of the magstripe and the pin (a common vulnerability) and make it relatively straightforward to produce counterfeit cards where the pin is also known.

supposedly the benefit of chip&pin is that it is harder to skim & counterfeit chips ... than it is to skim and counterfeit magstripes.

some of the early chip&pin were designed to be a countermeasure against lost/stolen magstripe cards ... where the chip card authentication material was significantly more secure than the authentication material on magstripes. however, they had overlooked skimming threats/attacks and turned out to be as vulnerable to skimming as magstripe cards were.

some of these compromises made it into the european press as yes cards ... i.e. a counterfeit chip was loaded with skimmed information and the counterfeit chip claimed a) any entered PIN was correct, b) all transactions were to be offline, c) all transaction values were within the account limit (hence, yes card).
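
The three "yes" answers can be caricatured in a few lines (purely illustrative; the query names and function are invented, not any real EMV interface):

```python
# Caricature of a "yes card": a counterfeit chip loaded with skimmed
# account data that answers whatever keeps the transaction offline
# and approved, regardless of the PIN or the amount.
def yes_card_respond(query: str, value=None) -> bool:
    if query == "verify_pin":          # a) any entered PIN is "correct"
        return True
    if query == "go_online":           # b) always insist on offline approval
        return False
    if query == "within_floor_limit":  # c) any amount is "within limit"
        return True
    raise ValueError("unknown terminal query: " + query)
```

Since the terminal trusts the chip's answers, skimmed static data plus this trivial logic was enough until the infrastructure was changed.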

there was then some evolution to make the chip&pin infrastructure more resistant to skimming vulnerabilities (so that simple skimmed information couldn't be loaded into a counterfeit chip and accepted as valid).

misc. past mention of yes cards:
http://www.garlic.com/~lynn/aadsm15.htm#25 WYTM?
http://www.garlic.com/~lynn/aadsm17.htm#13 A combined EMV and ID card
http://www.garlic.com/~lynn/aadsm17.htm#25 Single Identity. Was: PKI International Consortium
http://www.garlic.com/~lynn/aadsm17.htm#42 Article on passwords in Wired News
http://www.garlic.com/~lynn/aadsm18.htm#20 RPOW - Reusable Proofs of Work
http://www.garlic.com/~lynn/2003o.html#37 Security of Oyster Cards
http://www.garlic.com/~lynn/2004g.html#45 command line switches [Re: [REALLY OT!] Overuse of symbolic constants]
http://www.garlic.com/~lynn/2004j.html#12 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of ASCII,Invento
http://www.garlic.com/~lynn/2004j.html#13 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of ASCII,Invento
http://www.garlic.com/~lynn/2004j.html#14 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of ASCII,Invento
http://www.garlic.com/~lynn/2004j.html#35 A quote from Crypto-Gram
http://www.garlic.com/~lynn/2004j.html#39 Methods of payment
http://www.garlic.com/~lynn/2004j.html#44 Methods of payment

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

AMD to leave x86 behind?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: AMD to leave x86 behind?
Newsgroups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch
Date: Thu, 08 Dec 2005 19:25:30 -0700
"Andy Freeman" writes:
My bank isn't imposing the fee. The fee that Handley is referring to is imposed by the merchant. Yes, they add it to the order cost, so unless the bank provides a rebate, there's a fee for using an ATM card.

banks charge all sorts of fees for just about everything they do. for consumers, some of the fees are effectively waived ... and the banks have to cover the cost of that business some other way. they are less kind to businesses; businesses get charged for pin-debit transactions, signature-debit transactions (which are a lot more like credit), credit transactions (these fees typically show up as the "merchant discount fee" ... the difference between the amount charged against the consumer's account and the amount that the financial institution actually credits to the merchant), and even handling cash. someplace i ran across that fast-food restaurants have something like 7 percent shrinkage on cash between the till and what they net (employee pilferage, time/cost of processing cash, the fees banks charge for handling cash, etc).

i don't know the answer ... but i've wondered whether merchants actually make out on cash-back pin-debit transactions (i.e. whether the pin-debit fee they are charged is less than what they would be charged for depositing the cash in the bank).

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Fast action games on System/360+?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fast action games on System/360+?
Newsgroups: alt.folklore.computers
Date: Fri, 09 Dec 2005 10:02:53 -0700
Steve O'Hara-Smith writes:
I first played ADVENYURE on the 370 at Cambridge - I believe it was the real thing not a clone.

misc. past adventure posts
http://www.garlic.com/~lynn/98.html#56 Earliest memories of "Adventure" & "Trek"
http://www.garlic.com/~lynn/99.html#52 Enter fonts (was Re: Unix case-sensitivity: how did it originate?
http://www.garlic.com/~lynn/99.html#83 "Adventure" (early '80s) who wrote it?
http://www.garlic.com/~lynn/99.html#84 "Adventure" (early '80s) who wrote it?
http://www.garlic.com/~lynn/99.html#169 Crowther (pre-Woods) "Colossal Cave"
http://www.garlic.com/~lynn/2000b.html#72 Microsoft boss warns breakup could worsen virus problem
http://www.garlic.com/~lynn/2000d.html#33 Adventure Games (Was: Navy orders supercomputer)
http://www.garlic.com/~lynn/2001m.html#14 adventure ... nearly 20 years
http://www.garlic.com/~lynn/2001m.html#44 Call for folklore - was Re: So it's cyclical.
http://www.garlic.com/~lynn/2001n.html#0 TSS/360
http://www.garlic.com/~lynn/2002d.html#12 Mainframers: Take back the light (spotlight, that is)
http://www.garlic.com/~lynn/2002e.html#43 Hardest Mistake in Comp Arch to Fix
http://www.garlic.com/~lynn/2002m.html#57 The next big things that weren't
http://www.garlic.com/~lynn/2003f.html#46 Any DEC 340 Display System Doco ?
http://www.garlic.com/~lynn/2003i.html#66 TGV in the USA?
http://www.garlic.com/~lynn/2003i.html#69 IBM system 370
http://www.garlic.com/~lynn/2003l.html#40 The real history of computer architecture: the short form
http://www.garlic.com/~lynn/2004c.html#34 Playing games in mainframe
http://www.garlic.com/~lynn/2004f.html#57 Text Adventures (which computer was first?)
http://www.garlic.com/~lynn/2004g.html#2 Text Adventures (which computer was first?)
http://www.garlic.com/~lynn/2004g.html#7 Text Adventures (which computer was first?)
http://www.garlic.com/~lynn/2004g.html#49 Adventure game (was:PL/? History (was Hercules))
http://www.garlic.com/~lynn/2004g.html#57 Adventure game (was:PL/? History (was Hercules))
http://www.garlic.com/~lynn/2004h.html#0 Adventure game (was:PL/? History (was Hercules))
http://www.garlic.com/~lynn/2004h.html#1 Adventure game (was:PL/? History (was Hercules))
http://www.garlic.com/~lynn/2004h.html#2 Adventure game (was:PL/? History (was Hercules))
http://www.garlic.com/~lynn/2004h.html#4 Adventure game (was:PL/? History (was Hercules))
http://www.garlic.com/~lynn/2004k.html#38 Adventure
http://www.garlic.com/~lynn/2005c.html#45 History of performance counters
http://www.garlic.com/~lynn/2005h.html#38 Systems Programming for 8 Year-olds
http://www.garlic.com/~lynn/2005k.html#18 Question about Dungeon game on the PDP
http://www.garlic.com/~lynn/2005k.html#41 Title screen for HLA Adventure? Need help designing one
http://www.garlic.com/~lynn/2005l.html#16 Newsgroups (Was Another OS/390 to z/OS 1.4 migration

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

AMD to leave x86 behind?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: AMD to leave x86 behind?
Newsgroups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch
Date: Fri, 09 Dec 2005 11:04:43 -0700
"Dean Kent" writes:
If VISA or MC are involved, there is a percentage from my experience. It sounds like the debit system in Norway is not 'controlled' by MC or VISA, but the banks themselves. This sounds like the difference that Maynard Handley was talking about with ARCO (gas station) which charges a transaction fee of 75 cents (bank ATM transaction) vs. using it at a merchant that accepts debit cards (VISA/MC transaction).

one of the few pages that discuss the merchant (credit card) charges/discounts for various components:
http://www.infomerchant.net/merchantaccounts/comparison.html

discussion of the class-action suit mentioned in the mentioned federal reserve report:
http://payingwithplastic.org/printVersion.cfm?aid=690

Board of Governors of the Federal Reserve System; Report to the Congress on the Disclosure of Point-of-Sale Debit Fees
http://www.federalreserve.gov/boarddocs/rptcongress/posdebit2004.pdf

discusses both consumer debit fees and merchant debit fees. from above:
Fees for Accepting Debit and Credit Transactions

The monetary cost of accepting debit or credit transactions is called the merchant discount, which is the difference between the face value of the retail transaction and the amount the merchant acquirer transfers back to the merchant after settling the debit or credit transaction. The exact amount of the merchant discount varies by firm and is generally considered proprietary information. Merchants may also pay their acquirers periodic contracting fees as well as the cost of installing and maintaining terminals.

The bulk of the merchant discount is paid to the card-issuing institution in the form of the interchange fee, the amount the merchant acquirer must pay the card-issuing depository institution for each debit transaction. Although the interchange fee is paid to the depository institution, it is set by the EFT network. These fees apply not only to debit transactions but also to credit transactions. Thus, Visa and MasterCard (or their respective PIN debit networks) set interchange fees for their credit card, signature debit, and PIN debit operations.

The pricing structures typically set for PIN interchange differ from those set for signature debit and credit card interchange. PIN interchange fees are either a fixed amount or a percentage of the transaction, capped at a fixed value. In contrast, fees for signature debit and credit card interchange are calculated as a percentage of the total amount of the sale, without a cap. All three types of interchange fees may vary depending on the type and size of the merchant; for example, the networks may offer different interchange fee schedules to major supermarket chains, gasoline retailers, and discount retailers.

Since 2001, interchange fees have varied markedly (figure 3). The figure shows the historical trends in average interchange fees, by payment type, for a $40 purchase conducted at a typical merchant. The interchange fees set by Visa and MasterCard for signature debit have been substantially higher than those set by the regional POS networks for PIN debit. The difference between PIN interchange and signature interchange has narrowed somewhat in recent years, as the regional POS networks have raised their PIN interchange fees. Moreover, signature fees were lowered after the 2003 settlement of the class-action suit led by Wal-Mart against Visa and MasterCard. As of August 1, 2003, both Visa and MasterCard cut their signature-based rates by one-third. These prices were valid through early 2004, when both Visa and MasterCard increased their rates slightly. As of January 1, 2004, the honor-all-cards rule was fully abolished.

As discussed earlier, most current interchange fees vary by purchase amount (figure 4). The figure shows average per-transaction interchange fees charged to the same typical.
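
The three pricing structures the report describes ... PIN debit as a percentage capped at a fixed value, signature debit and credit as an uncapped percentage ... can be sketched as follows. The rates and cap here are invented for illustration and are not actual network rates:

```python
def pin_debit_interchange(amount: float, rate: float = 0.0075,
                          cap: float = 0.60) -> float:
    """PIN debit: percentage of the transaction, capped at a fixed value."""
    return min(amount * rate, cap)

def signature_or_credit_interchange(amount: float,
                                    rate: float = 0.014) -> float:
    """Signature debit / credit card: percentage of the sale, no cap."""
    return amount * rate

# for the report's $40 example purchase, the uncapped signature fee
# comes out noticeably higher than the PIN fee at these made-up rates
fees_40 = (pin_debit_interchange(40.0), signature_or_credit_interchange(40.0))
```

The cap is why the gap between PIN and signature interchange widens with the purchase amount: past the cap, the PIN fee is flat while the signature fee keeps growing.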


--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

What ever happened to Tandem and NonStop OS ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What ever happened to Tandem and NonStop OS ?
Newsgroups: alt.folklore.computers
Date: Sat, 10 Dec 2005 08:27:23 -0700
"Rupert Pigott" writes:
I don't stick to or advocate any particular development model. I've used quite a few, I choose or improvise a model that fits the task and working environment at hand. Forcing square pegs into round holes costs time and tears. Hell, I have even applied formal methods to what folks like to call RAD before. ;)

one of my favorite comments on standards
http://www.software.org/quagmire/

another standard
http://standards.ieee.org/reading/ieee/std_public/description/se/1471-2000_desc.html

comments on various stuff ... including the above
http://www.sei.cmu.edu/publications/documents/03.reports/03tn006/03tn006.html

misc. past posts referencing quagmire page:
http://www.garlic.com/~lynn/2001i.html#55 Computer security: The Future
http://www.garlic.com/~lynn/2002e.html#59 Computers in Science Fiction
http://www.garlic.com/~lynn/2002e.html#69 Computers in Science Fiction
http://www.garlic.com/~lynn/2002e.html#70 Computers in Science Fiction
http://www.garlic.com/~lynn/2003k.html#16 Dealing with complexity
http://www.garlic.com/~lynn/2004q.html#1 Systems software versus applications software definitions
http://www.garlic.com/~lynn/2004q.html#46 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2005d.html#52 Thou shalt have no other gods before the ANSI C standard

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

XBOX 360

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: XBOX 360
Newsgroups: alt.folklore.computers
Date: Sat, 10 Dec 2005 08:47:04 -0700
Morten Reistad writes:
Actually, pushing an ISO standard is not so hard if you are well-connected.

You need a sponsoring body that actually writes the text. This can be the standards organisation of Iceland or Malta. They need to attract 4 co-sponsors to back the motion. They only need to sign up for it, and foot the bill for printing and mailing the stuff to all the corners of the world. Not a staggering amount, less than it costs to print and mail a daily run of a community newspaper.

There are periods of solicitation etc. where the members (like ANSI and DIN) get their say. You are free to accept and incorporate their comments. They cannot Just Say No, they will have to state a well founded criticism that you CAN incorporate.


lots of my oft-repeated comments about ISO requiring that any networking standards work at the ISO level (and/or by an ISO-chartered national body) conform to the OSI model ...
http://www.garlic.com/~lynn/subnetwork.html#xtphsp

and x3s3.3 (iso chartered ansi national body for networking) couldn't work on HSP (high-speed protocol) because

1) hsp went from transport/level4 to mac/lan interface ... skipping level3/networking interface, violating osi model
2) hsp supported internetworking protocol (IP), which doesn't exist in osi, violating osi model
3) hsp went directly to the lan/mac interface; the lan/mac interface sits somewhere in the middle of level3 and itself violates the osi model, so supporting lan/mac also violates osi.

i was one of the co-authors of x9.99, national financial industry privacy standard ...
http://webstore.ansi.org/RecordDetail.aspx?sku=ANSI+X9.99%3a2004

in X9, national financial standards group
http://www.x9.org/

a lot of work went into the standard to try and make it acceptable for other national financial standards organizations. however, it is still going to take a bit of work on x9.99 at the tc68 iso/international level

http://isotc.iso.org/livelink/livelink?func=ll&objId=2861&objAction=browse&sort=name
http://www.iso.ch/iso/en/stdsdevelopment/tclist/TechnicalCommitteeDetailPage.TechnicalCommitteeDetail?TC=68

in the x9.99 case, it passed as an ansi/x9 standard ... and then had x9 and four other countries sponsor it as the basis for a tc68/iso work item.

another item spent a lot of time on was x9.59
http://www.garlic.com/~lynn/x959.html#x959
http://www.garlic.com/~lynn/subpubkey.html#x959
http://web.archive.org/web/20011215145141/http://webstore.ansi.org/ansidocstore/product.asp?sku=DSTU+X9.59-2000

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Identity and Access Management (IAM)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Identity and Access Management (IAM)
Newsgroups: alt.computer.security
Date: Sat, 10 Dec 2005 13:38:11 -0700
"Edward A. Feustel" writes:
Take a look at Sun's Open Source XACML on Sourceforge. In conjunction with Public Key Infrastructure it can do the job nicely. See also Signet, a project of Internet2. Regards, Ed

one of the issues is that PKIs have frequently confused identification and authentication. another issue, from the early 90s, was work on pki x.509 identity digital certificates possibly becoming grossly overloaded with personal information.

later in the mid-90s there were things called relying-party-only certificates that were invented because of the privacy and liability concerns regarding identity certificates carrying personal information
http://www.garlic.com/~lynn/subpubkey.html#rpo

the issue with relying-party-only certificates is that it is trivial to demonstrate that they are redundant and superfluous ... aka if all the necessary information is really on file and has to be referenced for authentication operations ... then the digital certificates can be eliminated totally and everything retrieved from the online file.

there is also the original pk-init draft for kerberos
http://www.garlic.com/~lynn/subpubkey.html#kerberos

registering a public key in lieu of a password and doing digital signature verification instead of password matching. later, the pk-init draft had the pki-based stuff added. periodically i get email from the person claiming responsibility for having the pki-based stuff added to the pk-init draft, apologizing.
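
The certificate-less pattern ... public key on file where the password hash would have gone, signature verification where the password match would have been ... can be sketched as below. This is a toy: tiny textbook RSA (n = 3233) stands in for a real signature scheme, and the userid, names, and challenge are all invented.

```python
import hashlib

# toy RSA key: n = 3233 (= 61*53), public e = 17, private d = 2753
N, E, D = 3233, 17, 2753

# registration: the server files the public key against the userid,
# exactly where a password (hash) would otherwise be stored
registered_keys = {"lynn": (E, N)}

def sign(message: bytes, d: int, n: int) -> int:
    """Client side: sign a hash of the message with the private key."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(userid: str, message: bytes, signature: int) -> bool:
    """Server side: verify against the on-file public key -- no
    certificate involved, just the registered key."""
    e, n = registered_keys[userid]
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

challenge = b"login-nonce-1234"
sig = sign(challenge, D, N)
```

The point of the sketch: the vetting happened once, at registration, so nothing in the authentication step needs a certificate or a certification authority.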

recent discussion in crypto mailing list regarding applicability of pki to email authentication.
http://www.garlic.com/~lynn/aadsm21.htm#26 X.509 / PKI, PGP, and IBE Secure Email Technologies
http://www.garlic.com/~lynn/aadsm21.htm#27 X.509 / PKI, PGP, and IBE Secure Email Technologies
http://www.garlic.com/~lynn/aadsm21.htm#28 X.509 / PKI, PGP, and IBE Secure Email Technologies
http://www.garlic.com/~lynn/aadsm21.htm#29 X.509 / PKI, PGP, and IBE Secure Email Technologies
http://www.garlic.com/~lynn/aadsm21.htm#30 X.509 / PKI, PGP, and IBE Secure Email Technologies
http://www.garlic.com/~lynn/aadsm21.htm#31 X.509 / PKI, PGP, and IBE Secure Email Technologies

part of this is that operational pki identity business processes were originally targeted at first-time communication between complete strangers ... where the respective parties had no (other) means of directly accessing information about the other party (the letters of credit/introduction from the sailing ship days). if you apply that to, say, kerberos operation (allowing somebody to connect to your system) ... the implication is that everybody who can present a valid pki x.509 identity digital certificate would be allowed access to your system ... there wouldn't need to be any predefined vetting or userid definition.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

AMD to leave x86 behind?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: AMD to leave x86 behind?
Newsgroups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch
Date: Sat, 10 Dec 2005 16:00:33 -0700
Tony Hill writes:
I know that in Canada you can often rent a car with just a debit card, though they'll typically actually take out a fairly large chunk of cash as a deposit, while with a credit card they'll just take an imprint and verify that you're limit will accommodate sufficient amounts. Another advantage for credit cards in this situation is that many (most?) offer free car insurance on rentals when you pay using the card. Given that typical call rental insurance works out to ~$8,000/year (if you were to rent for a full year), this can add up quickly if you rent cars with any regularity.

frequently the travel industry will do an initial auth against your credit card account, which reduces your open-to-buy (the head room you have on your credit limit), but not actually do settlement on the charges until done (checkout, return of the car, etc). one of the most common/simplest scenarios of this is restaurant bills ... they do the auth before you actually sign and potentially add in any tip ... since the auth is w/o tip ... there are all sorts of rules for having settlement for more than what the initial auth was for.
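
The auth-then-settle flow might be sketched like this (class and field names are invented, and the 20% over-auth tolerance is a made-up stand-in for whatever rules a network actually applies):

```python
class CardAccount:
    """Toy model: an auth places a hold that reduces open-to-buy
    immediately; settlement later replaces the hold with the final
    (e.g. tip-inclusive) amount."""

    def __init__(self, credit_limit: float):
        self.credit_limit = credit_limit
        self.holds = 0.0    # outstanding auths, not yet settled
        self.posted = 0.0   # settled charges

    @property
    def open_to_buy(self) -> float:
        return self.credit_limit - self.holds - self.posted

    def authorize(self, amount: float) -> bool:
        if amount > self.open_to_buy:
            return False
        self.holds += amount
        return True

    def settle(self, auth_amount: float, final_amount: float) -> bool:
        # settlement may exceed the auth within a tolerance (e.g. a tip)
        if final_amount > auth_amount * 1.20:
            return False
        self.holds -= auth_amount
        self.posted += final_amount
        return True

acct = CardAccount(credit_limit=1000.0)
acct.authorize(50.0)               # restaurant auths the pre-tip bill
otb_during_meal = acct.open_to_buy # reduced as soon as the auth lands
acct.settle(50.0, 58.0)            # settled later with the tip added
```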

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

3390-81

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 3390-81
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 11 Dec 2005 10:55:17 -0700
shmuel+ibm-main@ibm-main.lst (Shmuel Metz , Seymour J.) writes:
BTW, IBM rejected a requirement for FBA support decades ago. Maybe it's time to resubmit.

big part of the issue was alternative implementations for multi-track search used in vtocs and pds directories. misc. multi-track search stories
http://www.garlic.com/~lynn/submain.html#dasd

i was told that even when i provided them fully integrated and tested fba-support for mvs ... it would still cost $26m to ship the code (documentation, training, ????, etc). part of the issue was that at the time, they thot they were selling disks as fast as they could make them. it was difficult to demonstrate incremental revenue for fba-support ... since it appeared to just move ckd revenue to fba revenue. the other arguments used at the time have only become more apparent with the passing of years.

fba support was relatively simple for vm/cms ... since all of vm's kernel disk access & paging stuff ... and all the cms filesystem stuff ... had essentially been logical fba since their origin in the mid-60s.

misc. past postings about the $26m
http://www.garlic.com/~lynn/97.html#16 Why Mainframes?
http://www.garlic.com/~lynn/97.html#29 IA64 Self Virtualizable?
http://www.garlic.com/~lynn/99.html#75 Read if over 40 and have Mainframe background
http://www.garlic.com/~lynn/2000.html#86 Ux's good points.
http://www.garlic.com/~lynn/2000f.html#18 OT?
http://www.garlic.com/~lynn/2000g.html#51 > 512 byte disk blocks (was: 4M pages are a bad idea)
http://www.garlic.com/~lynn/2001.html#54 FBA History Question (was: RE: What's the meaning of track overfl ow?)
http://www.garlic.com/~lynn/2001d.html#64 VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)
http://www.garlic.com/~lynn/2001g.html#32 Did AT&T offer Unix to Digital Equipment in the 70s?
http://www.garlic.com/~lynn/2002.html#5 index searching
http://www.garlic.com/~lynn/2002.html#10 index searching
http://www.garlic.com/~lynn/2002g.html#13 Secure Device Drivers
http://www.garlic.com/~lynn/2002l.html#47 Do any architectures use instruction count instead of timer
http://www.garlic.com/~lynn/2003.html#15 vax6k.openecs.org rebirth
http://www.garlic.com/~lynn/2003c.html#48 "average" DASD Blocksize
http://www.garlic.com/~lynn/2003m.html#56 model 91/CRJE and IKJLEW
http://www.garlic.com/~lynn/2004d.html#65 System/360 40 years old today
http://www.garlic.com/~lynn/2004g.html#15 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004l.html#20 Is the solution FBA was Re: FW: Looking for Disk Calc
http://www.garlic.com/~lynn/2004l.html#23 Is the solution FBA was Re: FW: Looking for Disk Calc
http://www.garlic.com/~lynn/2004n.html#52 CKD Disks?
http://www.garlic.com/~lynn/2005c.html#64 Is the solution FBA was Re: FW: Looking for Disk Calc
http://www.garlic.com/~lynn/2005m.html#40 capacity of largest drive

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Channel Distances

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Channel Distances
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 11 Dec 2005 16:07:31 -0700
ronhawkins@ibm-main.lst (Ron and Jenny Hawkins) writes:
Native can have a lot of meanings depending on whether you are using SM between ESCD.

If I recall correctly Hiperchannel was an emulated channel, and not a channel extender.


HYPERChannel was a 50mbit/sec LAN with lots of various kinds of adapters. The adapters allowed for connection of up to four (possibly parallel) 50mbit/sec LAN interfaces.

It was made by NSC, founded by Thornton (cray and thornton did the cdc6600; thornton left to do NSC, cray left to do cray research). there were adapters for a lot of different kinds of processors. there was also an A51x adapter that emulated the ibm channel interface and allowed connection of ibm mainframe controllers.

there were a couple locations that built very early (original) nas/san types of implementations using an ibm mainframe as the storage controller ... and some even supported a hierarchical filesystem ... where the ibm mainframe handled staging from tape to disk ... and then passed back information to (possibly) a cray machine ... which then used hyperchannel connectivity to pull data directly off ibm mainframe disk (the mainframe handled control and set up prebuilt CCWs in the memory of the A51x adapter ... self-modifying CCWs weren't supported; other processors could then get authorization to use specific prebuilt CCWs).

hyperchannel also had T1 (and later T3) telco lan extender boxes. when used in connection with A51x adapters for accessing ibm mainframe controllers, it could effectively be used as a form of channel extension.

i did a project for the santa teresa lab when they were moving something like 300 IMS developers to an offsite location. they considered the idea of remote 3270 support hideous compared to what they had been used to with local channel-attached 3270 vm/cms support. a HYPERChannel configuration was created using high-speed emulated telco between STL/bldg.90 and the off-site bldg.96/97/98 complex

There was already a T3 collins digital radio between stl/bldg.90 and the roof of bldg.12 on the main san jose plant site. the roof of bldg.12 had line-of-sight to the roof of the off-site bldg.96/97/98 complex. a T1 subchannel was created on the bldg.90/12 microwave link ... and then a dedicated microwave link was put in between bldgs. 12 & 96 ... with a patch-thru in bldg. 12.

the relocated ims developers then had local "channel-attached" 3270s at the remote site ... using the HYPERChannel channel extension .. with apparent local 3270 response. there was an unanticipated side-effect of replacing the 3274 controllers that directly attached to the ibm channel with HYPERChannel A220 adapters. It had been a fully configured 168-3 with full set of 16 channels. There were a mixture of 3830 and 3274 controllers spread across all the channels. It turned out that the A220 adapters had significantly lower channel busy time/overhead doing the same operations that had been performed by the "direct channel-attached" 3274 controllers. Replacing the 3274 controllers with A220 adapters and remoting the 3274 controllers behind the A220/A51x combination ... reduced channel busy overhead (for 3270 terminal i/o) and resulted in an overall system thruput increase of 10-15 percent.

misc. past hyperchannel and/or hsdt postings
http://www.garlic.com/~lynn/subnetwork.html#hsdt

the configuration was later replicated for a similar relocation of a couple hundred people in boulder to an adjacent building located across a hiway. for this installation, T1 infrared optical modems were used mounted on the roofs of the two bldgs.

there is a funny story several years later involving 3090s. i had chosen to reflect *channel check* if I had an unrecoverable T1 error, where the operating system then recorded the error and invoked various kinds of higher-level recovery. this was perpetuated into a number of later HYPERChannel driver implementations. when 3090s first shipped, they expected to see something like a total of 3-5 *channel checks* aggregate across all machines over the first year. there were something closer to 20 *channel checks* recorded. investigation eventually narrowed it to the HYPERchannel driver. I got contacted and after some amount of research ... it turned out that reflecting IFCC (*interface control check*) resulted in effectively the same sequence of recovery operations. a few retellings of the 3090 cc/ifcc hyperchannel story:
http://www.garlic.com/~lynn/2004j.html#19 Wars against bad things
http://www.garlic.com/~lynn/2005e.html#13 Device and channel

some past posts specifically mentioning thornton:
http://www.garlic.com/~lynn/2002i.html#13 CDC6600 - just how powerful a machine was it?
http://www.garlic.com/~lynn/2005k.html#15 3705
http://www.garlic.com/~lynn/2005m.html#49 IBM's mini computers--lack thereof
http://www.garlic.com/~lynn/2005r.html#14 Intel strikes back with a parallel x86 design

a few past posts mentioning the boulder installation and infrared modems
http://www.garlic.com/~lynn/94.html#23 CP spooling & programming technology
http://www.garlic.com/~lynn/99.html#137 Mainframe emulation
http://www.garlic.com/~lynn/2000c.html#65 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2001e.html#72 Stoopidest Hardware Repair Call?
http://www.garlic.com/~lynn/2001e.html#76 Stoopidest Hardware Repair Call?
http://www.garlic.com/~lynn/2003b.html#29 360/370 disk drives
http://www.garlic.com/~lynn/2004c.html#31 Moribund TSO/E
http://www.garlic.com/~lynn/2005e.html#21 He Who Thought He Knew Something About DASD

a few past posts mention the san jose plant site collins digital radio
http://www.garlic.com/~lynn/2000b.html#57 South San Jose (was Tysons Corner, Virginia)
http://www.garlic.com/~lynn/2000c.html#65 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2001e.html#76 Stoopidest Hardware Repair Call?
http://www.garlic.com/~lynn/2002q.html#45 ibm time machine in new york times?
http://www.garlic.com/~lynn/2003k.html#3 Ping: Anne & Lynn Wheeler
http://www.garlic.com/~lynn/2004c.html#31 Moribund TSO/E
http://www.garlic.com/~lynn/2005n.html#17 Communications Computers - Data communications over telegraph

quite a few past posts discussing 327x and response ... which was a really hot topic in the period ... especially after the introduction of the 3278/3274 combination. the problem was that good vm/cms terminal response was on the order of the 3272 hardware latency. the 3274 hardware latency was easily 3-4 times that of the 3272 and was noticeable to the internal vm/cms users that had gotten used to good human factors interactive response. on the other hand, the normal mvs/tso response was so bad that those users didn't notice the difference between a 3272 direct channel-attached controller and a 3274 direct channel-attached controller.
http://www.garlic.com/~lynn/94.html#23 CP spooling & programming technology
http://www.garlic.com/~lynn/96.html#14 mainframe tcp/ip
http://www.garlic.com/~lynn/2000c.html#65 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2000c.html#66 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2001f.html#49 any 70's era supercomputers that ran as slow as today's supercompu
http://www.garlic.com/~lynn/2001l.html#32 mainframe question
http://www.garlic.com/~lynn/2001m.html#19 3270 protocol
http://www.garlic.com/~lynn/2002i.html#43 CDC6600 - just how powerful a machine was it?
http://www.garlic.com/~lynn/2002i.html#48 CDC6600 - just how powerful a machine was it?
http://www.garlic.com/~lynn/2002i.html#50 CDC6600 - just how powerful a machine was it?
http://www.garlic.com/~lynn/2002j.html#67 Total Computing Power
http://www.garlic.com/~lynn/2002k.html#6 IBM 327x terminals and controllers (was Re: Itanium2 power
http://www.garlic.com/~lynn/2002q.html#51 windows office xp
http://www.garlic.com/~lynn/2003b.html#29 360/370 disk drives
http://www.garlic.com/~lynn/2003b.html#45 hyperblock drift, was filesystem structure (long warning)
http://www.garlic.com/~lynn/2003c.html#69 OT: One for the historians - 360/91
http://www.garlic.com/~lynn/2003c.html#72 OT: One for the historians - 360/91
http://www.garlic.com/~lynn/2003d.html#23 CPU Impact of degraded I/O
http://www.garlic.com/~lynn/2003h.html#15 Mainframe Tape Drive Usage Metrics
http://www.garlic.com/~lynn/2003k.html#20 What is timesharing, anyway?
http://www.garlic.com/~lynn/2003k.html#22 What is timesharing, anyway?
http://www.garlic.com/~lynn/2003m.html#19 Throughput vs. response time
http://www.garlic.com/~lynn/2004c.html#30 Moribund TSO/E
http://www.garlic.com/~lynn/2004c.html#31 Moribund TSO/E
http://www.garlic.com/~lynn/2004e.html#0 were dumb terminals actually so dumb???
http://www.garlic.com/~lynn/2004g.html#11 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2005e.html#13 Device and channel
http://www.garlic.com/~lynn/2005h.html#41 Systems Programming for 8 Year-olds
http://www.garlic.com/~lynn/2005r.html#1 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2005r.html#14 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2005r.html#15 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2005r.html#20 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2005r.html#28 Intel strikes back with a parallel x86 design

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Channel Distances

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Channel Distances
Newsgroups: bit.listserv.ibm-main
Date: Sun, 11 Dec 2005 16:28:14 -0700
R.Skorupka@ibm-main.lst (R.S.) writes:
ESCON: MM - 3km SM - 20km Typical value in sales leaflets is 43km which means XDF feature (means Single Mode) in CPC - 20km - Escon Director - 20km -Escon Director - 3km. The last distance had to be 3km (MM), because there were no devices with

ref:
http://www.garlic.com/~lynn/2005u.html#22 Channel Distances

HYPERChannel got around the CCW-to-CCW latency issues by pre-loading the channel program into the memory of the local A51x channel adapter (the box that emulated an ibm mainframe channel and to which ibm mainframe controllers were attached). This also imposed a restriction that you couldn't do self-modifying ccw sequences ... and there was a limit on the number of CCWs in a channel program.

there was also an expanded A51x adapter, the A515, with additional memory and processing for DASD ckd CCWs ... where the search argument was also preloaded into the memory of the A515 along with the dasd channel program (overcoming latency issues with search argument access). this, in turn, imposed a restriction on ccw programs that modified seek/search arguments.

part of the issue with NSC's HYPERchannel product was that there had been this fiber-optic technology knocking around POK for a number of years that they were hoping to eventually get out (eventually released as something called ESCON) ... and the high-speed interconnect people in POK viewed HYPERchannel as something of a competitor. The other problem that the POK fiber-optic people had was the whole SNA crowd.

My wife had been con'ed into going to POK to be in charge of loosely coupled architecture ... where she had done peer-coupled shared data ... which, except for IMS hot-standby, didn't see much uptake until parallel sysplex. in any case, she was constantly fighting the SNA group and their position that all processor-to-processor interaction had to be SNA. there was something of a truce called where anything within the walls of the datacenter wasn't absolutely mandated to be SNA ... but anything that crossed the glasshouse wall boundary had to be SNA. this compromise then also created various difficulties for the people trying to get out high-speed fiber interconnect.

various posts related to my wife's stint in pok in charge of loosely-coupled architecture ... and/or any of her activities in peer-coupled architecture
http://www.garlic.com/~lynn/submain.html#shareddata

slightly aggravating the situation, prior to her going to pok, she had co-authored a peer-coupled network architecture published internally as AWP39, which was viewed as an alternative to sna.

misc. past posts mentioning AWP39:
http://www.garlic.com/~lynn/2004n.html#38 RS/6000 in Sysplex Environment
http://www.garlic.com/~lynn/2004p.html#31 IBM 3705 and UC.5
http://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
http://www.garlic.com/~lynn/2005p.html#15 DUMP Datasets and SMS
http://www.garlic.com/~lynn/2005p.html#17 DUMP Datasets and SMS
http://www.garlic.com/~lynn/2005q.html#27 What ever happened to Tandem and NonStop OS ?

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

AMD to leave x86 behind?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: AMD to leave x86 behind?
Newsgroups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch
Date: Sun, 11 Dec 2005 19:24:32 -0700
Keith writes:
In reality, they offer the same protections. The difference is that a credit card it's your credit limit at stake, rather than your bank balance (debit card). The bank owns the problem either way, and has to prove you made the purchase. The practicalities are big though. I'd rather not have my bank account drawn to zero. My credit limit is another thing, and easily challenged.

here is a description of the differences:
http://www.pirg.org/consumer/banks/debit/debitcards1.htm

the above description includes a reference to the federal reserve discussion on the subject:
http://www.federalreserve.gov/pubs/consumerhdbk/electronic.htm

here is recent article discussing some of the issues
http://www.usatoday.com/money/perfi/columnist/block/2005-05-09-debit-cards_x.htm

other postings in this particular thread drift::
http://www.garlic.com/~lynn/2005u.html#13 AMD to leave x86 behind?
http://www.garlic.com/~lynn/2005u.html#14 AMD to leave x86 behind?
http://www.garlic.com/~lynn/2005u.html#16 AMD to leave x86 behind?
http://www.garlic.com/~lynn/2005u.html#20 AMD to leave x86 behind?

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Fast action games on System/360+?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fast action games on System/360+?
Newsgroups: alt.folklore.computers
Date: Mon, 12 Dec 2005 17:51:26 -0700
"Micheal H. McCabe" writes:
Unknown. If it was your port, THANK YOU! The UNIVAC was located at Edinboro State College. Operating system was VS/9 running TSOS. Source language was FORTRAN IV (BIGFOR Compiler.) We acquired the Adventure game in around 1978 (one of those tapes must have just mounted itself...)

shortly after adventure first appeared, tymshare (up in cupertino) got it running on vm/cms. i tried to get a transfer from cupertino to san jose ... or at least have it appended to the end of the vmshare archive tape that they sent me once a month (i would turn around and redistribute the vmshare archive at numerous places around the internal network). however, before I actually got it by that (direct) path ... somebody at peterlee got it thru a round-about path from tymshare, carried it over to a machine on the internal network and sent it to me over the internal network. misc. posts regarding the internal network (which was larger than the arpanet/internet from just about the beginning until sometime mid-85):
http://www.garlic.com/~lynn/subnetwork.html#internalnet

previous post in this adventure subthread
http://www.garlic.com/~lynn/2005u.html#15 Fast action games on System/360+?

adventure history/timeline
http://www.rickadams.org/adventure/a_history.html

the game seemed to have bled over from stanford pdp10 to a tymshare pdp10 and then found its way to tymshare vm/cms ... and i got a copy thru a very circuitous path involving peterlee in the uk.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

RSA SecurID product

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RSA SecurID product
Newsgroups: bit.listserv.vmesa-l
Date: Tue, 13 Dec 2005 08:51:05 -0700
"Vin McLellan" writes:
The Wheelers' comments -- a response to my claim that RSA's fabled crypto savvy influenced the design of the SecurID OTP token -- were inaccurate and misleading.

I've been using the Wheeler's posts to educate and frighten my clients for years, so I'm sure the error is unintentional, perhaps even caused by some lack of clarity in my earlier post. Since the Wheelers' industrial history archive is likely to be referred to often in the years to come, however, I did not want even this brief aside to stand uncorrected.

It should surprise no one, certainly not Lynn Wheeler, that after the 1996 merger of SDI and RSADSI, RSA's new team of cryptographers and crypto engineers began to urge a completely redesign of the original SecurID system around stronger (128-bit) cryptosystems.

(The RSADSI folks were not unfamiliar with the SecurID. The SecurID client/server protocol had been designed around a proprietary hash developed for SDI by Ron Rivest, the "R" of RSA. After the merger, RSA's crypto engineers pushed to harden the SecurID system -- which already had a huge installed base, and dominated its market niche -- against new threats from the ubiquitous network and the Web. I was then a consultant to SDI, as I am now to RSA -- a bit of the proverbial fly on the wall.


this is along the lines of the old silicon valley joke about there actually only being 200 people in the industry ... it looked like more, because the same people just kept moving around.

i was mainly referring to RSA public key cryptography ... which most people identify RSA with ... as differentiated from the non-RSA infrastructure used by SecurID.

in that time-frame, there was also a lot of stuff going on around RSA public key cryptography and the use of RSA digital signatures in hardware tokens for authentication ... the types of stuff you find in SSL and the various PKCS standards ... including work on PKCS#11, which RSA, Netscape, etc. were pushing for RSA (public key digital signature) authentication smartcards ... as opposed to securid cards (there were some PKCS#11 meetings with the various participants at the toll house in los gatos).

also, much of the differential power analysis work is heavily oriented towards RSA public key smartcards (attacks on the rsa private key)

one of the issues in the heavy push for RSA public key digital signatures for smartcard authentication ... was that the alternative was the FIPS186 digital signature, which required that a reliably random number be generated as part of the digital signature calculation. most of the hardware token chips from the period not only had issues with differential power analysis (an attacker being able to obtain lots of information about the RSA private key during RSA digital signature and/or encryption operations) but also had dismal random number capability. One extensive study of most chips from the period ... involving power-off/power-on generate-random-number cycles ... repeated 65k times, found nearly all chips having something like 30 percent of the time a repeated random number.

i've frequently contended one of the reasons there was extensive standards work on RSA digital signature authentication during the period ... was because the alternative gov. standard FIPS-186 digital signature (for strong authentication) was heavily dependent on random number generation during the digital signature calculation process ... which made it nearly useless in hardware tokens of the period (aka poor random number generation could reveal more about a FIPS-186 private key than differential power analysis was revealing about an RSA private key).
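to illustrate why that per-signature random number matters so much, here is a toy sketch (toy numbers and a stand-in group setup, not real FIPS-186 parameters) of the well-known result that reusing the per-signature secret k in a DSA-style signature lets anyone recover the private key from just two signatures:

```python
# toy illustration: why FIPS-186/DSA needs a fresh random k per
# signature -- two signatures sharing k reveal the private key.
# all values here are made-up toy parameters for the algebra only.
q = 2**31 - 1          # prime stand-in for the DSA subgroup order

def inv(a):
    return pow(a, q - 2, q)   # modular inverse (valid since q is prime)

x = 123456789          # "private key"
k = 987654321          # per-signature secret nonce (reused -- the bug)
r = pow(7, k, q)       # stand-in for r = (g^k mod p) mod q

def sign(h):
    # DSA signing equation: s = k^-1 * (h + x*r) mod q
    return (inv(k) * (h + x * r)) % q

h1, h2 = 1111, 2222    # hashes of two different messages
s1, s2 = sign(h1), sign(h2)

# attacker, given only (r, s1, h1) and (r, s2, h2) with shared k:
#   s1 - s2 = k^-1 * (h1 - h2)  =>  k = (h1 - h2) / (s1 - s2)
k_rec = ((h1 - h2) * inv(s1 - s2)) % q
#   s1*k = h1 + x*r             =>  x = (s1*k - h1) / r
x_rec = ((s1 * k_rec - h1) * inv(r)) % q
assert (k_rec, x_rec) == (k, x)   # private key fully recovered
```

a chip with poor random number generation that repeats k (as in the 65k power-cycle study above) is therefore handing out its private key.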

However, for that generation of RSA public key tokens, the common practice was to pre-personalize the tokens by using an external box ... with a reliable random number generation source ... to generate the RSA public/private key pair and inject the RSA private key into the token (since a reliable random number generator is also required during the key generation process).

In the late 90s, you started to see a combination of things ... FIPS186-2 was upgraded to include ECDSA ... which has a radically different private key vulnerability to differential power analysis (especially compared to RSA), chips that had acceptable hardware random number characteristics (eliminating the private key exposure because of poor random number generation), and sophisticated power use management that included injected random power cycles (as an additional countermeasure to differential power analysis).

I joked during the period that i could take a $500 milspec part, cost reduce it by two orders of magnitude and at the same time make it more secure. Some of the smartcard endeavors got into this vicious negative feedback loop: the cards were more expensive than a single purpose could justify, so add more features, which increased the cost, which required adding more features to justify the cost, which increased the cost. an alternative approach was to aggressively cost reduce all the components ... throwing out everything not required for authentication. part of the issue around extraneous features is that they can contribute to insecurity.

the advent of chips with higher quality random number generation also made it feasible to start doing key generation on the chip (eliminating the pre-personalizing step where keys are generated by external boxes and injected into the chip).

however, there is frequently still a strong tendency to get into the negative feedback cycle of increasing the features to justify the cost, which increases the cost (and complexity), which requires even more features to justify the cost.

one of the downsides of this other approach to authentication hardware tokens is that they require an explicit hardware interface between the environment and the hardware token. securid had a lower per-seat entry cost since the existing keyboard and screen could be utilized. that may be in the process of changing with the advent of wireless technologies that can be used for a variety of different purposes ... including wireless authentication tokens.

the upside of the digital signature tokens is that they require a much simpler backend infrastructure operation (once the downside of the interfacing has been overcome). it is possible to take the single/same token and register the token's public key in a large variety of different domains and environments ... resulting in a single token authenticator ... this is the person-centric approach to authentication contrasted with the institutional-centric approach that effectively requires a unique token for every different security domain. I once went around to most of the booths at a smartcard conference ... joking that if the current (institutional centric) hardware token paradigm ever took off ... people would be replacing having to keep track of hundreds of passwords with having to keep track of hundreds of hardware tokens.

random past posts mentioning the person/institution-centric authentication subject
http://www.garlic.com/~lynn/aadsm12.htm#0 maximize best case, worst case, or average case? (TCPA)
http://www.garlic.com/~lynn/aadsm19.htm#14 To live in interesting times - open Identity systems
http://www.garlic.com/~lynn/aadsm19.htm#41 massive data theft at MasterCard processor
http://www.garlic.com/~lynn/aadsm19.htm#47 the limits of crypto and authentication
http://www.garlic.com/~lynn/aadsm20.htm#41 Another entry in the internet security hall of shame
http://www.garlic.com/~lynn/2003e.html#22 MP cost effectiveness
http://www.garlic.com/~lynn/2003e.html#31 MP cost effectiveness
http://www.garlic.com/~lynn/2004e.html#8 were dumb terminals actually so dumb???
http://www.garlic.com/~lynn/2005g.html#47 Maximum RAM and ROM for smartcards
http://www.garlic.com/~lynn/2005g.html#57 Security via hardware?
http://www.garlic.com/~lynn/2005m.html#37 public key authentication
http://www.garlic.com/~lynn/2005p.html#6 Innovative password security
http://www.garlic.com/~lynn/2005p.html#25 Hi-tech no panacea for ID theft woes
http://www.garlic.com/~lynn/2005t.html#28 RSA SecurID product

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

RSA SecurID product

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RSA SecurID product
Newsgroups: bit.listserv.vmesa-l
Date: Tue, 13 Dec 2005 09:02:29 -0700
ref:
http://www.garlic.com/~lynn/2005u.html#26 RSA SecurID product

oh, and slightly related news item to fips186-2 and ecdsa

NSA posts notice about faster, lighter crypto
http://www.fcw.com/article91669-12-09-05-Web

and some additional URLs from the above

The Case for Elliptic Curve Cryptography
http://www.nsa.gov/ia/industry/crypto_elliptic_curve.cfm
Certicom speeds digital signature verification
http://www.fcw.com/art1917icle90245-08-22-05-Web
Fact Sheet NSA Suite B Cryptography
http://www.nsa.gov/ia/industry/crypto_suite_b.cfm

from nsa suite B (originally announced at 2005 RSA Conference) site:
SUITE B includes:

Encryption: Advanced Encryption Standard (AES) - FIPS 197 (with keys sizes of 128 and 256 bits)
http://csrc.nist.gov/publications/fips/fips197/fips-197.pdf*

Digital Signature: Elliptic Curve Digital Signature Algorithm - FIPS 186-2 (using the curves with 256 and 384-bit prime moduli)
http://csrc.nist.gov/publications/fips/fips186-2/fips186-2-change1.pdf

Key Exchange: Elliptic Curve Diffie-Hellman or Elliptic Curve MQV Draft NIST Special Publication 800-56 (using the curves with 256 and 384-bit prime moduli)
http://csrc.nist.gov/CryptoToolkit/kms/keyschemes-Jan03.pdf

Hashing: Secure Hash Algorithm - FIPS 180-2 (using SHA-256 and SHA-384)
http://csrc.nist.gov/publications/fips/fips180-2/fips180-2withchangenotice.pdf*


--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Fast action games on System/360+?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fast action games on System/360+?
Newsgroups: alt.folklore.computers
Date: Tue, 13 Dec 2005 21:15:34 -0700
"Micheal H. McCabe" writes:
Administration didn't like the idea of using their mainframe for games. It wasn't until they realized that having fifty or sixty spurious copies floating around in userland used a lot more disk space than a single sactioned copy in a public directory.

there is some folklore that at one point tymshare executives heard that there were games (adventure, etc) available on their (vm/cms) time-sharing service
http://www.garlic.com/~lynn/submain.html#timeshare

and decided that they all should be removed ... a decision which was quickly reversed when they were told that game playing had grown to 30% of tymshare revenue.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

AMD to leave x86 behind?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: AMD to leave x86 behind?
Newsgroups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch
Date: Wed, 14 Dec 2005 07:29:35 -0700
Ketil Malde <ketil+news@ii.uib.no> writes:
Certainly a good point. There was a bit of commotion in the banking business here when a new bank was established without any office you could actually visit, or tellers, or anything - it's all on the net. I've happily used it for years now.

BTW, I'm fairly certain the deprecation of cheques, and introduction of ATM/debit cards have *reduced* fraud a lot, but I couldn't find the relevant statistics anywhere. Similarly, it wouldn't suprise me if the deprectaion of cash (even the cafeteria lets me pay with my card, it's only the coke vending machine left) to have that effect on crimes like pickpocketing and robbery, but that would be harder to measure.


part of the issue was that financial institutions, rather than covering the costs of check processing by charging the person, were able to recoup the costs with the 5-7 day float. the issue with atm is that the transfer is immediate and there is no float. with check21 (fed. reserve mandate for electronic check imaging) and the movement to same-day clearing, the float starts to disappear in checking also. at which point, with the float disappearing, they will have to start charging for check processing also.

there was a big issue in the e-check pilot by fstc (digitally signed electronic checks) ... whether the electronic check clearing (transfer of value) went thru the atm network (immediate transfer) or the ach network (overnight settlement). settling thru the ach network allowed for a day's float. the problem was that consumers had gotten so used to the perception of "free" (with the actual revenue for the bank coming from the interest lost to the customer thru the float) ... that it was felt that it wouldn't be possible to educate consumers that there really is no free lunch. simple economics ... doing an operation costs something. the revenue to cover the costs of that operation has to come from somewhere.
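as a back-of-envelope sketch of the float economics (the amount, rate, and days here are made-up illustrative assumptions, not figures from the post), the bank's float revenue on a single item is just amount x annual rate x days held:

```python
# float revenue: interest the bank earns holding funds during the
# clearing delay (illustrative numbers, not from the post)
def float_revenue(amount, annual_rate, days):
    return amount * annual_rate * days / 365.0

# a $1,000 check with a 5-day float at 5% annual interest earns
# the bank a fraction of a dollar; same-day clearing earns zero,
# so the processing cost has to be recovered some other way
five_day = float_revenue(1000, 0.05, 5)
same_day = float_revenue(1000, 0.05, 0)
```

small per-item, but multiplied across the volume of checks cleared, the float covered the processing costs ... which is exactly the revenue that same-day settlement eliminates.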

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

3vl 2vl and NULL

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 3vl 2vl and NULL
Newsgroups: comp.databases.theory
Date: Thu, 15 Dec 2005 14:28:13 -0700
"David Cressey" writes:
It's not commonly accpted. It's commonly accepted that, for every database that end up with "missing" values due to inapplicable cells, there exists an alternate design that would have expressed the same inapplicability by an omitted (not missing) row, rather than by an omitted value in an existing row.

What's not commonly accpeted is that the design that contemplates inapplicable data is "wrong".


another way of viewing it is that there is a state diagram that defines how and/or/etc are implemented across the various state combinations. it doesn't necessarily make it right or wrong ... it is just the way it is.

a reference to an early posting about the '92 Date article on the state diagram. it is similar to a truth table ... but since it involves states other than truth ... it is possibly more accurate to call it a state table ... defining the logical operations for the various state combinations (not unlike how one might define logical operations on sets).
http://www.garlic.com/~lynn/2003g.html#40 How to cope with missing values - NULLS?
the sql state table from date's article


and   T   U   F        or    T   U   F       not
-----------------      ----------------      --------
T     T   U   F        T     T   T   T       T     F
U     U   U   F        U     T   U   U       U     U
F     F   F   F        F     T   U   F       F     T

some confusion comes in when people apply connotations to the different state labels ... and hold beliefs about those labels that may not match the state table actually implemented.

another possible view is that the implementation defines the states it has implemented and it defines how it implements logic operations on those states.

confusion can be aggravated when people's connotation beliefs result in a state table other than the one that is actually implemented. this in turn can result in unanticipated results when they are writing applications. in traditional programming languages, when somebody writes language statements that produce different results than what they expected ... the programmer is frequently blamed for not following the language implementation correctly. another approach might be to blame the language implementation for not being done in a way that corresponds to what the programmer believes to be correct.

aka is it a feature of the language as opposed to a bug in the language? ... typically bugs are when things don't operate as specified. features are frequently when things operate as specified ... but possibly not as expected.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

AMD to leave x86 behind?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: AMD to leave x86 behind?
Newsgroups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch
Date: Thu, 15 Dec 2005 15:01:53 -0700
Del Cecchi writes:
Looks like thousands and thousands of folks. And their pins too. I hope that the contract with the issuer was nice and solid.

starting in the mid-90s, the financial standards x9a10 working group was given the requirement for a new protocol to preserve the integrity of the financial infrastructure for all retail payments. this eventually resulted in the x9.59 standard
http://www.garlic.com/~lynn/x959.html#x959
http://www.garlic.com/~lynn/subpubkey.html#x959

part of this was looking at threat models. earlier compromises had been lost/stolen cards and/or copied magstripes. the cards are a form of something you have authentication ... from the 3-factor authentication model
http://www.garlic.com/~lynn/subintegrity.html#3factor

an issue was that it was becoming relatively easy to reproduce the information from a card magstripe and/or use lost/stolen cards. the primary countermeasure has been noticing fraudulent transactions ... and turning off that specific card/magstripe at the backend financial processor (preventing future fraudulent transactions).

pin-debit has been considered more secure because it represents two-factor authentication ... a combination of something you have and something you know. a basic principle of two-factor authentication is that the different factors are subject to different threats and vulnerabilities. however, both the card magstripe information and the pin are forms of static data ... and once harvested, are subject to fraudulent transactions. in a threat model, static data infrastructures are sometimes characterized as being subject to replay attacks (i.e. an attacker reproducing the static data).

in the time-frame of the early x9.59 work, compromised devices were starting to appear that skimmed (harvested) static information at the time of the transaction. this defeated a basic principle of pin-debit two-factor authentication since the skimming represented a common vulnerability to both the magstripe information as well as the pin information.

so an early objective of the x9.59 work was to replace static data authentication with dynamic data authentication ... in part, as a countermeasure to the emerging skimming/harvesting threats/vulnerabilities from compromised devices.

part of this resulted in the x9.59 standard requiring that no part of a previous transaction can be used by crooks for fraudulent transactions. one common scenario has been crooks skimming an authenticated transaction and then using the skimmed account number information in non-authenticated transactions. the x9.59 standard requirement, in turn, implies that account numbers used in x9.59 transactions can't be taken and used in any kind of non-x9.59 transactions.
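the static-vs-dynamic distinction above can be sketched in a few lines (illustrative only ... x9.59 itself uses digital signatures; the hmac challenge/response here is just a stand-in for dynamic data authentication, and all the names are my own):

```python
# static data (pin/magstripe): the same value authenticates every
# transaction, so a skimmed copy replays forever.
# dynamic data: each transaction gets a fresh challenge, so a
# skimmed response is useless against the next transaction.
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)      # secret shared by token and issuer

def respond(challenge):
    # what the hardware token computes for this transaction
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge, response):
    # what the backend processor checks
    return hmac.compare_digest(respond(challenge), response)

# static authenticator: skimming it once is a permanent compromise
static_pin = b"1234"
skimmed = static_pin               # replays successfully forever

# dynamic authenticator: skim a transaction, gain nothing
c1 = secrets.token_bytes(16)       # challenge for transaction 1
r1 = respond(c1)
assert verify(c1, r1)              # legitimate transaction clears

c2 = secrets.token_bytes(16)       # fresh challenge, transaction 2
assert not verify(c2, r1)          # replayed skimmed response fails
```

which is the property the x9.59 requirement captures: nothing a crook can skim from one transaction is usable in another.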

when we were doing the work on ssl for the original payment gateway and e-commerce
http://www.garlic.com/~lynn/aadsm5.htm#asrn2
http://www.garlic.com/~lynn/aadsm5.htm#asrn3

the objective was to use cryptography to hide the account number so that it couldn't be skimmed and then turned around and used in other (fraudulent) transactions.

the x9.59 standard eliminates the need to hide the account number (using cryptography or any other means), since a crook can no longer skim/harvest any information from an x9.59 transaction (in whole or part) and turn around and use any of that information for a fraudulent transaction.

the requirement given the x9a10 working group for the x9.59 standard was to preserve the integrity of the financial infrastructure for all retail payments. the resulting standard eliminated skimming/harvesting (eavesdropping on internet links, compromised atm or point-of-sale devices, data breaches of transaction logs, etc.) as a fraud threat/vulnerability. x9.59 didn't do anything about preventing skimming and/or harvesting from happening ... it just eliminated the possibility that skimming/harvesting (of any information involving x9.59 transactions) might result in a fraudulent transaction.

misc. collected posts on skimming/harvesting account numbers
http://www.garlic.com/~lynn/subintegrity.html#harvest

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

AMD to leave x86 behind?

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: AMD to leave x86 behind?
Newsgroups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch
Date: Thu, 15 Dec 2005 16:23:25 -0700
ref:
http://www.garlic.com/~lynn/2005u.html#29 AMD to leave x86 behind?
http://www.garlic.com/~lynn/2005u.html#31 AMD to leave x86 behind?

i can even bring this back a little bit to chips ... having given a talk on it at intel developer's forum a couple years ago
http://www.garlic.com/~lynn/aadsm5.htm#asrn1

which is somewhat the result of a semi-facetious joke I was making around the time we were starting the x9.59 work ... that I would take a $500 milspec (chip) part, cost reduce it by two orders of magnitude in order to make it more secure (eliminating unnecessary features not only reduced the cost but also improved the integrity/security) ... misc. stuff about aads chip strawman
http://www.garlic.com/~lynn/x959.html#aads

which was designed, in part, for use for x9.59 transactions.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

PGP Lame question

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PGP Lame question
Newsgroups: sci.crypt
Date: Fri, 16 Dec 2005 11:11:49 -0700
Stefan Tillich writes:
You don't need all three of the above to do authentication. In fact, biometrics is not (yet) very suited for a large class of real-world systems.

sorry ... it is about the 3-factor authentication *model*
http://www.garlic.com/~lynn/subintegrity.html#3factor
• something you have
• something you know
• something you are

within the 3-factor authentication *model* ... you can have one-factor authentication, two-factor authentication, and/or three-factor authentication ... and you can have different combinations of the factors. it is a *model* for helping think about countermeasures for risks and threats.

for instance, two-factor authentication is typically considered to have higher security than one-factor authentication.

also, the 3-factor authentication *model* by itself doesn't encompass all the factors that should be considered regarding threats and vulnerabilities.

for example, here is a post from a thread in comp.arch that has managed to roam around the financial transaction landscape ... this specifically is with regard to a recent security breach involving financial transaction information
http://www.garlic.com/~lynn/2005u.html#31 AMD to leave x86 behind?

two-factor pin-debit (pin as something you know and card as something you have) has typically been considered more secure than simple credit cards. in part, the something you know pin has been considered a countermeasure to the lost/stolen card threat/vulnerability. however, there is something of an implicit assumption that the different authentication mechanisms have different/distinct vulnerability profiles.

however, effectively the pin and the magstripe information (proof of having the card) are both forms of static data authentication ... and furthermore shared-secret, static data (other attributes/characteristics of authentication operations). static data typically also has replay attack vulnerabilities.

one of the issues was the spread of compromised devices in the 90s that recorded/skimmed static data (somewhat the subject of the security breach in the referenced post) ... creating the opportunity for replay attacks (aka fraudulent financial transactions, basically by replaying the skimmed information). such compromised devices created a common recording/skimming vulnerability for both the something you know, static data pin and the something you have, static data card magstripe. this example sort of invalidates the possibly implied assumption that two-factor authentication is more secure than one-factor authentication (since the different authentication factors no longer had different, unique vulnerabilities).

other collected posts regarding secret vis-a-vis *shared-secret* authentication information
http://www.garlic.com/~lynn/subintegrity.html#secrets

and various collected posts mentioning skimming/harvesting vulnerabilities of static data authentication information
http://www.garlic.com/~lynn/subintegrity.html#harvest

misc. collected posts about fraud, exploits, threats, vulnerabilities
http://www.garlic.com/~lynn/subintegrity.html#fraud

recently there have been some threads in other venues about the use of SSL for hiding account numbers as part of the original e-commerce work ... a couple posts about working with this small client/server startup that had this technology they called SSL and wanted to be able to do payment transactions on their server
http://www.garlic.com/~lynn/aadsm5.htm#asrn2
http://www.garlic.com/~lynn/aadsm5.htm#asrn3

the issue there is that the account numbers are a form of "static data" that in some situations is sufficient for a crook to perform fraudulent transactions ... and as a result the account number takes on the attributes of static data, shared-secret, something you know one-factor authentication (for those kinds of transactions). a related post describing sizing the risk related to exposing account numbers ... security proportional to risk
http://www.garlic.com/~lynn/2001h.html#61

In the above mentioned security breach post
http://www.garlic.com/~lynn/2005u.html#31 AMD to leave x86 behind?

there is discussion about the work of the financial standards x9a10 working group on the x9.59 protocol.
http://www.garlic.com/~lynn/x959.html#x959
http://www.garlic.com/~lynn/subpubkey.html#x959

the x9a10 working group had been given the requirement to preserve the integrity of the financial infrastructure for all retail transactions. it was in the time-frame that x9a10 work on x9.59 was initially starting that some of the skimming/harvesting exploits were coming to the forefront. so one of the issues was how to create countermeasures for all retail payments to deal with the replay attack threats.

as described in the post, x9.59 basically introduced a form of dynamic data authentication so that skimming/harvesting of information related to x9.59 retail transactions could not be used (either in whole or in part) for performing fraudulent transactions.

one of the side issues was that x9.59 didn't do anything to hide the static data (the way ssl cryptography was used to do), it just changed the paradigm from a static data authentication operation to a dynamic data authentication operation ... where the skimming/harvesting of the information would no longer result in (replay attack) fraudulent transactions. with that, it was no longer necessary to use cryptography as a countermeasure to replay attacks ... since replay attack fraudulent financial transactions were no longer possible in a dynamic data authentication environment. information gained from in-flight internet transactions, from compromised devices, and/or from data breaches of transaction logs was no longer useful in performing fraudulent financial transactions.
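a minimal sketch of the dynamic data authentication idea ... x9.59 itself uses public-key digital signatures over the transaction, but a stdlib hmac stand-in (with hypothetical keys and amounts) is enough to show why skimmed data stops being replayable:

```python
# sketch only: x9.59 uses public-key digital signatures; HMAC is a
# stdlib stand-in here, and the key/amount values are hypothetical
import hashlib
import hmac
import secrets

CARD_KEY = b"hypothetical-per-card-key"  # stand-in for the card's private key

def card_authenticate(nonce: bytes, amount: int) -> str:
    # the card authenticates *this* transaction: fresh nonce + amount
    msg = nonce + str(amount).encode()
    return hmac.new(CARD_KEY, msg, hashlib.sha256).hexdigest()

def backend_verify(nonce: bytes, amount: int, auth: str) -> bool:
    return hmac.compare_digest(card_authenticate(nonce, amount), auth)

# transaction 1: backend supplies fresh per-transaction data (nonce)
n1 = secrets.token_bytes(16)
auth1 = card_authenticate(n1, 100)
assert backend_verify(n1, 100, auth1)      # genuine transaction authorizes

# a skimmer records (n1, 100, auth1) in full ... but the next transaction
# uses a different nonce, so nothing recorded is reusable
n2 = secrets.token_bytes(16)
assert not backend_verify(n2, 100, auth1)  # the replay fails
```

the point is the paradigm shift: the authentication value is bound to transaction-specific data, so recording it (in whole or in part) yields nothing usable for a different transaction.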

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

PGP Lame question

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PGP Lame question
Newsgroups: sci.crypt
Date: Fri, 16 Dec 2005 12:06:13 -0700
ref:
http://www.garlic.com/~lynn/2005u.html#33 PGP Lame question

for a little more drift posts from a different n.g. in thread regarding something you have authentication tokens:
http://www.garlic.com/~lynn/2005t.html#27 RSA SecurID product
http://www.garlic.com/~lynn/2005t.html#28 RSA SecurID product
http://www.garlic.com/~lynn/2005t.html#32 RSA SecurID product
http://www.garlic.com/~lynn/2005t.html#34 RSA SecurID product
http://www.garlic.com/~lynn/2005t.html#51 RSA SecurID product

and last couple even have even a little more cryptography
http://www.garlic.com/~lynn/2005u.html#26 RSA SecurID product
http://www.garlic.com/~lynn/2005u.html#27 RSA SecurID product

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

AMD to leave x86 behind?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: AMD to leave x86 behind?
Newsgroups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch
Date: Sat, 17 Dec 2005 09:34:33 -0700
Bernd Paysan writes:
Banks don't charge for that here, they give you a discount. If you don't do paper-work transactions, it saves money for the bank. I expect my bank to give me free transactions on whatever medium they provide, be it check, debit card, or internet. As long as they have my money for some time, they could make a profit out of that, which should be good enough for them.

Banks here charge for using paperwork. Not all, not all to the same extend, but they somehow force you to use electronic transactions, because they can reduce costs that way.


... it costs banks less money ... reminds me of the tv commercials about buying things on sale ... and all the money you save (especially on things that you might not otherwise buy ... but, what the heck, look at all the money you are saving).

not quite as bad as the story about the company losing $5 on every item sold ... but they were planning to make it up in volume.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Mainframe Applications and Records Keeping?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe Applications and Records Keeping?
Newsgroups: bit.listserv.ibm-main
Date: 17 Dec 2005 16:39:18 -0800
John S. Giltner, Jr. wrote:
It's not so much that it is insecure, but that it is new. Linux is only 10-15 years old. You have to realize that some mainframe system have been running applications that were designed and written over 30 years ago. They may have thousands of programs written in mainframe assembler, PL/I, Cobol, and other languages and you can't just convert that over night have have it run the same. You have to realize that today IBM mainframe OS's have their roots from OS's that are 40 years old.

minor reference:
http://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

nearly 40 years old ... and it's the new, new thing in chip technology this season.

Mainframe Applications and Records Keeping?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe Applications and Records Keeping?
Newsgroups: bit.listserv.ibm-main
Date: Sun, 18 Dec 2005 11:11:17 -0700
as400 wrote:
Yes!! I am familiar with Selinux..

But for now, I wish I knew what dispatchers use to retreive our information when given our Drivers license #. What Mainframe application is use to store all of that data? I am talking about informations records when run a drivers license # check to verify if no warrants are issues and etc? Do they use the DB2 or the IMS applications for that purpose?

Thanks...Nice article on the SElinux by the way...I will continue reading about it.


the ref in the previous post
http://www.garlic.com/~lynn/2005u.html#36 Mainframe Applications and Records Keeping?

to post from selinux archives:
http://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

was that there has been other security technology (other than selinux) that has been widely used by various gov. agencies starting nearly 40 years ago (they were some of the early adopters) ... and that this nearly 40-year-old (mainframe) technology is starting to see a resurgence as the new, new security thing.

for some topic drift ... at one point there were proposals from the early-90s that chipcards with all that information replace current drivers licenses ... megabytes of memory crammed into the plastic thing in your wallet. the proposals assumed that they could put some certified bits in the plastic thing ... and that when stopped, the law enforcement officer, rather than doing a real-time, online check, would instead check the information resident in your card. the issue then became: if the information in the card could be stale and the officer would have to do the real-time check anyway, what information would be online and therefore didn't need to be in the card (or conversely, what information could there be in the card that wouldn't be found online).

for your question ... if they are legacy apps ... they still may use IMS ... one large financial transaction network, a couple years ago claimed they attributed 100 percent availability to
• ims hot-standby
• automated operator


of course, ims hot-standby was one of the earliest adopters of my wife's peer-coupled shared data architecture from when she did her stint in POK in charge of loosely-coupled architecture
http://www.garlic.com/~lynn/submain.html#shareddata

then there were all the news stories that were swirling around when cal. DMV decided to switch off ibm mainframes to another vendor.

when we were doing ha/cmp product
http://www.garlic.com/~lynn/subtopic.html#hacmp

we coined the terms disaster survivability and geographic survivability to differentiate from simple disaster/recovery. we were then invited to do a section in the corporate continuous availability strategy document ... however, much of it was subsequently removed when both rochester and pok complained that they wouldn't have such features for quite some time.

...

random selection of news URLs from last week on the new, new 40-year thing
http://biz.yahoo.com/prnews/051212/sfm116.html?.v=24
VMware Delivers VMware Player
http://ne.sys-con.com/read/162108.htm
Xen Open Source Project Readies Version 3.0
http://ne.sys-con.com/read/163887.htm
OpenVZ Project Introduces Open Source Virtualization Website
http://www.businesstodayegypt.com/printerfriendly.aspx?ArticleID=6204
Intel Virtualization Technology (VT) Explained
http://www.businessweek.com/the_thread/techbeat/archives/2005/12/vmware_goes_mai.html
VMware goes mainstream
http://www.computerworld.com.au/index.php/id;351307096;fp;16;fpid;0
VMware releases Player, teams up with Mozilla
http://www.computerworld.com/hardwaretopics/storage/story/0,10801,106856,00.html
Council moves on virtualization, IP telephony
http://www.computingchips.com/new/viewArticle.php?article_id=4076
VMware Delivers VMware Player
http://www.crn.com/sections/breakingnews/breakingnews.jhtml?articleId=175001992
Open Source, Software | SWsoft Backs OpenVZ.org Open-Source Virtualization
http://www.crn.com/showArticle.jhtml?articleID=175001992
Open Source, Software | SWsoft Backs OpenVZ.org Open-Source Virtualization
http://www.d-silence.com/story.php?headline_id=22120&comment=1
VMware Delivers VMware Player
http://www.dqchannels.com/content/reselleralert/105120916.asp
Intel debuts virtualization support for desktop chip
http://www.hardwarezone.com/news/view.php?id=3321&cid=11
http://www.informationweek.com/news/showArticle.jhtml?articleID=175001755
Virtual-Machine Player | VMware Partners With Mozilla On Virtual-Machine Player
http://www.infoworld.com/article/05/12/12/HNvmwareplayer_1.html
VMware releases Player, teams up with Mozilla
http://www.internetnews.com/ent-news/article.php/3570381
Free VMware Player Ready to Go
http://www.linuxelectrons.com/article.php/20051214114500976
OpenVZ Project Introduces Website to Support OS Virtualization Technology
http://www.linuxpr.com/releases/8325.html
Linux PR: OpenVZ Project Introduces Website to Support Operating System Virtualization Technology for Open Source Community
http://www.linuxworld.com.au/index.php/id;351307096;fp;2;fpid;1
VMware releases Player, teams up with Mozilla
http://www.newsfactor.com/news/Open-Source-Firm-Challenges-VMware/story.xhtml?story_id=0330012RR2N6
Open-Source Firm Challenges VMware
http://www.newsfactor.com/news/Open-Source-Firm-Challenges-VMware/story.xhtml?story_id=1000039HJUX4
Open-Source Firm Challenges VMware
http://www.osnews.com/story.php?news_id=12928
What Is Virtualization
http://www.osnews.com/story.php?news_id=12952
VMware Player 1.0 released
http://www.osnews.com/story.php?news_id=12977
Xen Virtualization Quickly Becoming Open Source 'Killer App'
http://www.prnewswire.com/cgi-bin/stories.pl?ACCT=104&STORY=/www/story/12-12-2005/0004232349&EDATE=
VMware Delivers VMware Player
http://www.sci-tech-today.com/news/Open-Source-Firm-Challenges-VMware/story.xhtml?story_id=033003ON3G9U
Open-Source Firm Challenges VMware
http://www.techweb.com/wire/software/175001681

Mainframe Applications and Records Keeping?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe Applications and Records Keeping?
Newsgroups: bit.listserv.ibm-main
Date: 19 Dec 2005 09:05:10 -0800
Tom Schmidt wrote:
One thing for you to consider, R.S., is that Poland is approximately the size of Wisconsin. Our usage extends beyond our borders - worldwide, actually. Because of the reach across many timezones we are more likely to see substantial variability in usage from Linux instance to Linux instance. If you only work within one timezone you might not find the same opportunities.

this is similar to the vm operational data from the late 60s and early 70s. there were lots of share presentations about peak usage spikes ... typically between 10-11 in the morning and 2-3 in the afternoon.
http://www.garlic.com/~lynn/submain.html#timeshare

the internal HONE system saw similar characteristics in the early to mid 70s ... i.e. HONE was vm platform providing sales, marketing, and field support (eventually world-wide with datacenters all over the world; by the mid-70s, mainframes were getting so complex, that orders couldn't even be manually created but had to be run thru hone applications).
http://www.garlic.com/~lynn/subtopic.html#hone

some of the vm commercial time-sharing services had acquired international markets by the mid 70s and so the peak periods started to have a wider spread. the international clients by the mid-70s also started to create 7x24 demand ... with normal mainframe PM/service becoming a real hassle. by the mid-70s some of the commercial time-sharing services had made extensive availability enhancements to vm ... lots of clustering support, including being able to migrate live workload/sessions off a processor complex that was scheduled for service/pm downtime.

in the late 70s, the US hone operations consolidated their vm datacenters in northern cal. with several vm cluster operation enhancements (possibly the largest single-system operation in the world at the time). the consolidated US hone operation then started to see the morning and afternoon peak usage "rolling" across the US time zones ... the eastern timezone morning peak started at 7am at the cal. datacenter, and the morning peak usage rolled across the time zones until the end of the 10-11 pacific morning peak was dropping off ... just in time for the eastern timezone afternoon peak to pick up.

in the very early 80s, the issue of the effect of local natural disasters on availability resulted in the US HONE complex being replicated ... first with one in Dallas and then with one in Boulder (eventually with load-balancing and fall-over between the centers).

Sometime in the early 80s, I believe Jim Gray published a paper showing that hardware failures were no longer the major contributing factor in system availability. by that time, Jim had already left research and the system/r project (the original relational/sql implementation, done on a vm platform)
http://www.garlic.com/~lynn/submain.html#systemr

and had moved on to tandem.

... for some drift ... one might claim that my work on supporting hone and their clustering and availability requirements ... and my wife having served a stint in pok in charge of loosely-coupled architecture and having created peer-coupled shared data architecture
http://www.garlic.com/~lynn/submain.html#shareddata

contributed heavily to what we did later with ha/cmp product
http://www.garlic.com/~lynn/subtopic.html#hacmp

.. as well as having done some of the work on system/r ... helped with doing global lock manager and scale-up work for ha/cmp ... minor ref
http://www.garlic.com/~lynn/95.html#13

Mainframe Applications and Records Keeping?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe Applications and Records Keeping?
Newsgroups: bit.listserv.ibm-main
Date: Mon, 19 Dec 2005 13:54:49 -0700
Anne & Lynn Wheeler wrote:
the internal HONE system saw similar characteristics in the early to mid 70s ... i.e. HONE was vm platform providing sales, marketing, and field support (eventually world-wide with datacenters all over the world; by the mid-70s, mainframes were getting so complex, that orders couldn't even be manually created but had to be run thru hone applications).
http://www.garlic.com/~lynn/subtopic.html#hone


ref:
http://www.garlic.com/~lynn/2005u.html#38 Mainframe Applications and Records Keeping?

one of the severe resource drains on the hone effort during this period was that the company was telling the customers that mvs was capable of doing everything for everybody and going to extremes to obfuscate that internally significant portions of the company ran on vm.

hone was part of the data processing division (sales & marketing) and periodically some branch office manager would get promoted to executive position heading up hone. at some point the new executive (who had been thoroughly indoctrinated in the branch office) would find out that hone was mostly vm ... and decide that they would make a name for themselves by converting all of hone from a vm-base to a mvs-base.

the hone technical people would then be instructed to drop what they were doing and convert everything to a mvs-base. usually after 6-9 months it would become evident to everybody that it was impossible, and all references to the activity even being attempted would be destroyed (it was better to pretend it was never attempted ... than to attempt it and fail).

after a couple years, there would be new promotions and the mvs-conversion cycle would repeat. any references to prior attempts would be discounted by making statements of the ilk: mvs has significantly matured since the last attempt ... or some such thing.

POWER6 on zSeries?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: POWER6 on zSeries?
Newsgroups: bit.listserv.ibm-main
Date: 19 Dec 2005 15:11:01 -0800
dkent@ibm-main.lst (Dean Kent) writes:
Hopefully not far off topic, but this recently published article indicates that IBM may merge i, p and z series processors using POWER6. At one time I recall seeing an official IBM statement saying that this would never happen because of the unique requirements for (at that time) S/390 Architecture. Any thoughts on whether this is pure speculation, or if it is a real possibilty due to economics, advances in technology or perhaps a desire to make zSeries less costly and broaden the market? Could this result in affordable desktop mainframes (so we don't need to worry about whether IBM will ever allow zOS to run on Hercules)? ;-)

http://www.realworldtech.com/page.cfm?ArticleID=RWT121905001634


can you say fort knox?

in 1980 there was an effort afoot to convert many of the internal microprocessors and controllers to 801. the follow-on to the 4341 was going to be an 801. the issue at the time was that there were a huge number of different microprocessors, all with different architectures and programming. the convergence to 801 was to eliminate a lot of that variety. the issue for the low and mid-range 370s at the time was that they implemented 370 in microcode with something like an avg. of 10:1 microcode instructions per 370 instruction (not all that different from the current generation of software implementing mainframe architecture on intel platforms).

this was somewhat the case with the ecps boost for vm on the 148 & 4341 ... moving high-use kernel paths into microcode ... with a resulting 10:1 performance boost (and much greater for some paths involving instruction simulation and not having to save/restore registers as part of the context switch between user mode and kernel mode) ... a couple more detailed discussions of ecps:
http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
http://www.garlic.com/~lynn/94.html#27 370 ECPS VM microcode assist
http://www.garlic.com/~lynn/94.html#28 370 ECPS VM microcode assist

at least for the 4341, the project was aborted. the issue was that chip silicon was progressing to the point where it was possible to directly implement 370 architecture in hardware (as opposed to having a much simpler hardware architecture that in turn was programmed to implement the more complex 370 architecture). as a result the 4341 follow-on became a direct 370 silicon implementation rather than an 801 chip with microprogramming implementing 370.

note that 370 instructions running close to hardware speed was already coming close to happening with high-end machines like 3033.

another aspect of this presented itself with amdahl's initial hypervisor implementation. one of the issues for the high-end machines was that they tended to be horizontal microcode, which was significantly harder to program than vertical microcode and/or 370. even tho 370 instructions were coming close to running at hardware speed, the high-end machines were still (horizontal) microcode and had a reputation for being extremely hard to program.

i gave a presentation at baybunch about the ecps experience ... that while there was a 10:1 performance gain moving kernel code directly into microcode (on 148 & 4341) ... there was an even larger performance pickup from not having to context switch into the vm kernel (saving/restoring registers, etc).
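the back-of-envelope arithmetic behind that kind of claim follows amdahl's law (the 40% figure below is purely illustrative, not a measurement from the talk):

```python
# overall speedup when a fraction f of total time runs s times faster
# (here: kernel paths moved into microcode with the ~10:1 ecps gain)
def overall_speedup(f: float, s: float = 10.0) -> float:
    return 1.0 / ((1.0 - f) + f / s)

# e.g., if the migrated kernel paths were 40% of total time (illustrative):
print(round(overall_speedup(0.40), 2))  # -> 1.56
```

avoiding the context switch on top of that effectively raises s (and shrinks the non-accelerated fraction), which is why the total pickup was larger than the 10:1 path gain alone would suggest.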

amdahl had previously created "macro" code for their high-end machines ... basically part of the hardware context ... but using a subset of 370 programming (rather than the much more difficult to program horizontal microcode). several people then implemented a subset of virtual machine function as hypervisor support ... with the implementation being done using "macrocode".

ibm eventually responded with pr/sm (on 3090) and then expanded it to multiple LPARs (logical partitions). however, this was a much more difficult undertaking ... requiring implementation done directly in the 3090 microcode.

and as previously noted in some recent posts in other threads in this n.g. ... virtualization is the new, new thing this season.
http://www.garlic.com/~lynn/2005u.html#36 Mainframe Applications and Records Keeping?
http://www.garlic.com/~lynn/2005u.html#37 Mainframe Applications and Records Keeping?

an 801 from that period that did survive was romp ... which was a research/office products effort to build a displaywriter follow-on. when that was killed, it was decided to retarget the machine to the unix workstation market ... and the company that had done the pc/ix port for the ibm/pc was hired to do a similar port for romp, which was called aix. the follow-on to romp was rios, or power.

misc. past 801, romp, rios, power, etc postings
http://www.garlic.com/~lynn/subtopic.html#801

Mainframe Applications and Records Keeping?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe Applications and Records Keeping?
Newsgroups: bit.listserv.ibm-main
Date: 19 Dec 2005 17:10:24 -0800
as400 wrote:
I know that the City of Los Angeles is upgrading their Mainframes (I think one of them) to a Sun system running Solaris...Whats very suprising to me, Is that I did not know that a Sun Solaris maingrame system would be running DB2....

I thought DB2 was only made for a IBM Mainframe unless these report statistics are wrong...I got these reports...Take a look...Not much details of course..But at least I can get an idea of what their running...

http://www.lacity.org/ita/itain2b.htm

As for the DMV databases...Can anyone know what their running?? Because I always wonder how a person would backup EVERY AND EVERY EVERY information about us (your, ours, mine and everyones) would be backed up....And I feel sorry for the person who does it...

I know that the City of Los Angeles has a IBM Robotic Tape Backup System...which costs the City MILLIONS of dollars to dish out by the way...

Take a look at the link I posted above on this post...Seems very interesting..


there was the tech. transfer of system/r (original relational/sql dbms done on vm) from research to endicott for (vm) sql/ds
http://www.garlic.com/~lynn/submain.html#systemr

one of the people in this meeting
http://www.garlic.com/~lynn/95.html#13

commented that they had handled much of the tech. transfer from endicott back to stl for db2.

can you say shelby? ... the toronto lab. was given the job of writing a relational database system in C for os2. it was then ported to aix 3.2 (crosswinds?). eventually ported to other unixes ... and called db2.

i vaguely remember toronto lab. trying to recruit people from univ. of toronto that understood relational database systems.

previous post mentioning shelby
http://www.garlic.com/~lynn/2005b.html#1 Foreign key in Oracle Sql

random references from around the web, courtesy of quick use of a search engine:
DB2 Universal Database for Linux, UNIX and Windows
http://www-306.ibm.com/software/data/db2/udb/support/downloadv7.html
Is DB2 Right for You?
http://www.windowsitpro.com/Windows/Article/ArticleID/2864/2864.html
A quick overview of the different flavors of DB2, what they're good for, and where you'll find resources to help you learn and leverage.
http://www.devx.com/ibm/Article/17647


..
DB2 Is the Next Logical eServer Convergence
http://www.itjungle.com/tfh/tfh020705-story01.html


from above:
IBM has three distinct relational databases sold under the umbrella name of DB2: one for OS/400 servers, one for mainframe servers, and another for Windows, Unix, and Linux servers. Having watched the eServer hardware platform consolidation, which could result in the mainframe merging into the Power server line in coming years, I'm wondering if DB2 is next. I think it's time for IBM to revisit its relational database past, and dig deeper into the first such software it ever sold: the integrated database in the System/38 and the AS/400.

.. snip ...

so why is it called "DB2" ... possibly because eagle was going to be called "DB1" (and was canceled before it ever shipped) ... some discussion of eagle, system/r, and db2 at the system/r reunion in 1995
http://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-DB2.html

in the above ... a name of a person mentioned in this post
http://www.garlic.com/~lynn/95.html#13

also shows up (with regard to db2). the reunion reference also mentions a lot of the people hired away to tandem.

the above reunion article also mentions how expensive 3270 terminals were. they were perceived to be expensive capital equipment (and internally required vp executive approval to order). one of the things that helped break that impression was that we did a 3-year, fully-depreciated analysis for 3270 terminals, which came out to be about the same as the monthly cost of a business phone (which went on every employee's desk as a matter of course ... and you actually had 3270 terminals doing service for 10 years or more).
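the depreciation arithmetic is simple; the actual dollar figures aren't in the post, so the numbers below are illustrative assumptions only:

```python
# hypothetical figures -- the real prices aren't given in the post
TERMINAL_COST = 2500.00  # assumed 3270 purchase price in USD (illustrative)
MONTHS = 36              # 3-year straight-line depreciation, fully depreciated

monthly = TERMINAL_COST / MONTHS
print(round(monthly, 2))  # -> 69.44  (same order as a monthly business-phone bill)
```

the point of the analysis was framing: spread over 3 years, a terminal stops looking like capital equipment and starts looking like an ordinary monthly desk expense.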

it also mentions Jim Gray's MIPENVY article, random past posts on the subject:
http://www.garlic.com/~lynn/2001g.html#7 New IBM history book out
http://www.garlic.com/~lynn/2002k.html#39 Vnet : Unbelievable
http://www.garlic.com/~lynn/2002o.html#73 They Got Mail: Not-So-Fond Farewells
http://www.garlic.com/~lynn/2002o.html#74 They Got Mail: Not-So-Fond Farewells
http://www.garlic.com/~lynn/2002o.html#75 They Got Mail: Not-So-Fond Farewells
http://www.garlic.com/~lynn/2004c.html#15 If there had been no MS-DOS
http://www.garlic.com/~lynn/2004l.html#28 Shipwrecks
http://www.garlic.com/~lynn/2004l.html#31 Shipwrecks
http://www.garlic.com/~lynn/2005c.html#50 [Lit.] Buffer overruns

note this earlier eagle reference ... is different than the project eagle mention here for 1996 announce:
http://www.dbmsmag.com/9604d13.html

Mainframe Applications and Records Keeping?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe Applications and Records Keeping?
Date: Mon, 19 Dec 2005 21:59:30 -0700
Newsgroups: bit.listserv.ibm-main
as400 wrote:
And regarding the DMV's database..Here's some information..It's only a diagram..Nothing detailed..
http://www.byte.com/art/9602/img/508003c2.htm

Oh...I gave you the City of Los Angeles's database tier reports...You have to click on REPORTS and then IT PREFERRED STANDARDS...And then you can see what they're using for their Mainframes..It's interesting.


here are pieces of a past thread on the subject ... this first ref. actually stays on topic (possibly pull the rest from google's usenet archive)
http://www.garlic.com/~lynn/2001.html#62 California DMV

these then drift some
http://www.garlic.com/~lynn/2001.html#65 California DMV
http://www.garlic.com/~lynn/2001.html#68 California DMV
http://www.garlic.com/~lynn/2001.html#72 California DMV

this talks about a number of projects that ran into problems (including cal. dmv project)
http://web.mit.edu/afs/athena.mit.edu/user/other/a/Saltzer/www/publications/Saltzerthumbnails.pdf

more about IT projects that run into problems
http://www.ctg.albany.edu/publications/guides/smartit2?chapter=3&PrintVersion=2

mention in risk digest
http://catless.ncl.ac.uk/Risks/10.47.html#subj1
http://catless.ncl.ac.uk/Risks/15.80.html#subj1
http://catless.ncl.ac.uk/Risks/16.07.html#subj10

a couple posts in the following thread that drifted into the topic of development process, development standards, etc (totally coincidental that it starts with questions about tandem)
http://www.garlic.com/~lynn/2005u.html#5 What ever happened to Tandem and NonStop OS ?
http://www.garlic.com/~lynn/2005u.html#17 What ever happened to Tandem and NonStop OS ?

POWER6 on zSeries?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: POWER6 on zSeries?
Date: Tue, 20 Dec 2005 08:54:36 -0700
Newsgroups: bit.listserv.ibm-main
Shmuel Metz , Seymour J. wrote:
Wasn't that already true on the older 360/85?

i did some work with a 165/168 processor engineer ... he said that one of the things for the 165->168 transition was that they reduced the avg. machine cycles per instruction from 2.1 to 1.6 (besides the 168 having faster memory and misc other things).

the 3033 started out as the 168 wiring/logic diagram mapped to chip technology that was about 20 percent faster and had about ten times the circuits per chip (though originally the additional circuits went unused). late in the process, there was work on optimizing the 3033 logic to better utilize the additional circuits (increasing on-chip operation) ... which eventually resulted in the 3033 shipping to customers about 50 percent faster than the 168.

there was some mvs kernel performance assist microcode done for the 3033 ... somewhat analogous to vm ecps originally done on the 148 ... but it was difficult to actually demonstrate much performance improvement because 370 instructions were already running at or close to hardware speed.

the sie instruction was introduced for 370/xa with the 3081 ... but still required the vm kernel. there was a vm assist introduced on the 158 & 168 ... which still used normal 370 instructions to switch between kernel and virtual machine mode. vm ran the virtual machine in problem mode and all supervisor state instructions interrupted back into the kernel. the vm assist had a pointer loaded into control register 6 (otherwise unused). now some supervisor instructions ... when running in problem mode ... would first check cr6, and instead of interrupting into the kernel would execute according to virtual machine architecture rules (rather than real machine architecture rules). ecps, done for the 148 ... besides moving pieces of the kernel into native microcode ... also added additional supervisor instructions that would execute under virtual machine architecture rules.

the sie instruction, introduced for 370/xa on the 3081 ... replaced the whole mechanism for switching from the vm kernel to virtual machine mode (and back) as well as providing indications to the hardware that the machine was running in virtual machine mode. essentially, it created three machine modes: supervisor state, problem state, and virtual machine supervisor state. note, however, the sie instruction still required the vm kernel to operate.
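a minimal sketch of the dispatch distinction described above; the names and structure are invented for illustration and are not the actual hardware/microcode interfaces:

```python
# illustrative pre-SIE vm-assist dispatch: a supervisor-state
# instruction encountered in problem mode either interrupts into the
# vm kernel for simulation or, if the cr6 assist pointer is loaded,
# executes directly under virtual machine architecture rules.
def dispatch(problem_mode, cr6_assist_loaded):
    if not problem_mode:
        # real supervisor state: real machine architecture rules
        return "execute: real machine rules"
    if cr6_assist_loaded:
        # assist active: no round trip through the vm kernel
        return "execute: virtual machine rules"
    # no assist: privileged-operation interrupt, kernel simulates
    return "interrupt: vm kernel simulates"

print(dispatch(problem_mode=True, cr6_assist_loaded=True))
```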

amdahl hypervisor effectively ran operating system somewhat akin to current lpar ... w/o requiring vm kernel to handle various functions ... effectively enuf of the vm kernel functions had been moved into the hardware (in this case, macrocode) that a vm kernel wasn't required to run in hypervisor mode.

there are two issues here:

1) straight-line replacement of kernel instructions with kernel microcode ... this provides a performance boost if the native machine speed is much faster than the 370 speed (i.e. collapsing a high number of microcode instructions per 370 instruction into a single microcode instruction). this sees much less benefit if 370 instructions are already running at nearly hardware speed.

2) elimination of state change, register saving/restoring, etc ... switching back and forth between kernel mode and virtual machine mode for supervisor instruction simulation. this goes away if there is a 3rd mode where supervisor instructions are directly executed according to virtual machine architecture supervisor state rules (rather than real machine supervisor state rules). the amdahl macrocode mode made it easier to write the programming for hardware virtual machine mode ... implementing hypervisor. there are significant savings in the operations needing to be performed ... even on machines where 370 instructions run at hardware speed.

misc. past postings mentioning macrocode
http://www.garlic.com/~lynn/2002p.html#44 Linux paging
http://www.garlic.com/~lynn/2002p.html#48 Linux paging
http://www.garlic.com/~lynn/2003.html#9 Mainframe System Programmer/Administrator market demand?
http://www.garlic.com/~lynn/2003.html#56 Wild hardware idea
http://www.garlic.com/~lynn/2005d.html#59 Misuse of word "microcode"
http://www.garlic.com/~lynn/2005d.html#60 Misuse of word "microcode"
http://www.garlic.com/~lynn/2005h.html#24 Description of a new old-fashioned programming language
http://www.garlic.com/~lynn/2005p.html#14 Multicores
http://www.garlic.com/~lynn/2005p.html#29 Documentation for the New Instructions for the z9 Processor
http://www.garlic.com/~lynn/2005u.html#40 POWER6 on zSeries?

misc. past posts mentioning 165, 168, etc machine cycle
http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
http://www.garlic.com/~lynn/95.html#3 What is an IBM 137/148 ???
http://www.garlic.com/~lynn/96.html#23 Old IBM's
http://www.garlic.com/~lynn/99.html#116 IBM S/360 microcode (was Re: CPU taxonomy (misunderstood RISC))
http://www.garlic.com/~lynn/2000d.html#61 "all-out" vs less aggressive designs (was: Re: 36 to 32 bit transition)
http://www.garlic.com/~lynn/2000d.html#82 "all-out" vs less aggressive designs (was: Re: 36 to 32 bit transition)
http://www.garlic.com/~lynn/2000e.html#54 VLIW at IBM Research
http://www.garlic.com/~lynn/2000g.html#7 360/370 instruction cycle time
http://www.garlic.com/~lynn/2001c.html#1 Z/90, S/390, 370/ESA (slightly off topic)
http://www.garlic.com/~lynn/2001h.html#69 Very CISC Instuctions (Was: why the machine word size ...)
http://www.garlic.com/~lynn/2002.html#48 Microcode?
http://www.garlic.com/~lynn/2002e.html#18 Opinion on smartcard security requested
http://www.garlic.com/~lynn/2002e.html#36 Crypting with Fingerprints ?
http://www.garlic.com/~lynn/2002l.html#2 What is microcode?
http://www.garlic.com/~lynn/2002m.html#68 Tweaking old computers?
http://www.garlic.com/~lynn/2002m.html#75 New Book
http://www.garlic.com/~lynn/2002n.html#23 Tweaking old computers?
http://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
http://www.garlic.com/~lynn/2002p.html#44 Linux paging
http://www.garlic.com/~lynn/2003f.html#56 ECPS:VM DISPx instructions
http://www.garlic.com/~lynn/2003m.html#37 S/360 undocumented instructions?
http://www.garlic.com/~lynn/2005e.html#59 System/360; Hardwired vs. Microcoded
http://www.garlic.com/~lynn/2005h.html#24 Description of a new old-fashioned programming language
http://www.garlic.com/~lynn/2005p.html#29 Documentation for the New Instructions for the z9 Processor

POWER6 on zSeries?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: POWER6 on zSeries?
Newsgroups: bit.listserv.ibm-main
Date: 20 Dec 2005 09:19:53 -0800
Anne & Lynn Wheeler wrote:
there was some mvs kernel performance assist microcode done for the 3033 ... somewhat analogous to vm ecps originally done on the 148 ... but it was difficult to actually demonstrate much performance improvement because 370 instructions were already running at or close to hardware speed.

the 3033 had several issues ... a cluster of six 4341s was about the price of a 3033, with a higher aggregate mip rate, an aggregate of 96mbytes of real memory, and an aggregate of 36 channels. the 16mbyte real storage constraint on 370 was possibly one of the reasons for the 32mbyte real storage hack on the 3033. it was still 370 16mbyte addressing ... but the 370 page table entry had 16bits ... 12bits to address up to 4096 4k pages (16mbytes), an invalid bit, a software use bit, and two undefined bits. the 3033 hack scavenged one of the two undefined bits to provide 13bits, addressing up to 8192 4k pages (32mbytes real). this allowed virtual memory pages to reside above the 16mbyte line ... even tho there was still only 24bit (virtual and real) addressing. there was then a kernel hack to move a virtual page from above the 16mbyte line to below the line when operations needed to be performed on it.
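the pte arithmetic above can be sketched as follows; the field positions used here are illustrative rather than the exact 370/3033 pte layout:

```python
# 370 pte: 12 bits of page frame number -> 4096 4k pages = 16mbytes.
# 3033 hack: scavenge one previously-undefined bit for a 13th pfn bit
# -> 8192 4k pages = 32mbytes of real storage, still 24-bit addressing.
PAGE = 4096  # 4k page size

def real_address(pfn_12bits, scavenged_bit):
    pfn = (scavenged_bit << 12) | pfn_12bits   # 13-bit frame number
    return pfn * PAGE

print(hex(real_address(0xFFF, 0)))  # last frame below 16mbyte: 0xfff000
print(hex(real_address(0xFFF, 1)))  # last frame below 32mbyte: 0x1fff000
```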

part of this was that, starting in the mid-70s ... typical systems were shifting from processor/memory constrained to i/o constrained. as a result, real storage was being used more and more to mitigate the i/o constraint.

part of the problem was the trade-off decision for os/360 to use multi-track search in the mid-60s. the issue was that they could trade off the (then) excess i/o capacity against having indexes resident in constrained real memory.
http://www.garlic.com/~lynn/submain.html#dasd

by the late 70s there were numerous situations where multi-track search was severely blowing what limited i/o capacity there was. a big contrast was that vm, cms, the cms filesystem, etc had always used logical fba ... even when dealing with ckd dasd. sjr/bldg28 had especially extreme situations highlighting the problem ... sjr/bldg28 was where the original relational/sql was developed on vm
http://www.garlic.com/~lynn/submain.html#systemr

for a time after the sjr 370/195/mvt system had been replaced ... there was an mvs/168 running in a shared dasd configuration with a vm/158. because of the severe effect mvs multi-track search had on normal cms performance ... there was a standing rule that mvs packs would never be mounted on vm drives/controllers (multi-track search busied the channel, controller, and drive). one day, an operator mistakenly mounted an mvs 3330 pack on a vm drive/controller. within five minutes, cms users were phoning the computer room complaining about extremely degraded response time (note that TSO users never learned what good response was ... because TSO never operated w/o the underlying mvs operating system). there was a demand that the mvs pack be immediately moved to a non-vm drive. the mvs operators declined. so the vm group mounted a specially tuned vs1 system pack (vs1 that ran under vm) on an "mvs" drive and started up a specially constructed multi-track search application. this had the effect of significantly slowing down the mvs system (vs1 running under vm on the 158 vis-a-vis mvs running on the 168). the mvs operators immediately moved the mvs 3330 pack off the vm drive/controller to an mvs drive and promised never to make that mistake again.
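back-of-envelope arithmetic for why multi-track search was so painful; the 3330 figures (3600 rpm, 19 tracks/cylinder) are from memory and should be treated as approximate:

```python
# a ckd multi-track search keeps channel, controller and drive busy
# for roughly one full rotation per track examined.
RPM = 3600                 # 3330 spindle speed (approximate)
TRACKS_PER_CYL = 19        # 3330 tracks per cylinder (approximate)

rev_ms = 60_000 / RPM                       # ~16.7ms per revolution
full_cyl_search_ms = rev_ms * TRACKS_PER_CYL
print(round(full_cyl_search_ms, 1))         # ~316.7ms of busy channel per search
```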

a few past posts mentioning the 4341/3033 rivalry issue (there was even an internal political incident where the 3033 group attempted to have the allocation of a critical 4341 component cut in half).
http://www.garlic.com/~lynn/94.html#9 talk to your I/O cache
http://www.garlic.com/~lynn/2001m.html#15 departmental servers
http://www.garlic.com/~lynn/2004n.html#14 360 longevity, was RISCs too close to hardware?
http://www.garlic.com/~lynn/2004n.html#50 Integer types for 128-bit addressing
http://www.garlic.com/~lynn/2004o.html#1 Integer types for 128-bit addressing
http://www.garlic.com/~lynn/2004o.html#57 Integer types for 128-bit addressing
http://www.garlic.com/~lynn/2005p.html#1 Intel engineer discusses their dual-core design
http://www.garlic.com/~lynn/2005p.html#19 address space
http://www.garlic.com/~lynn/2005q.html#30 HASP/ASP JES/JES2/JES3

at one point, i produced a paper claiming that the relative system performance of disk had declined by a factor of 10 over a period of 10-15 years (contributing to the use of excess real storage as mitigation for the i/o constraint). the disk division got upset and assigned their performance group to refute the statement. after several weeks, they came back and said that i had slightly understated the issue. this eventually turned into a user group presentation by the disk division about how to optimize disk performance. the issue was that while disk performance had improved over the period ... it had improved less than the performance of other system components ... leading to a decline in the relative system performance of disk.

misc. past posts on the subject:
http://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
http://www.garlic.com/~lynn/94.html#43 Bloat, elegance, simplicity and other irrelevant concepts
http://www.garlic.com/~lynn/94.html#55 How Do the Old Mainframes Compare to Today's Micros?
http://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
http://www.garlic.com/~lynn/98.html#46 The god old days(???)
http://www.garlic.com/~lynn/99.html#4 IBM S/360
http://www.garlic.com/~lynn/2001d.html#66 Pentium 4 Prefetch engine?
http://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
http://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
http://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
http://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
http://www.garlic.com/~lynn/2002.html#5 index searching
http://www.garlic.com/~lynn/2002b.html#11 Microcode? (& index searching)
http://www.garlic.com/~lynn/2002b.html#20 index searching
http://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
http://www.garlic.com/~lynn/2002e.html#9 What are some impressive page rates?
http://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad

IBM's POWER6

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's POWER6
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 21 Dec 2005 08:29:32 -0700
"Stephen Fuld" writes:
One comment. You talk about using technology similar to Transmeta's JIT mechanism to translate Z series code to Power code. But IBM has some freedom that apparently Transmeta, with its restriction of a vliw machine as the target, didn't seem to have. Specifically, it could add logic similar to what Intel does, and on the fly "decode"/ translate the Z series code to Power code, perhaps with the addition of several otherwise unused "special" power instructions to aid performance. Didn't one of the early AMD pentium compatible chips actually translate into 29K instructions? Or, they could do something like what ARM is doing with their jazelle technology to almost directly execute java byte code. It directly executes some instructions by translating them "on-the-fly" into ARM instructions (for the simple ones), and has some kind of "escape" mechanism to go to a routine for interpretive execution of the complex ones. Of course, since IBM controls the compilers, it could have a version that "knew" what instructions were executed directly and preferentially generate code for them for higher performance on the new systems (again, something Transmeta couldn't do.).

Do either of these make sense as a potential for IBM? I would guess that if they did, it would produce a higher performance product than they would get with a software JIT system.


note, there was a group looking at this during the fort knox time-frame ... 1980. there were a huge number of microprocessors inside the company, used for controllers, devices, low & mid-range 370s, s38/as400, etc. the proposal was to move all of these to 801.

low & mid-range 370s were microprocessors of various kinds with 370 implemented as microcode. these machines avg. about 10 microcode instructions per 370 instruction. we had taken advantage of this for ecps ... which migrated 6k of high-use kernel code into microcode. the migrated code saw about a 10:1 speedup (originally for the 148 and then on the 4341). i helped with an analysis that killed the 801 use for the 4341 followon. the alternative was that chips were advancing to the point where you could get much of the 370 instruction set directly in silicon ... which was faster than using 801 to emulate 370.
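the ecps trade-off above is just amdahl's law; the time fraction used below is an assumed illustration, not a measured figure:

```python
# overall speedup when a fraction of execution time is migrated into
# microcode that runs ~10x faster (the ecps 10:1 figure above).
def overall_speedup(time_fraction, path_speedup):
    return 1 / ((1 - time_fraction) + time_fraction / path_speedup)

# if (hypothetically) half of all execution time were in migrated paths:
print(round(overall_speedup(0.5, 10), 2))   # ~1.82x overall
```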

at the time, there was both a jit project for 370->801 ... and sort of a more advanced version of ecps ... where portions of 370 code would be recompiled to 801. i got involved because in the early 70s, i had written a pli program that analyzed 370 assembler listings in various ways, including generating a high-level language abstraction of what the assembler code was doing (also detailed control flow, register usage, etc).

the project using 801 for the 4341 followon was canceled ... and the 4381 was much more of a native silicon implementation.

an 801 project that did survive was ROMP ... a joint research / office products project to use 801/romp for a displaywriter followon. when that got killed, it was decided to retarget the machine to the unix workstation market ... hiring the company that had done the pc/ix port (for the ibm/pc) to do one that came to be called aix. the romp followon was rios/power.

misc. 801, romp, rios, fort knox, etc collected posts
http://www.garlic.com/~lynn/subtopic.html#801

recent postings in similar thread in mainframe n.g.
http://www.garlic.com/~lynn/2005u.html#40 POWER6 on zSeries?
http://www.garlic.com/~lynn/2005u.html#43 POWER6 on zSeries?
http://www.garlic.com/~lynn/2005u.html#44 POWER6 on zSeries?

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Channel Distances

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Channel Distances
Newsgroups: alt.folklore.computers
Date: Wed, 21 Dec 2005 11:06:40 -0700
David Dyer-Bennet wrote:
NSC being Network Systems Corporation. Acquired by Storagetek in the 90s. Storagetek was acquired by Sun last year.

NSC got killed by standards-based networking, essentially. Their original products were doing cross-platform networking before there were standards (or at least implementations) in place to do it, but after awhile the real world came along and swept them away.

I turned down the chance to work for them in HyperBus development (proprietary competition with Ethernet, after Ethernet was well established as a standard), did some contract work for them, and then did end up working for them as they tried to become a router company.


i still have misc. nsc manuals (including hyperbus) somewhere in the basement. the nsc a720 adapters were specifically designed for a project my wife was running ... this was after she left pok, having done a stint as head of loosely-coupled architecture
http://www.garlic.com/~lynn/submain.html#shareddata

which we later used as part of high-speed data transport project
http://www.garlic.com/~lynn/subnetwork.html#hsdt

hsdt eventually collected some number of nsc adapters. sometime later, we donated a number of them to the UT balcones supercomputing center.

as mentioned previously, i had done the rfc1044 implementation for the standard mainframe tcp/ip product. the standard product would just about consume a 3090 processor getting 44kbytes/sec thruput. in some tuning done at cray research, we hit 1mbyte/sec sustained between a cray and a 4341-clone (the 4341 channel interface media speed) using only a modest amount of the 4341 processor.
http://www.garlic.com/~lynn/subnetwork.html#1044

he is a little modest about it ... my characterization is that one of the nsc people moved to the west coast and invented what was later to be called vpn for his own use as a link back to nsc hdqtrs. he introduced it at the gateway working group meeting at the fall '94 san jose IETF meeting.

in dec, after that meeting, one of the other router vendors announced product support for something that was supposedly similar which involved external hardware link encryptor boxes.

i've commented frequently that both vpn and ssl came on the scene because ipsec involved updating all the (kernel) ip-protocol stacks for end-to-end encryption. both vpn and ssl left the underlying ip-protocol stacks untouched. my view at the time was this upset some number of the ipsec crowd ... and they eventually came to grips by referring to vpn as light-weight ipsec (and others starting to refer to ipsec as heavy-weight ipsec).

past posting ref working with a small client/server startup company that wanted to do payment transactions on their server ... they had this technology called ssl ... that work is now sometimes referred to as e-commerce
http://www.garlic.com/~lynn/aadsm5.htm#asrn2
http://www.garlic.com/~lynn/aadsm5.htm#asrn3

The rise of the virtual machines

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: The rise of the virtual machines
Newsgroups: bit.listserv.vmesa-l,alt.folklore.computers
Date: Wed, 21 Dec 2005 11:38:35 -0700
the new, new, 41yr old thing
The rise of the virtual machines
http://searchdatacenter.techtarget.com/originalContent/0,289142,sid80_gci1153323,00.html

If there is one technology that took hold in the enterprise in 2005, it's virtualization. The software moved from test and development and into the data center faster than anyone imagined, including the experts. And while there are sure to be some bumps in the road in 2006 as the technology moves from awareness to adoption, you can bet most IT pros will be running virtualization in the data center before the end of next year.

... snip ...

melinda's vm history paper at
http://www.leeandmelindavarian.com/Melinda/

a couple footnotes (from above):
24 Creasy had decided to build CP-40 while riding on the MTA. ''I launched the effort between Xmas 1964 and year's end, after making the decision while on an MTA bus from Arlington to Cambridge. It was a Tuesday, I believe.'' (R.J. Creasy, private communication, 1989.)

25 R.J. Creasy, General Description of the Research Time-Sharing System with Special Emphasis on the Control Program, IBM Cambridge SR&D Center Research Time-Sharing Computer Memorandum 1, Cambridge, Mass., January 29, 1965. L.W. Comeau, The Philosophy and Logical Structure of the Control Program, IBM Cambridge SR&D Center Research Time-Sharing Computer Memorandum 2, Cambridge, Mass., April 15, 1965.


... snip ...

minor drift:
http://www.garlic.com/~lynn/2005o.html#4 Robert Creasy, RIP

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

POWER6 on zSeries?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: POWER6 on zSeries?
Newsgroups: alt.folklore.computers, bit.listserv.ibm-main
Date: Thu, 22 Dec 2005 09:16:51 -0700
Shmuel Metz , Seymour J. wrote:
I still have the manual.

ref:
http://www.garlic.com/~lynn/2005u.html#40 POWER6 on zSeries?
http://www.garlic.com/~lynn/2005u.html#43 POWER6 on zSeries?
http://www.garlic.com/~lynn/2005u.html#44 POWER6 on zSeries?

there is some indication that this (mvs performance assist on 3033) may have been the early driving factor behind amdahl's development of macrocode mode. the issue is that horizontal microcode programming (on the high-end machines) is significantly more complex than 370 programming. by this time (at least), 370 instructions (on the high-end machines) were running at direct hardware speed ... so it was difficult to demonstrate any of the ECPS-like speed-ups that we obtained from doing simple migration of 370 kernel pathlength into microcode on the low/mid-range 370s (which avg. ten vertical microcode instructions per 370 instruction ... and where direct 370->microcode conversion achieved a 10:1 speedup on the 148 and 4341).

the mvs "performance assist" for the 3033 appeared somewhat to be arbitrary changes to the architecture/features of the hardware (since it was difficult to demonstrate any actual performance improvement from simply changing 370 instructions into microcode instructions). if that was how the game was going to be played ... amdahl was going to need to get much more efficient at playing it and tracking architecture/feature changes. macrocode mode provided all the hardware feature appearance of programming in microcode but with the productivity (and elapsed development time) of programming in 370.

once that level of machine feature delivery efficiency was obtained ... it was relatively easy to do additional features like hypervisor support ... a restricted virtualization subset that didn't require the vm kernel. the response was pr/sm on the 3090, which eventually evolved into the current lpar capability.

for a little drift, comp.arch n.g. has had a similar discussion thread
http://www.garlic.com/~lynn/2005u.html#45 IBM's POWER6

and a few comments on a online article that appeared yesterday about the 41yr old, new, new thing
http://www.garlic.com/~lynn/2005u.html#47 The rise of the virtual machines

Channel Distances

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Channel Distances
Newsgroups: alt.folklore.computers, bit.listserv.ibm-main
Date: Thu, 22 Dec 2005 09:40:10 -0700
Peter wrote:
Quoting maximum packet rate isn't unreasonable. It's not just the physical carrier that's a limitation. There might not be enough CPU grunt to saturate the link with minimum-sized packets, for example.

ref:
http://www.garlic.com/~lynn/2005u.html#22 Channel Distances
http://www.garlic.com/~lynn/2005u.html#23 Channel Distances
http://www.garlic.com/~lynn/2005u.html#46 Channel Distances

but there is hardly a vendor out there that doesn't quote media rate ... and there can be an enormous number of reasons why sustained thruput would be less

the original vendor mainframe tcp/ip product achieved 44kbytes/sec and burned a full 3090 processor. i added rfc1044 support to the product
http://www.garlic.com/~lynn/subnetwork.html#1044

and in some tuning work at cray research, we got 1mbyte/sec sustained between a cray and a 4341-clone ... using only a modest amount of the 4341-clone processor. the 1mbyte/sec was the channel media speed between the 4341 channel and the NSC router channel interface (much lower than either the cray channel interface or the NSC-to-NSC box interface)

part of the difference was that the standard vendor box (8232) wasn't a tcp/ip router ... but a lan bridge. the mainframe tcp/ip code had to do the tcp/ip-protocol-to-mac translation. for the rfc1044 support, I just had to exchange tcp/ip packets with the NSC router. that and various other factors resulted in the rfc1044 support having a ratio of mbytes transferred to mips executed showing nearly three orders of magnitude improvement.
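the rough arithmetic behind the comparison; the cpu fractions are illustrative stand-ins, and since a 4341-clone had far fewer mips than a 3090, the mips-normalized ratio is larger still:

```python
# bytes moved per unit of cpu consumed: standard product vs rfc1044 path.
base_bytes_per_cpu = 44_000 / 1.0        # 44kbytes/sec eating ~a full 3090
tuned_bytes_per_cpu = 1_000_000 / 0.05   # 1mbyte/sec on an assumed 5% of a 4341
print(round(tuned_bytes_per_cpu / base_bytes_per_cpu))  # ~455x, before mips normalization
```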

i once worked on early lan crypto hardware that was supposed to sustain media speed with minimum sized packets and support switching keys on every packet ... now that got interesting.

Channel Distances

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Channel Distances
Newsgroups: alt.folklore.computers
Date: Thu, 22 Dec 2005 13:44:50 -0700
another trivial comparison was the special 4mbit t/r (16bit bus) card done for the pc/rt and the 16mbit t/r (32bit microchannel) card used by the rs/6000.

the 4mbit t/r card had been specially designed pc/rt card for maximum thruput.

the 16mbit t/r card used by the rs/6000 was the same as what was used for the ps2. the ps2 lan environment had a design point of SAA and the terminal emulation paradigm (trying to stick the client/server genie back into the bottle)
http://www.garlic.com/~lynn/subnetwork.html#emulation

it turned out that the pc/rt 4mbit t/r card had higher per card thruput than the 16mbit t/r card used by the rs/6000.

another comparison was that the new almaden research building had been wired with cat5 (for 16mbit t/r) ... however, when they went to deploy ... they found that 10mbit star-wired enet (over the same cat5) had higher per-card thruput, higher aggregate network thruput, and lower latency ... than running 16mbit t/r over the same wiring.

it was during this period that we came up with 3tier architecture, middle layer, etc
http://www.garlic.com/~lynn/subnetwork.html#3tier

and were out pitching it in customer executive presentations. it included lots of enet content ... and we were taking hits from both the t/r factions (because of the enet content) and the saa factions (because we were not only not terminal emulation paradigm ... but moving past client/server to 3tier).

misc. 801, romp, pc/rt, rios, power, rs/6000, fort knox, etc ... postings.
http://www.garlic.com/~lynn/subtopic.html#801

disclaimer ... anne is named on an early token-passing (lan) patent

previous parts of this thread:
http://www.garlic.com/~lynn/2005u.html#22 Channel Distances
http://www.garlic.com/~lynn/2005u.html#23 Channel Distances
http://www.garlic.com/~lynn/2005u.html#46 Channel Distances
http://www.garlic.com/~lynn/2005u.html#49 Channel Distances

Channel Distances

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Channel Distances
Newsgroups: alt.folklore.computers
Date: Thu, 22 Dec 2005 16:12:49 -0700
David Dyer-Bennet <dd-b@dd-b.net> writes:
I'm not clear who "he" refers to; it doesn't seem to be to anybody referenced in the web link given either.

I remember the security router being Jim Hughes and Ken Hardwick's baby, and I'm pretty sure that was before Ken took to working remotely.


the processor in the security router had enuf extra horsepower to do quite a few things ... it could handle a lot of packet filtering ... something that some of the other products in the market-place didn't have a lot of spare cycles to do (at least at the time).

in the mid-90s, i was strongly advocating that isps do ingress filtering ... getting all sorts of push back ... there were all sorts of reasons why it wasn't possible (a large part having to do with the capability of the installed products).
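a minimal sketch of the bcp38-style ingress check being advocated; the prefixes are documentation examples, not real assignments:

```python
import ipaddress

# drop traffic whose source address is outside the prefixes assigned
# to the customer link it arrived on (anti-spoofing ingress filter).
ALLOWED_SOURCES = [ipaddress.ip_network("192.0.2.0/24")]

def permit(src_ip):
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in ALLOWED_SOURCES)

print(permit("192.0.2.7"))      # True: inside the assigned prefix
print(permit("198.51.100.9"))   # False: spoofed/foreign source, drop it
```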

btw, can you say HaystackLabs or Wheelgroup??

totally unrelated, Jim and I are co-chair of virtualization and security track at a chip conference this summer (CFP hasn't gone out yet). for total off the wall topic drift:
http://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

and slightly related x-post in n.g. yesterday
http://www.garlic.com/~lynn/2005u.html#47

from my rfc index
http://www.garlic.com/~lynn/rfcietff.htm

summary for rfc2267
http://www.garlic.com/~lynn/rfcidx7.htm#2267
2267
Network Ingress Filtering: Defeating Denial of Service Attacks which employ IP Source Address Spoofing, Ferguson P., Senie D., 1998/01/23 (10pp) (.txt=21032) (Obsoleted by 2827) (Refs 1812, 1918, 2002) (Ref'ed By 2344, 2644, 2893, 3024, 3142, 3178)


summary for rfc2827
http://www.garlic.com/~lynn/rfcidx9.htm#2827
2827
Network Ingress Filtering: Defeating Denial of Service Attacks which employ IP Source Address Spoofing, Ferguson P., Senie D., 2000/05/16 (10pp) (.txt=21258) (BCP-38) (Updated by 3704) (Obsoletes 2267) (Refs 1812, 1918, 2002, 2344) (Ref'ed By 3013, 3220, 3344, 3489, 3697, 3704, 3775, 3871, 3964, 4140, 4174, 4192, 4213, 4218)


and with regard to an earlier post, the rfc1044 summary
http://www.garlic.com/~lynn/rfcidx3.htm#1044
1044 S
Internet Protocol on Network System's HYPERchannel: Protocol specification, Hardwick K., Lekashman J., 1988/02/01 (43pp) (.txt=100836) (STD-45) (Refs 826) (Ref'ed By 2626) (IP-HC)


as always in my rfc index, clicking on other RFC numbers takes you to the summary for that RFC, clicking on the ".txt=nnnn" in a RFC summary retrieves the actual RFC.

as an aside, there is a keyword index for RFCs ... as well as a reverse keyword index (i.e. all keyword index entries that reference a particular RFC) ... if you click on the RFC number associated with a particular RFC summary, that takes you to the reverse keyword index. and, of course, in the reverse keyword index, if you click on any keyword, that takes you to that specific keyword index.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

OSI model and an interview

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OSI model and an interview
Newsgroups: comp.protocols.tcp-ip
Date: Thu, 22 Dec 2005 21:07:13 -0700
"Raines" writes:
I know what the 7 layers are and roughly know what each layer does.

Never have I had a practical need for this information....can someone tell me what I might be asked in a job interview?

More importantly..why is it important? Why do I care what the 7 layers are if the information is of no practical use to me?


one of the things was that ISO had this requirement that ISO and ISO-chartered organizations couldn't do standards for things that violated the OSI model. we tried to do HSP (high-speed protocol) with ansi x3s3.3 (us chartered iso standards group) ... they couldn't do it because:

1) HSP went directly from the transport interface to the LAN/MAC interface, bypassing the layer 3/4 (network/transport) interface, violating the OSI model

2) HSP supported internetworking protocol (IP). IP violates the OSI model; IP doesn't exist in the OSI model ... sitting between the network layer and the transport layer.

3) HSP supported the LAN/MAC interface. LAN/MAC violates the OSI model ... with the LAN/MAC interface sitting somewhere in the middle of the OSI network layer (layer 3).
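the layering point can be illustrated with a toy encapsulation sketch: in the strict OSI model each layer hands its payload only to the layer directly below, while HSP went straight from the transport interface to the LAN/MAC interface, skipping the network-layer interface. the header names here are purely illustrative, not real protocol formats:

```python
# toy encapsulation: each "layer" prepends its own header
def wrap(header: str, payload: str) -> str:
    return f"[{header}]{payload}"

data = "DATA"

# strict OSI-style path: transport -> network -> link (MAC)
osi_frame = wrap("MAC", wrap("NET", wrap("TP", data)))

# HSP-style path: transport handed directly to the LAN/MAC
# interface, bypassing the network-layer interface entirely
hsp_frame = wrap("MAC", wrap("TP", data))

print(osi_frame)  # [MAC][NET][TP]DATA
print(hsp_frame)  # [MAC][TP]DATA
```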

misc. other refs:
http://www.garlic.com/~lynn/subnetwork.html#xtphsp

it is somewhat a telco copper-wire point-to-point protocol from the 70s ... not taking into account the interconnecting of networks, local area networks, and/or various high-speed issues that showed up more and more in the 80s.

however, some number of places had mandates to eliminate tcp/ip and go with OSI/ISO into the early 90s (like the federal gov. and gosip).

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

OSI model and an interview

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OSI model and an interview
Newsgroups: comp.protocols.tcp-ip
Date: Fri, 23 Dec 2005 07:51:16 -0700
Dom writes:
Disjointed as usual, Anne & Lynn Wheeler. The dude wants to talk about the OSI model in the context of a job interview and you propose he talk about the following. Will go over well, I'm sure.

OSI was something that reflected 60s/70s, telco, point-to-point, copper-wire, high error rate, low latency, single organization.

ISO organizationally compounded the problem into the 90s by not allowing standards work on anything that violated the osi model ... like bypassing interfaces, supporting internetworking (tcp/ip), or supporting lans/macs (the environment osi was trying to describe far predated the concept of lans/macs). small disclaimer, my wife is named on an early patent for token-passing lan.

it was further compounded by several gov. agencies (including fed. gov) mandating that tcp/ip be eliminated, being replaced by iso/osi protocols.

the rest is the kind of stuff you are going to likely get in college courses and books.

we did our own high-speed backbone in the 80s ... but weren't allowed to bid on nsfnet1 (t1) or nsfnet2 (t3) backbone (precursor to current operational internet).

we did get an audit by NSF that concluded what we had operational was at least five years ahead of all bid submissions to build something new for nsfnet (actually some of which may appear in the new internet2, nearly 20 years later).

the two of us were also the red team for the group winning the nsfnet2 bid. the blue team was composed of a large group of people from seven locations around the world. i presented first in the final review. five minutes into the blue team presentation, the executive running the review got up and started pounding on the table, exclaiming that he would lay down in front of a garbage truck before he allowed anything but the blue team solution to go forward (i.e. five minutes into the blue team presentation, it was apparent to everybody present that the red team solution was far superior).

one of the things that i frequently find interesting is questions about osi in tcp/ip n.g. in the light of the fact that OSI was mandated by several organizations to kill off tcp/ip.

from my rfc index
http://www.garlic.com/~lynn/rfcietff.htm

misc. archeological and historical references
http://www.garlic.com/~lynn/rfcietf.htm#history

NSFNET backbone Announcement and Award
http://www.garlic.com/~lynn/internet.htm#nsfnet

old NSFNET program announced:
http://www.garlic.com/~lynn/2002k.html#12

old reference to NSFNET program award
http://www.garlic.com/~lynn/2000e.html#10

some number of past postings referencing GOSIP & GOSIPv2 (aka U. S. Government Open Systems Interconnection Profile) mandated to replace tcp/ip
http://www.garlic.com/~lynn/99.html#114 What is the use of OSI Reference Model?
http://www.garlic.com/~lynn/99.html#115 What is the use of OSI Reference Model?
http://www.garlic.com/~lynn/2000b.html#0 "Mainframe" Usage
http://www.garlic.com/~lynn/2000b.html#59 7 layers to a program
http://www.garlic.com/~lynn/2000b.html#79 "Database" term ok for plain files?
http://www.garlic.com/~lynn/2000d.html#16 The author Ronda Hauben fights for our freedom.
http://www.garlic.com/~lynn/2000d.html#43 Al Gore: Inventing the Internet...
http://www.garlic.com/~lynn/2000d.html#63 Is Al Gore The Father of the Internet?
http://www.garlic.com/~lynn/2000d.html#70 When the Internet went private
http://www.garlic.com/~lynn/2001e.html#17 Pre ARPAnet email?
http://www.garlic.com/~lynn/2001e.html#32 Blame it all on Microsoft
http://www.garlic.com/~lynn/2001i.html#5 YKYGOW...
http://www.garlic.com/~lynn/2001i.html#6 YKYGOW...
http://www.garlic.com/~lynn/2002g.html#21 Why did OSI fail compared with TCP-IP?
http://www.garlic.com/~lynn/2002g.html#30 Why did OSI fail compared with TCP-IP?
http://www.garlic.com/~lynn/2002i.html#15 Al Gore and the Internet
http://www.garlic.com/~lynn/2002m.html#59 The next big things that weren't
http://www.garlic.com/~lynn/2002n.html#42 Help! Good protocol for national ID card?
http://www.garlic.com/~lynn/2003e.html#71 GOSIP
http://www.garlic.com/~lynn/2003e.html#72 GOSIP
http://www.garlic.com/~lynn/2003o.html#68 History of Computer Network Industry
http://www.garlic.com/~lynn/2004c.html#52 Detecting when FIN has arrived
http://www.garlic.com/~lynn/2004e.html#13 were dumb terminals actually so dumb???
http://www.garlic.com/~lynn/2004f.html#32 Usenet invented 30 years ago by a Swede?
http://www.garlic.com/~lynn/2004q.html#44 How many layers does TCP/IP architecture really have ?
http://www.garlic.com/~lynn/2004q.html#57 high speed network, cross-over from sci.crypt
http://www.garlic.com/~lynn/2005.html#29 Network databases
http://www.garlic.com/~lynn/2005.html#45 OSI model and SSH, TCP, etc
http://www.garlic.com/~lynn/2005d.html#11 Cerf and Kahn receive Turing award
http://www.garlic.com/~lynn/2005e.html#39 xml-security vs. native security

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

OS/2 RIP

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS/2 RIP
Newsgroups: bit.listserv.vmesa-l
Date: Fri, 23 Dec 2005 15:51:25 -0700
Jack Woehr writes:
So it can't ALL be blamed on M$, sons-of-others-than-their-fathers that they are. Which I said on VNET in 1994 and almost got fired from IBM for saying so!

there used to be a joke that os2 came from the os360/mft system people who had moved to boca and were trying to re-invent mft ... first as rps on the series/1 and then as os2 on the ibm/pc.

note that we ran into lots of trouble when we first came up with 3-tier architecture and were out pitching it to customer execs. this was back in the days of saa ... which could be construed as attempting to put the client/server genie back into the bottle. somewhat related recent postings
http://www.garlic.com/~lynn/2005u.html#50 Channel Distances
http://www.garlic.com/~lynn/2005u.html#53 OSI model and an interview

misc. posts about 3-tier architecture, saa, middleware, etc
http://www.garlic.com/~lynn/subnetwork.html#3tier

misc. posts about terminal emulation paradigm
http://www.garlic.com/~lynn/subnetwork.html#emulation

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

OSI model and an interview

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OSI model and an interview
Newsgroups: comp.protocols.tcp-ip
Date: Fri, 23 Dec 2005 23:22:24 -0700
roberson@ibd.nrc-cnrc.gc.ca (Walter Roberson) writes:
When asked about a historical model that is no longer in common use, is it inappropriate to talk about the history of the model, the problems that it was designed to solve, the reasons that the model did not succeed "in the marketplace", or the differences between the historical model and what is actually practiced now?

note that both the internet and lans/macs violate the OSI model ... and it would be hard to imagine most modern networking without either.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

OSI model and an interview

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OSI model and an interview
Newsgroups: comp.protocols.tcp-ip
Date: Sat, 24 Dec 2005 08:18:54 -0700
roberson@ibd.nrc-cnrc.gc.ca (Walter Roberson) writes:
When asked about a historical model that is no longer in common use, is it inappropriate to talk about the history of the model, the problems that it was designed to solve, the reasons that the model did not succeed "in the marketplace", or the differences between the historical model and what is actually practiced now?

much more polite history of OSI
http://www.tcpipguide.com/free/t_HistoryoftheOSIReferenceModel.htm

note that at about the time of OSI adoption ... there were both LANs coming onto the scene (my wife's name is on an international token-passing patent from a couple of years earlier) and the great switch-over to tcp/ip (from the host protocol) on 1/1/83 (both violating the OSI implementation standard).

in the late 80s, there were some published results of actual OSI implementations showing that performance was such that they were actually quite infeasible for any real-world deployments. that helped contribute to various factions transitioning OSI from an implementation standard to introductory networking educational material (as mentioned in the above article) ... even tho various govs and other organizations continued well into the 90s to mandate OSI (implementation) adoption.

the long-standing comparison of iso/ccitt vis-a-vis IETF is that ISO allows standards to be passed and adopted if enuf people get together and vote for them (whether they may be practical in any real-world sense or not). in contrast, IETF has had a long-standing requirement (for protocol progress thru the standards stages) of two different interoperable implementations ... which doesn't absolutely guarantee any real-world practicality ... but is slightly better than having no implementation feasibility criteria at all.

as additional digression, i've claimed that one reason that the internal network was larger than the whole arpanet/internet (from just about the beginning until sometime summer '85)
http://www.garlic.com/~lynn/subnetwork.html#internalnet

was that the mainstay of internal network nodes had gateway-like capability from the beginning. the arpanet/internet didn't get comparable gateway capability until the switch-over to internetworking protocol on 1/1/83. in that sense, the arpanet implementation was much more akin to the OSI implementation standard up until the 1/1/83 switch-over to internetworking protocol ... which happened in the same time-frame that the OSI implementation standard was adopted (by the time the OSI implementation standard was adopted, it was already obsolete).

the switch-over to internetworking protocol with gateway capability on 1/1/83 placed the internet and the internal network on much more of a level playing field ... which helped with a large explosion in the number of internet nodes (passing the number of internal network nodes sometime mid-85). another factor in the explosive growth was the tcp/ip capability allowing workstations and PCs to be nodes. in contrast, there were significant forces internally attempting to restrict workstations and PCs to terminal emulation.

some number of past posts about battles and other issues over terminal emulation paradigm
http://www.garlic.com/~lynn/subnetwork.html#emulation

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

IPCS Standard Print Service

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IPCS Standard Print Service
Newsgroups: bit.listserv.ibm-main
Date: 24 Dec 2005 08:01:07 -0800
Todd Burch wrote:
I've written some routines (in assembler) to access IPCS dump data using the IPCS Customization Services. I had written the same routines before in REXX, but slowness of REXX made the pain too great to bear any more. So, now I am a happy camper with my very high speed assembler routines. What was taking 3-4 minutes in REXX to process now takes sub-seconds with assembler.

can you say dumprx
http://www.garlic.com/~lynn/submain.html#dumprx

very early, when it was still rex ... and long before it was released as a product ... I wanted to do a demonstration that rex was more than just another command scripting language (ala exec/exec2).

ipcs was something like 20k(?) lines of assembler and had a whole dept. supporting it. i wanted to demonstrate that working half-time over a period of 3 months ... i could completely re-implement ipcs from scratch in rex ... with ten times the function and running ten times faster (there was some sleight of hand ... plus 120 instructions written in assembler).

for whatever reason, it was never released ... but at one point it was used by nearly all internal locations as well as customer support and PSRs.

this was the middle of the OCO-wars ... and one of the side points was that if it were ever to be released ... the full source would still have to ship (even if other products made the transition to object-code only).

i was finally allowed to make presentations at share and other user group meetings ... hoping to encourage other people to duplicate the effort.

Command reference for VM/370 CMS Editor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Command reference for VM/370 CMS Editor
Newsgroups: bit.listserv.vmesa-l
Date: Sat, 24 Dec 2005 12:58:28 -0700
David Boyes writes:
I dimly remember that EDIT was somehow related to STOPGAP and SOS (Son of STOPGAP) on CTSS (and later TOPS). You might google for STOPGAP or SOS as a starting point. I might have a late TOPS-10 SOS doc somewhere.

i've got old cp67/cms cmd reference and user guides (actually a couple, one typeset and a couple that were offset printed from 1403 print masters) that have the edit command description and all its subcommands ... this was still 2741/1052 and after i had added tty/ascii (aka all "line" edit). later cms edit got a fullscreen display mode ... but the display was r/o and it was effectively still line-mode editing (although input mode allowed you to have the full screen for new input).

in my pile of reference cards (underneath the stack of green & yellow 360 & 370 cards), i found
vm/system product: SP Editor command language reference summary, sx24-5122-0, 1st edition, july 1980 (aka xedit)

also four yellow rex reference summary cards (before release as a product), first edition, nov. 1980, rex version 2.08

dark green exec 2, A computer language for word programming (before release, ref. research report RC7268)

vm/sp exec 2 language reference summary, sx24-5124-1 (june 82)

one orange vmshare users guide, jan. 1980

(white) script/370 version 3 quick guide for users, gx20-1997,

(green) script/370 quick guide for users references summary, gx20-1969

(yellow) reference summary, vnet commands (user and operator), gz20-2008, april 1977

(blue) summary of PRY requests, M.H. TJ watson, oct. 78

various yellow DCF cards; sx26-3723-1/jul78, two SX26-3719-1/mar80, two SX26-3723-2/Mar80,


...

has anybody checked bitsaver?
http://www.bitsavers.org/pdf/ibm/370/

cms command reference and cms user guide:
http://www.bitsavers.org/pdf/ibm/370/GC20-1818-0_cmsCmdRef_Mar76.pdf
http://www.bitsavers.org/pdf/ibm/370/GC20-1819-0_cmsUG_Feb76.pdf

just for the fun of it they also have
http://www.bitsavers.org/pdf/ibm/3270/GA27-2749-5_3270descr_Nov75.pdf

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Command reference for VM/370 CMS Editor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Command reference for VM/370 CMS Editor
Newsgroups: bit.listserv.vmesa-l
Date: Sat, 24 Dec 2005 13:15:15 -0700
and in a separate stack of reference cards, i found
vm370 commands (general user), gx20-1961-4, july 1979

it has two panels listing "The EDIT subcommands and macro instructions" ... but it just lists the subcommands and arguments and doesn't actually give a description (but you should be able to find that information in the cms command reference and user guide manuals on bitsavers)

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

1970s data comms (UK)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 1970s data comms (UK)
Newsgroups: alt.folklore.computers
Date: Mon, 26 Dec 2005 12:45:06 -0700
Brian W Spoor writes:
It makes for some nostalgic reading, especially the Datel 200 service which was rated at 200bps (dial-up) and recently extended to allow 300bps for new faster terminals; although the faster speed could not be guaranteed over all circuits.

Before the telephone services were privatised as British Telecom (BT), they were run by the Post Office, a government organisation.

Nothing much changes, BT still will not guarantee data (modem) transmission over standard voice lines. It doesn't work very well, if they have put a 'line sharing' box on your line to enable 2 'lines' to share one pair of wires.


one of the real hard problems the internal network had in the 70s and thru much of the 80s ... was getting approval from eu govs. and PTTs for link encryptors on the internal network links. supposedly encryption wasn't allowed on telco circuits crossing country boundaries ... even when the circuits were between two offices of the same corporation. this was very reluctantly relaxed for telco circuits going between two offices of the same corporation.

internal network was larger than the arpanet/internet from just about the beginning to possibly sometime mid-85.
http://www.garlic.com/~lynn/subnetwork.html#internalnet

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

DMV systems?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DMV systems?
Date: Mon, 26 Dec 2005 23:00:52 -0700
Newsgroups: bit.listserv.ibm-main
giltjr@ibm-main.lst (John S. Giltner, Jr.) writes:
I'm not sure, but I beleive that HP-UX and "Solaris" (it was originally called SunOS) came out in early 80's (82'ish) and that AIX did not come out until a few years later (86'ish). DMV systems were already in place by then.

a minor sunos reference (this was a bsd unix base)
http://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party

the original "risc" aix was an 801 chip project that started out as a joint research/office products division effort to use the 801 "romp" in a system for a displaywriter followon product. misc. past posts on 801, romp, rios, fort knox, etc
http://www.garlic.com/~lynn/subtopic.html#801

when the office products displaywriter followon got killed, it was decided to retarget the product to the unix technical workstation market. the company that had been hired to do the pc/ix port for the ibm/pc was hired to do a port to romp (or almost romp, they ported to something called VRM, an abstract virtual interface which ran on romp). this was called aixv2 (this was an att unix base) and the machine was announced as pc/rt.

for rs/6000 (rios), aixv2 was enhanced and the vrm was eliminated. rs/6000 had desktop, deskside tower, and rack-mounted configurations. however, much of the marketing effort was still targeted at the technical workstation marketplace.

my wife and I mounted an effort to address the commercial and business-critical marketplace ... with ha/cmp ... which was originally targeted at providing both availability and scale-up for business applications. minor reference
http://www.garlic.com/~lynn/95.html#13
various collected postings
http://www.garlic.com/~lynn/subtopic.html#hacmp

besides traditional mainframe commercial applications, we also spent some time marketing to tandem and stratus customers.

somewhat in the late 80s there was also aix/370 and aix/ps2 ... which were built on UCLA's locus/unix platform. the same group had also done "aos" for the pc/rt ... which used a bsd unix base.

some solaris (and sun/os) history
http://www.softpanorama.org/Solaris/solaris_history.shtml

from above:
The Sun 1 was shipped with Unisoft V7 UNIX. When Bill Joy, one of the main programmers of the Berkeley Software Distribution (BSD), helped found Sun in 1982, he brought with him the elements for the first release of SunOS. Later in 1982 Sun provided a customized 4.1BSD UNIX called SunOS as an operating system for its workstations. Up through version 4.1.x (Solaris 1.x), SunOS remained a heavily BSD-influenced Unix implementation.

In the late '80s, Sun entered into a partnership with AT&T, which was then developing the other major Unix flavor, System V. The result was System V release 4 (SVR4), which incorporated BSD as well as SunOS extensions (e.g., NFS). Subsequently, with its version 5.x (Solaris 2.x) releases, SunOS shifted from its BSD origins to SVR4.

For more information about SunOS and Solaris, including FAQs, white papers, upgrade, and purchasing information, visit Sun's Solaris Web page


... snip ...

note that somewhat in reaction to sun's partnership with att, the other vendors (dec, hp, ibm, etc) banded together to form osf ... and produced osf/1, dce and a few other things.

a few unix related history refs from around the web:
http://www.users.csbsju.edu/~jgramke/Help/unix/unix/data/history.html
http://www.faqs.org/faqs/unix-faq/faq/part6/section-3.html
http://www.dsps.net/History.html
http://www.columbia.edu/cu/computinghistory/
http://www.uwsg.iu.edu/usail/external/recommended/unixhx.html
http://www.ee.ic.ac.uk/docs/software/unix/begin/appendix/history.html
http://www.unix.org/what_is_unix/history_timeline.html
http://www.robotwisdom.com/linux/timeline.html

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

