List of Archived Posts

2001 Newsgroup Postings (06/27 - 07/22)

FREE X.509 Certificates
distributed authentication
Root certificates
distributed authentication
Extended memory error recovery
New IBM history book out
New IBM history book out
New IBM history book out
Test and Set (TS) vs Compare and Swap (CS)
Test and Set (TS) vs Compare and Swap (CS)
Root certificates
FREE X.509 Certificates
FREE X.509 Certificates
Apple/PowerPC rumors
Public key newbie question
Extended memory error recovery
Root certificates
Root certificates
VPN solution for school district
Root certificates
Golden Era of Compilers
Root certificates
Golden Era of Compilers
IA64 Rocks My World
XML: No More CICS?
Root certificates
distributed authentication
Golden Era of Compilers
Cray at Apple (was Re: Compaq confirms Intel Alpha spin-off)
any 70's era supercomputers that ran as slow as today's supercomputers?
Did AT&T offer Unix to Digital Equipment in the 70s?
Root certificates
Did AT&T offer Unix to Digital Equipment in the 70s?
Did AT&T offer Unix to Digital Equipment in the 70s?
Did AT&T offer Unix to Digital Equipment in the 70s?
Did AT&T offer Unix to Digital Equipment in the 70s?
What was object oriented in iAPX432?
Thread drift: Coyote Union (or Coyote Ugly?)
distributed authentication
X.25
Self-Signed Certificate
IA64 Rocks My World
X.25
The Alpha/IA64 Hybrid
The Alpha/IA64 Hybrid
Did AT&T offer Unix to Digital Equipment in the 70s?
The Alpha/IA64 Hybrid
The Alpha/IA64 Hybrid
The Alpha/IA64 Hybrid
Did AT&T offer Unix to Digital Equipment in the 70s?
Did AT&T offer Unix to Digital Equipment in the 70s?
Did AT&T offer Unix to Digital Equipment in the 70s?
Compaq kills Alpha
S/370 PC board
DSRunoff; was Re: TECO Critique
Using a self-signed certificate on a private network
YKYBHTLW....
Q: Internet banking
TECO Critique
PKI/Digital signature doesn't work
PKI/Digital signature doesn't work
PKI/Digital signature doesn't work
PKI/Digital signature doesn't work
PKI/Digital signature doesn't work
PKI/Digital signature doesn't work
PKI/Digital signature doesn't work
[OT] Root Beer (was YKYBHTLW....)
Installing Fortran
PKI/Digital signature doesn't work

FREE X.509 Certificates

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FREE X.509 Certificates
Newsgroups: comp.security.firewalls,comp.security.ssh,comp.lang.java.security,alt.security.announce
Date: Wed, 27 Jun 2001 14:52:59 GMT
russfink@hotmail.com (Russ Fink) writes:
I think I understand your point; however, the intended security of the public/private keypair should theoretically enable new kinds of services or solutions architectures that can make use of this added trust. For example, the DoD is currently rolling out a PKI at an assurance level that allows certain forms of transactions (information requests, authorizations) to occur on minimally protected networks. Soon, their infrastructure will allow high value information to traverse unprotected networks.

... oh yes, I was on PKI panel at
https://csrc.nist.gov/publications/detail/conference-paper/1998/10/08/proceedings-of-the-21st-nissc-1998

pointer to copy of my part of the talk:
http://lists.commerce.net/archives/ansi-epay/199810/msg00006.html
https://web.archive.org/web/20020223114822/http://lists.commerce.net/archives/ansi-epay/199810/msg00006.html
((moved to https://www.garlic.com/~lynn/nissc21.zip))

one of the first panelists started their talk by saying something about how everybody has heard how hard and difficult PKIs are, well, they really aren't that bad.

the last panelist started out by saying they had been responsible for the largest and longest running PKI, and everybody has heard about how hard and difficult PKIs are, well, it is really much, much worse.

X9.59 supports arbitrarily large financial transactions over all kinds of networks, protected, unprotected, etc.

The charter given the X9A10 working group for the X9.59 standard was to preserve the integrity of the financial infrastructure for all electronic retail payment (account-based) transactions (w/o regard to origin of the transaction and/or over what kind and/or number of networks the transaction might flow).

One of the important issues was to achieve this level of integrity without having to hide information. In many of the current schemes, information is hidden/encrypted, but there are periodic points where that information has to be revealed/decrypted. These points have become exploit points (as can be seen in all the news about exploits associated with account number harvesting). Achieving the X9.59 level of integrity w/o having to hide/encrypt data eliminates the points of exploit for things like account number harvesting (the information could potentially be extracted at any point, and still could not be used to create fraudulent transactions).

random refs:
https://www.garlic.com/~lynn/

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

distributed authentication

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: distributed authentication
Newsgroups: sci.crypt
Date: Wed, 27 Jun 2001 19:55:00 GMT
wahern@25thandClement.com (William Ahern) writes:
i'm looking for information on distributed identity authentication (and also access authorization, but not as important right now, unless those two are necessarily dependent on each other).

i'm interested in mechanisms for flat (non-hierarchical) models that provide authentication of an identity. and, more importantly (as I can find some of these on the internet) critiques/attacks of any of those models (i.e. like how trust relationships could be poisoned, etc).

can anyone give me directions/pointers... ?

a more practical note: i'm looking for a system that could be used in place of something like microsoft's passport service... so that the only dependency is on the network (and/or some critical characteristic of its makeup)


something like 99.99999% of the current internet "client" authentication operations probably occur with RADIUS using userid/password or in some cases challenge/response.

it is a relatively straightforward process to upgrade RADIUS to also support public key authentication while preserving the existing administrative and business process infrastructures and operations.
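Purely as an illustration of that idea -- this is not the RADIUS wire protocol and not any existing implementation, and it assumes the third-party Python cryptography package -- the existing per-userid administrative record keeps a registered public key where the password used to be, and authentication becomes verifying a signed challenge rather than comparing a password:

import os
from cryptography.hazmat.primitives.asymmetric import ed25519

# registration: the user generates a key pair; the public key goes into the
# same administrative record that previously held a password
user_private = ed25519.Ed25519PrivateKey.generate()
user_record = {"userid": "jdoe", "public_key": user_private.public_key()}

# authentication: the server issues a fresh random challenge, the user signs
# it, and the server verifies against the registered public key
challenge = os.urandom(32)
signature = user_private.sign(challenge)
user_record["public_key"].verify(signature, challenge)  # raises InvalidSignature on failure
print("authenticated", user_record["userid"])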

some discussion of RADIUS upgrade for individual public key authentication:
https://www.garlic.com/~lynn/subpubkey.html#radius

references to RADIUS related standards can be found by following

https://www.garlic.com/~lynn/rfcietff.htm

and selecting term (term->RFC#)

and then in the Acronym fastpath selecting "RADIUS"

that will give you all of the current RADIUS related RFCs ... you can also follow the "authentication" pointer to all authentication related RFCs.

somewhat related discussions regarding the realm of identity, authentication and privacy (i.e. blasting identity information all over the world can come into conflict with privacy regulations and privacy guidelines)
https://www.garlic.com/~lynn/subpubkey.html#privacy

misc. discussions on 3-factor authentication
https://www.garlic.com/~lynn/aadsmore.htm#schneier Schneier: Why Digital Signatures are not Signatures
https://www.garlic.com/~lynn/aadsm5.htm#shock revised Shocking Truth about Digital Signatures
https://www.garlic.com/~lynn/aadsm5.htm#shock2 revised Shocking Truth about Digital Signatures
https://www.garlic.com/~lynn/99.html#160 checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/2000f.html#65 Cryptogram Newsletter is off the wall?
https://www.garlic.com/~lynn/2001c.html#39 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001d.html#19 [Newbie] Authentication vs. Authorisation?

slightly related discussion regarding server authentication:
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

if you are into security ... glossary & taxonomy (including authentication, access control, identity, etc terms)
https://www.garlic.com/~lynn/secure.htm

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Root certificates

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Root certificates
Newsgroups: alt.computer.security
Date: Wed, 27 Jun 2001 21:46:55 GMT
"Paul D. J. Vandenberg" writes:
The real issue is whether you can trust the certificate, and that's very different than asking whether the certificate is secure. By their nature, certificates are designed to be shared with the whole world, so secure is not an issue. Trustworthy is.

Do you believe the company is who it says it is? If so, then there's no problem. But, if you are not convinced the company is legit because they chose not to pay the (rather stiff) fee for a server certificate from one of the commercial services, or because you believe they couldn't pass the certificate issuers' screening process, then don't trust them.

Regardless of whether you trust the certificate, any SSL session with the server will be as secure as a session with a server using a commercially issued certificate -- assuming, that is, that they're using a reliable server with crypto support that's correctly implemented.

Paul V.


actually there is an interesting characteristic about SSL domain name server certificates.

Basically ... SSL checks to see if the domain name you used for the URL is the same as the domain name listed in the certificate.
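As a concrete illustration of that check, here is a minimal sketch using Python's standard ssl module (the host name is just a placeholder): the client validates the certificate chain against locally trusted authority keys and then simply compares the URL's domain name against the name in the certificate -- nothing else in the certificate is consulted.

import socket
import ssl

hostname = "www.example.com"             # the domain name taken from the URL
context = ssl.create_default_context()   # loads the locally trusted authority keys

with socket.create_connection((hostname, 443)) as sock:
    # wrap_socket verifies the certificate chain against the trusted authority
    # keys and checks that the certificate's name matches `hostname` -- in
    # practice the only use made of the certificate contents
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.getpeercert().get("subjectAltName"))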

The supposed theory behind all this is that there is some question regarding the integrity of the domain name infrastructure as part of resolving domain name to IP-address.

However, note that the authoritative agency for who owns a domain name is the domain name infrastructure. When an SSL domain name server certificate is applied for, the certification authority needs to contact the authoritative agency with respect to the information that it is certifying ... which happens to be the same domain name infrastructure that supposedly has integrity issues and for which SSL domain name server certificates are supposed to be the solution.

There has been a proposal to improve the domain name infrastructure integrity (so that certification authorities as part of contacting the authority agency for the information being certified can better trust it), which involves the entity registering their domain name to also register their public key (in the online, real-time domain name infrastructure database).

The obvious issue here is that by improving the integrity of the domain name infrastructure, not only is it improved for use by the certification authorities (wishing to issue certified ssl domain name server certificates), but the integrity is improved for everyone (somewhat mitigating the justification of having SSL domain name server certificates in the first place).

Furthermore, the certification authorities' solution for improving the integrity of the domain name infrastructure (i.e. registration of the entity's public key in an online, real-time domain name infrastructure database) would allow the domain name infrastructure to not only serve up real-time, trusted IP-addresses but also to serve up real-time, trusted public keys (w/o requiring a certificate). The standard domain name infrastructure already serves up various information in addition to real-time IP-addresses, so being able to also serve up real-time, trusted public keys is a straightforward capability within the current infrastructure.

Now, what does this do for the SSL protocol? There is already a bunch of protocol chatter involved in exchanging relatively large certificate objects. This is in addition to the fact that the current infrastructure isn't a "REAL" PKI ... but purely a certificate manufacturing and distribution mechanism. All the questions about unimplemented PKI features like following the trust chain, management of revoked certificates, etc. go away. The SSL protocol could simply get a real-time, trusted public key at the same time it acquires the real-time, trusted IP-address.
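As a purely hypothetical sketch of what that could look like from the client side (the TXT-record convention here is invented for illustration and is not a real standard; it assumes the third-party dnspython package): the client would pull both the trusted IP-address and the trusted public key from the same real-time lookup, with no certificate involved.

import dns.resolver

name = "www.example.com"                                  # placeholder
ip_addr = dns.resolver.resolve(name, "A")[0].to_text()    # real-time, trusted IP-address
try:
    # invented convention: the registered public key published as a TXT record
    key_rec = dns.resolver.resolve(name, "TXT")[0].to_text()
except dns.resolver.NoAnswer:
    key_rec = None                                        # no such registration today
print(ip_addr, key_rec)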

misc. related discussion
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

distributed authentication

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: distributed authentication
Newsgroups: sci.crypt
Date: Wed, 27 Jun 2001 22:10:06 GMT
Anne & Lynn Wheeler writes:
if you are into security ... glossary & taxonomy (including authentication, access control, identity, etc terms)
https://www.garlic.com/~lynn/secure.htm


oops, finger/brain check ...

https://www.garlic.com/~lynn/secure.htm

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Extended memory error recovery

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Extended memory error recovery
Newsgroups: comp.arch
Date: Thu, 28 Jun 2001 00:17:06 GMT
at150bogomips@aol.com (At150bogomips) writes:
After reading Sun's (oldish) white paper on RAS ("Ultra(TM)Enterprise(TM) 10000 Server: SunTrust(TM) Reliability, Availability, and Serviceability"(1997)), I wondered if any (more recent) systems have better recovery from memory errors. E.g., Sun's white paper did not mention the possibility of the OS treating an instruction memory error as a page fault (presumably the code is retained on disk). The paper also did not mention the possibility of an uncorrectable (by ECC hardwre) memory error in cache being recoverable if the cache block is shared. It was also not clear if a cache error would go down to main memory if the block was not dirty (this assumes that the state bits are available).

there has been a discussion recently about an issue in the early '70s with compare&swap and multiprocessor operation with regard to the 370 instruction retry (i.e. there was a very extensive RAS infrastructure that extended to fairly complex instruction error retry as part of extensive error recovery scenarios). While these systems may have also had better recovery from memory errors, they are hardly more recent (being 30 years old).

part of the thread
https://www.garlic.com/~lynn/2001f.html#41 Test and Set (TS) vs Compare and Swap (CS)
https://www.garlic.com/~lynn/2001f.html#61 Test and Set (TS) vs Compare and Swap (CS)
https://www.garlic.com/~lynn/2001f.html#69 Test and Set (TS) vs Compare and Swap (CS)
https://www.garlic.com/~lynn/2001f.html#70 Test and Set (TS) vs Compare and Swap (CS)
https://www.garlic.com/~lynn/2001f.html#73 Test and Set (TS) vs Compare and Swap (CS)
https://www.garlic.com/~lynn/2001f.html#74 Test and Set (TS) vs Compare and Swap (CS)
https://www.garlic.com/~lynn/2001f.html#75 Test and Set (TS) vs Compare and Swap (CS)
https://www.garlic.com/~lynn/2001f.html#76 Test and Set (TS) vs Compare and Swap (CS)

misc. other ref:
https://www.garlic.com/~lynn/subtopic.html#smp

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

New IBM history book out

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New IBM history book out
Newsgroups: alt.folklore.computers
Date: Thu, 28 Jun 2001 14:36:53 GMT
lwinson@bbs.cpcn.com (lwin) writes:
A former executive of IBM, its first woman vice president, has written a book about her experiences. It touches on the 1930s, 1940s, and 1950s.

The book is titled "Among Equals", ISBN # 0887392199, by Ruth Leach Amonette. Avail in both hardbound and paperback.

While not a formal substantive history, the book is a pleasant look at life in that era, and what it was like working for Thomas J. Watson, Sr. as he expanded IBM. Some of it parallels "Father, Son & Co."

For those interested in IBM and social conditions of that era, I would recommend the book.


not quite so old ... but found an online copy of the "IBM JARGON" file (from the early 80s)

http://www.212.net/business/jargon.htm
https://web.archive.org/web/20020601123619/http://www.212.net/business/jargon.htm

besides getting blamed for originating the 360 PCM controller business

https://www.garlic.com/~lynn/submain.html#360pcm

I also got a lot of blame for Tandem Memos, see entry in

http://www.212.net/business/jargont.htm
https://web.archive.org/web/20020601123619/http://www.212.net/business/jargont.htm

when somebody mailed off copies to all members of the executive committee in tandem 3-ring binders.

Not in that particular edition of the jargon file was the definition of "auditors" (i.e. the guys that go around the battlefield after a war stabbing the wounded). When SJR/028 first deployed 6670s (basically ibm copier-3 able to print on both sides of page, but with computer driven interface), the "separator" page function (with originating person) had an added function to randomly pick some entry from the online jargon file and print it on the separator page (i.e. the user information only occupied about 10% of the page so the rest was blank).

That led to an unfortunate incident when there was a security audit, and some auditors checking 6670 printer rooms around the building found output with the definition of auditors printed on it and assumed that it was directed specifically at them.

some random 6670 refs (specifically 99.html#52)
https://www.garlic.com/~lynn/2000b.html#29 20th March 2000
https://www.garlic.com/~lynn/2000d.html#81 Coloured IBM DASD
https://www.garlic.com/~lynn/2000e.html#1 What good and old text formatter are there ?
https://www.garlic.com/~lynn/2001b.html#50 IBM 705 computer manual
https://www.garlic.com/~lynn/99.html#42 Enter fonts (was Re: Unix case-sensitivity: how did it originate?
https://www.garlic.com/~lynn/99.html#43 Enter fonts (was Re: Unix case-sensitivity: how did it originate?
https://www.garlic.com/~lynn/99.html#52 Enter fonts (was Re: Unix case-sensitivity: how did it originate?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

New IBM history book out

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New IBM history book out
Newsgroups: alt.folklore.computers
Date: Thu, 28 Jun 2001 15:20:20 GMT
Anne & Lynn Wheeler writes:
I also got a lot of blame for Tandem Memos, see entry in

http://www.212.net/business/jargont.htm
https://web.archive.org/web/20020601123619/http://www.212.net/business/jargont.htm


the above version is a little dated ... since a slightly later version had the last sentence added.
Tandem Memos n. Something constructive but hard to control; a fresh of breath air (sic). "That's another Tandem Memos." A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

New IBM history book out

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New IBM history book out
Newsgroups: alt.folklore.computers
Date: Thu, 28 Jun 2001 16:01:35 GMT
Anne & Lynn Wheeler writes:
I also got blamed for Tandem Memos, see entry in


http://www.212.net/business/jargont.htm
https://web.archive.org/web/20020601123619/http://www.212.net/business/jargont.htm


slightly related entry
[MIP envy] n. The term, coined by Jim Gray in 1980, that began the Tandem Memos (q.v.). MIP envy is the coveting of other's facilities - not just the CPU power available to them, but also the languages, editors, debuggers, mail systems and networks. MIP envy is a term every programmer will understand, being another expression of the proverb The grass is always greener on the other side of the fence.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Test and Set (TS) vs Compare and Swap (CS)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Test and Set (TS) vs Compare and Swap (CS)
Newsgroups: comp.lang.asm370
Date: Thu, 28 Jun 2001 17:26:08 GMT
Wild Bill writes:
This is all true and probably works according to spec TODAY. But in the not so distant past (I last knew about it in 1996/97), there were documented instances where it was not honored. The way it workED was if you put two lock words adjacent, or even within a cache line, to each other, and you did simultaneous (within nanoseconds of each other) CS's by two processors each going against the other word, at some point it would fail with no indication or retry attempted. We had it nailed to a particular vendor and system model, even though we were not allowed to ask that question or point fingers.

charlie discovered a hardware problem on the 360/67 that never did get fixed (Charlie invented CS ... and in fact, CS are his initials; coming up with a mnemonic for his initials was a couple-month effort. Also, the POK owners of the POP required that a use for CS in the non-SMP world be invented before it would be added, which gave rise to the invention of the thread-safe operations for non-disabled, multithreaded code that originally showed up in the CS programming notes).

turns out on the 67 there was only a single STO-stack ... so any time you changed the STO-pointer control register (it was CR0 on the 67; it moved to CR1 for 370 relocate, in part because pre-relocate 370 had already taken CR0 for flag bits) everything in the hardware relocate "look-aside" was invalidated. On 370s, some models supported multiple STOs (the 168 had a 3-bit "id" on every entry in the look-aside buffer, allowing each entry to be associated with one of seven STOs or marked invalid).

The hardware bug charlie uncovered on the 67 was that interrupting from relocate mode into non-relocate mode (i.e. new psw interrupt w/o relocate flag) cleared all the entries in the look-aside buffer to zero, but didn't turn on the invalid flag. Normal code would reload CR0 prior to re-entering relocate mode (which would flush/invalidate all current entries) ... but one effort by Charlie to reduce timings in the kernel was to check whether the address space had changed and, if not, skip reloading CR0 ... in theory, saving all the existing entries in the hardware look-aside. That was when things went south ... since all entries in the hardware look-aside were now "valid" but pointed to real page zero.

Dumps wouldn't show it ... there was no way of dumping the values in the hardware look-aside buffer (and all the page tables looked completely valid).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Test and Set (TS) vs Compare and Swap (CS)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Test and Set (TS) vs Compare and Swap (CS)
Newsgroups: comp.lang.asm370
Date: Thu, 28 Jun 2001 17:34:34 GMT
Anne & Lynn Wheeler writes:
charlie discovered a hardware problem on 360/67 that never did get fixed (Charlie invented CS ... and in fact, CS are his initials, coming up with mnemonic for his initials was a couple month effort, also the POK owners of POP required that a CS use in non-SMP world be invented before it would be added, which gave rise to the invention for thread-safe operation for non-disabled, multithreaded code that original showed up in the CS programming notes).

actually charlie's initials are CAS ... which is where Compare And Swap comes from ... the mnemonic got shortened to CS (with CDS for the double-word version), as per an earlier posting in this thread

https://www.garlic.com/~lynn/2001f.html#41

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Root certificates

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Root certificates
Newsgroups: alt.computer.security
Date: Thu, 28 Jun 2001 18:01:03 GMT
Jim Watt writes:
The issue is not the security that the certificate confers on the transaction, but whether the company that owns it is genuine.

One hopes that Verisign take precautions to ensure that someone they issue a certificate to is genuinely who they claim to be (like Microsoft)

The fact it's signed by a recognised certificate issuer should indicate that they have established the identity, otherwise it could be anyone you are giving your money to.


as pointed out in my other posting ... the only thing that really happens with web server certificates ... is a check that the domain name specified in the URL for the HTTPS/SSL session is the same as the domain name in the certificate.

certification authorities effectively have to rely on the validity of the information by checking with the authoritative agencies that are responsible for domain name ownership ... aka the domain name infrastructure.

the interesting thing is that the supposed purpose of having such a certificate in the first place is to address questions regarding the integrity of the domain name infrastructure ... which the certification authorities have to rely on for the certification of domain name ownership information (however, it is a very convoluted, complex, obscure trust trail, which allows consumers to feel more comfortable because it is convoluted, complex and obscure compared to the consumer simply dealing directly with the domain name infrastructure).

random refs:
https://www.garlic.com/~lynn/2001g.html#2
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

FREE X.509 Certificates

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FREE X.509 Certificates
Newsgroups: comp.security.firewalls,comp.security.ssh,comp.lang.java.security,alt.security.announce
Date: Thu, 28 Jun 2001 23:19:32 GMT
"VK" writes:
My friends, You all are missing the real problem of the current stage of e-commerce:

any login/password or certificate based solution doesn't prove that the client really is Mr. John Doe. It proves only that the client knows Mr. John Doe's access info (or that it has access to his computer). That's all! Whether it's Mr. John Doe or some nasty hacker Mr.X - it's your own interpretation of facts. It's OK if you are SysAdmin granting access to data (no money involved). Here you are king and whatever you decide - it will be so.

But as long as any money is involved, you automatically fall under the civil laws of your country. And interpretation of facts goes to lawyers. And at the current stage the law stays on the side of the customer, not the business. So next time someone calls you and says "Hey, it was not me, it was my son playing with my credit card" - you better promptly make him a refund. You might use a million bit protection, certificates signed by the Lord himself - it does not help you in the court. (speaking from my own experience).

So in e-commerce some changes should be done first. Either try to implement the old "off-line" authentication online (reading retina patterns, finger-prints, voice recognition). So far nothing of the above works well and it costs many times more than the computer itself.

Or impose the responsibility to the customer. "Somebody used your card? Too bad, but nothing we can do".

Or something third...

Anyway - this problem comes down to a legal issue.


note that a lot of the rules are currently in place because it is relatively simple and common to have fraudulent unauthenticated transactions (witness all the news stories about web sites having account number harvesting exploits).

the goal of the x9a10 working group for x9.59 (electronic payment standard for all account-based transactions) was to preserve the integrity of the financial infrastructure (aka authenticated transactions and eliminating the account number harvesting exploits).

a lot of cost, pain, and fraud can be eliminated from the existing environment with simple authenticated transactions (even simple two factor w/o requiring biometrics).

as has been represented in several places, the issue of non-repudiation is a whole 'nother matter.

also see discussion regarding 3-factor authentication in "distributed authentication" thread in sci.crypt ng
https://www.garlic.com/~lynn/2001g.html#1

misc. refs:
https://www.garlic.com/~lynn/subpubkey.html#privacy
https://www.garlic.com/~lynn/subpubkey.html#sslcerts
https://www.garlic.com/~lynn/subpubkey.html#radius
https://www.garlic.com/~lynn/2001f.html#77
https://www.garlic.com/~lynn/2001f.html#79
https://www.garlic.com/~lynn/2001g.html#0
https://www.garlic.com/~lynn/

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

FREE X.509 Certificates

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FREE X.509 Certificates
Newsgroups: comp.security.firewalls,comp.security.ssh,comp.lang.java.security,alt.security.announce
Date: Fri, 29 Jun 2001 02:51:56 GMT
Anne & Lynn Wheeler writes:
misc. refs:
https://www.garlic.com/~lynn/subpubkey.html#privacy
https://www.garlic.com/~lynn/subpubkey.html#sslcerts
https://www.garlic.com/~lynn/subpubkey.html#radius


finger slip ... that is
https://www.garlic.com/~lynn/subpubkey.html#privacy
https://www.garlic.com/~lynn/subpubkey.html#sslcerts
https://www.garlic.com/~lynn/subpubkey.html#radius

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Apple/PowerPC rumors

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Apple/PowerPC rumors
Newsgroups: comp.arch
Date: Fri, 29 Jun 2001 13:26:34 GMT
gherbert@gw.retro.com (George William Herbert) writes:
Given the recent discussion of whether Apple was using their Cray for CPU designs or not, this rumor is amusingly synchronized... little birdie flying around chirps that Apple is going to take over the PowerPC CPU development, with IBM apparently concentrating on the newer fuller-featured POWER side of the family. Also apparently that PowerPCs which are much, much closer to the current high end x86 processors will be out "shortly"... 1.6 GHz anyone?

a friend of mine at one point said he was programming a cray for the human factors group at apple ... (over) driving a large frame-buffer with a large display ... and investigating numerous human factors issues, trade-offs, perception thresholds, etc.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Public key newbie question

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Public key newbie question
Newsgroups: alt.computer.security
Date: Fri, 29 Jun 2001 14:07:22 GMT
acaversh@yahoo.com (Arthur Caversham) writes:
I am new to the topic of public key encryption and cannot understand how, if the public key is known, it cannot simply be reversed to obtain the private key.

E.g. if a public key is to encrypt a message by increasing the ascii of every letter by 2 then the corresponding private key would simply be to decrease the ascii of everything by 2.

I am sure real life encryption is more complex but it must be possible to generate the private key from the public.

I would appreciate any urls explaining this in laymans terms


there are two parts to public key cryptography ... the asymmetric cryptography/mathematical part ... basically two different, but related keys, where one is not able to (easily) infer the value of one key by knowing the other (and encryption with one key can only be decrypted with the other key); and the "business" process part

Given asymmetric cryptography with two related keys, the business process specifies that one key can be publicly published and distributed (the "public" key) and nobody (but possibly the "key-owner") is allowed knowledge of the other ("private") key.

Given the public/private key business process convention for the use of asymmetric cryptography, anybody encrypting something with a "public" key is assured that only the owner of the corresponding "private" key can decrypt the information. This addresses the problem of secure distribution of symmetric, secret keys in more conventional cryptography (how can I securely send somebody a secret key so they can send back to me some securely encrypted information).

Note that this convention of public/private key is not a cryptography issue (like asymmetric cryptography), it is a defined business process regarding the handling of the two different keys of the key pair.

The other business characteristic is that anything decryptable with a particular "public" key can be known to have originated with the owner of the corresponding "private" key. While this doesn't provide secure communication (since anybody with access to the published "public" key can decrypt the information), it can be used for a "business" process authenticating the origin of the encryption (i.e. only the owner of the "private" key could have originated the encryption).

The "authentication" business process has been further formalized into something called a "digital signature". Given that sufficient care has been taken to allow only the "owner" to have access and use of a "private" key, then there is a high degree of confidence that anything that is decryptable with the corresponding "public" key can only have originated with the owner of the specific "private" key.

A specific implementation of such a digital signature business process is to calculate a secure hash of the message (say with SHA-1) and then encrypt the resulting secure hash with the originator's "private" key. Both the original message and the encrypted secure hash of the message are transmitted. The recipient decrypts the secure hash with the corresponding public key, recalculates the secure hash of the message and compares the two values. If the two secure hashes are equal then the recipient knows 1) that the message hasn't been tampered with and 2) the message originated with the owner of the corresponding private key.
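A minimal sketch of that hash-and-sign flow, assuming the third-party Python cryptography package and RSA keys (the text's example names SHA-1; SHA-256 is used below simply as a common choice):

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

sender_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sender_public = sender_private.public_key()

message = b"the original message"
# sender: the secure hash is computed and "encrypted" (signed) with the private key
signature = sender_private.sign(message, padding.PKCS1v15(), hashes.SHA256())

# recipient: recompute the hash and check it against the signature with the
# public key; verify() raises InvalidSignature if the message was tampered with
sender_public.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
print("message intact and originated with the private-key owner")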

So the public key business process use of asymmetric cryptography can be used to address two issues:

1) distribution of keys for secure communication
2) authentication of the origin of a message (along with validating that the message hasn't been tampered with).

Business processes like PGP and SSL can combine all of the above:

1) digitally sign a message with the sender's private key
2) randomly generate a secret key
3) encrypt the message & signature with the secret key
4) encrypt the secret key with the recipient's public key
5) transmit the secret-key encrypted message and the public-key encrypted secret key

Typically, private key encryption of data is a very computationally expensive and time-consuming process ... so it is much more efficient to encrypt large amounts of information with a secret key. Randomly generating a secret key, and then encrypting the secret key with the recipient's public key, also achieves the objective of secure distribution of keys for encrypting secure communication.
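A hedged sketch of steps 2-5 above (step 1, the digital signature, is shown in the previous example), again assuming the third-party cryptography package; the AES-GCM/RSA-OAEP choices below are just one plausible instantiation, not what PGP or SSL literally use:

import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)

message = b"a large message body ..." * 1000
secret_key = AESGCM.generate_key(bit_length=128)               # 2) randomly generate a secret key
nonce = os.urandom(12)
ciphertext = AESGCM(secret_key).encrypt(nonce, message, None)  # 3) bulk-encrypt with the secret key
wrapped_key = recipient_private.public_key().encrypt(          # 4) encrypt the secret key with the
    secret_key,                                                #    recipient's public key
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))
# 5) transmit nonce + ciphertext + wrapped_key; only the recipient's
#    private key can recover secret_key and hence the message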

NIST standards for secure hash (FIPS 180) and digital signatures (FIPS 186)
http://csrc.nist.gov/publications/fips/

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Extended memory error recovery

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Extended memory error recovery
Newsgroups: comp.arch
Date: Fri, 29 Jun 2001 18:47:07 GMT
at150bogomips@aol.com (At150bogomips) writes:
Yuck! 5% seems rather high. How do high-reliability systems deal with such? Is a 1 in 20 trillion unrecognized error rate tolerable while a 1 in 1 trillion recognized but unrecoverable failure rate is not tolerable (so RAID and backups are used for recoverability)?

i thought that the disk bit error rate (with various kinds of EC, FEC, etc) was possibly somewhere in the 10**-15 range and drives are somewhere around 800k hr MTBF.

I would expect some of the IDE (and possibly some scsi) wires and connectors might introduce a higher bit error rate.

Various kinds of "parity" RAID (4+1, 8+1, 8+2, 32+8) provide additional redundancy both at a record/bit level and at a whole drive level.
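As a toy illustration of the single-parity ("N+1") idea, not tied to any particular RAID implementation: the parity block is the XOR of the data blocks, so any one lost block can be rebuilt from the survivors.

# toy single-parity sketch: parity is the XOR of the data blocks
data = [b"\x01\x02\x03", b"\x0f\x00\xff", b"\xaa\x55\x00"]          # three data "drives"
parity = bytes(a ^ b ^ c for a, b, c in zip(*data))                 # the "+1" parity drive

lost = data[1]                                                      # pretend drive 1 fails
rebuilt = bytes(a ^ p ^ c for a, p, c in zip(data[0], parity, data[2]))
assert rebuilt == lost                                              # rebuilt from survivors + parity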

As to memory, there are some PC memories that provide 8+2, 8/10; correct one bit errors and recognize two bit errors. At least some mainframe memories sometime in the '70s or '80s went to things like 64/80 ... effectively same ratio of error correcting bits but capable of correcting 15 bit errors and recognizing 16 bit errors.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Root certificates

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Root certificates
Newsgroups: alt.computer.security
Date: Fri, 29 Jun 2001 19:43:49 GMT
Jim Watt writes:
Well that assumption is at variance with what they say - from the BT website (UK verisign affiliate)

"A Server Certificate is issued by a trusted third party called a Certification Authority (CA). A CA acts somewhat like a passport office. CAs must take steps to establish the identity of the people or organisations to whom they issue certificates. Once the CA establishes an organisation's identity, it issues a certificate that contains the organisation's public key and signs it with the CA's private Key."

They are talking about routine background checks and getting a lot more information than is available from the domain name infrastructure.


they can put whatever they want to into an SSL server certificate ... the practical issue is that nobody actually reads the contents of an SSL server certificate ... for all practical purposes for millions of certificate operations that happen around the world each day ... the only thing that happens is that the URL domain name specified by the client in the SSL/HTTPS operation is compared to the URL domain name specified in the certificate.

The CAs can do all the checking they want to about lots of other random pieces of information and include all the random pieces of information they want to in an SSL server certificate ... but it effectively is still random, unrelated information unless somebody actually pays attention to it.

Because, for all practical purposes, the only piece of information in an SSL server certificate that is ever checked is the domain name ... all that other information can be regarded as extraneous.

As to the domain name, the authoritative agency for domain name ownership is the domain name infrastructure. The certification authority can do all the checking they want to with regard to domain name ownership, but if somebody has spoofed the domain name infrastructure, the certification authority can do little else but certify the spoofed information.

Other than that, all you need is some registered business. The CA verifies that there is some registered business and that the registered business is listed as the domain name owner with the domain name infrastructure. It is extremely easy to register a business, and there are reported instances of domain name take-over/spoofing.

A client would actually have to examine a certification authority's certificate for the registered business to see if it is the business it thinks it is. Since that is never done (for all practical purposes), one can discount that as having any meaning. For all practical purposes, the only thing that is examined is whether the client-specified domain name is the same as the one in the certificate.

Even in the extremely unlikely event a client actually examined a certification authority's certificate for the registered business, it still might not mean anything (say this happens possibly once out of every trillion uses of a SSL certificate). A business may have a common name, a legal name, as well as a DBA (doing business as) name. A client would probably be aware of the common name. However, the legal name and the DBA name can be totally different and that is probably what is listed in a D&B or other check that might be typically done by a certification authority.
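For what it's worth, the business-identity fields are sitting right there in the certificate if anyone cared to look; a small sketch (Python standard library, host name is a placeholder) that prints the organization/common-name fields that the normal SSL/HTTPS flow never examines:

import socket
import ssl

hostname = "www.example.com"              # placeholder
ctx = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
        # flatten the certificate's subject field into a dict and show the
        # identity attributes that clients effectively never act on
        subject = dict(item[0] for item in tls.getpeercert()["subject"])
        print(subject.get("organizationName"), subject.get("commonName"))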

Some of this has been discussed in various threads referenced by

https://www.garlic.com/~lynn/subpubkey.html#sslcerts

and we had to delve into this when doing due diligence as part of doing the original work on the current infrastructure ... misc refs:

https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

A trivial example in one of the threads referenced in the "sslcerts" collection is an incident buying computer gear from a local computer store with american express card ... and having the entry on the american express bill list a totally unknown company.

The store name was a short, catchy name that was the common name, but american express listed the company's "legal name" which was something to the effect of "Woman's Minority Title ??? Company #17" (I forget what the ??? was ... but something to do with eligibility for preferences in federal contracts).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Root certificates

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Root certificates
Newsgroups: alt.computer.security
Date: Fri, 29 Jun 2001 19:54:07 GMT
Jim Watt writes:
Well that assumption is at variance with what they say - from the BT website (UK verisign affiliate)

"A Server Certificate is issued by a trusted third party called a Certification Authority (CA). A CA acts somewhat like a passport office. CAs must take steps to establish the identity of the people or organisations to whom they issue certificates. Once the CA establishes an organisation's identity, it issues a certificate that contains the organisation's public key and signs it with the CA's private Key."


... aka, there are all sorts of things that a server certificate might be and there are all sorts of things that a trusted third party might be and there are all sorts of things that a PKI might be ....

but I have been referring to the actual business process that goes on millions of times each day involving SSL/HTTPS with server domain name certificates.

If it is like a "passport" ... then there is a process when going through a line at customs where they check all the information in the "passport" ... name, picture (see if you look like the picture), etc. ... as well as do some real-time check regarding whether the passport has been reported lost or stolen.

However, if it is something that only has a very cursory check regarding the name, no check of the picture, and no check of whether the passport has been reported lost or stolen ... then you can make all the claims in the world as to the manufacturing of the passport (aka certificate) ... but I don't have to worry about any of the "business processes" normally associated with the use of a certificate.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

VPN solution for school district

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: VPN solution for school district
Newsgroups: comp.security.firewalls
Date: Fri, 29 Jun 2001 20:04:07 GMT
jayson@bcschools.net (Jayson) writes:
While I am not the head of this operation, I was asked to gather some information about how our district can allow staff members to access their network drives on our Novell 5.1 network from home. After talking to a Cisco rep yesterday, I came across the idea of a VPN. I am not quite sure exactly what this entails, but my solution involves the Cisco 3015 VPN Concentrator. The Cisco Client, available free of charge, will be installed on the home computers. The Cisco rep also made reference to authenticating the users on the corporate side with a RADIUS server to access the Novell Network. I was told that we will not be implementing Border Manager as there are "issues" with it. Therefore, I have a few questions.

1. Is a RADIUS server what we need to authenticate users that are connecting to their ISP to the Internet to the school district?


slightly related posting on RADIUS in a thread in sci.crypt ng
https://www.garlic.com/~lynn/2001g.html#1

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Root certificates

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Root certificates
Newsgroups: alt.computer.security
Date: Fri, 29 Jun 2001 21:58:46 GMT
Jim Watt writes:
Well, I was under the impression that unless it was signed by an authority that has a key included in the browser that a warning message flashed up. I've certainly seen this whilst using secure servers. Whether one accepts it from then on is a matter of ones personal paranoia/trust level.

The ease of registering a business depends on the jurisdiction you are in, indeed before too long you may have to provide a DNA sample to open a bank account or get on an aircraft. (for your security and convenience)

However, having had a peek at your website, it's comprehensive.


the certificate has to be signed with the private key of some authority that has the corresponding public key already loaded into the browser.

that isn't the issue.

the issue is that anybody that has registered a business (which can be done for as little as $10 at some state or county office) and performed a domain name hijack ... can go to almost any of the certification authorities (that have their public key already loaded into the browser) and get a valid server domain name certificate.

one list of certification authority candidates from existing browser

https://www.garlic.com/~lynn/aepay4.htm#comcert14
https://www.garlic.com/~lynn/aepay4.htm#comcert16

aka any server domain name certificate signed by a private key whose corresponding public key is loaded into the browser won't put up a warning message.
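A small illustration of that "already loaded" list (Python standard library; browsers keep their own analogous store) -- a certificate chaining to any of these produces no warning:

import ssl

ctx = ssl.create_default_context()        # loads the platform's trusted root store
for root in ctx.get_ca_certs():           # one entry per pre-loaded root certificate
    subject = dict(item[0] for item in root["subject"])
    print(subject.get("organizationName"))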

and reference to the TLS (SSL) protocol

https://www.garlic.com/~lynn/aepay4.htm#comcert15

longer list of various discussion threads:

https://www.garlic.com/~lynn/subpubkey.html#sslcerts

one press release regarding domain name hijacking

https://www.garlic.com/~lynn/aepay4.htm#dnsinteg1

as stated previously, one of the solutions to address domain name infrastructure integrity issues is to ask entities registering a domain name to also register a public key.

however, if you fix the integrity issues with the domain name infrastructure for use by the certification authorities ... they are also addressed for everyone.

furthermore, if the approach to addressing integrity issues is done by registering public keys ... then those public keys could be made available as part of the basic domain name infrastructure ... aka online, real-time public key distribution ... in the same way the domain name infrastructure provides online, real-time resolution of domain name to ip-address.

As previously claimed, the original design point for certification authorities and certificates is the offline world where it isn't possible/practical to contact the real-time, online authority.

Improving the certifiability (i.e. trust) of the domain name infrastructure for use by the certification authorities by registering public keys also creates the opening to significantly improve and simplify the SSL/TLS protocol by using the standard domain name infrastructure for real-time, online, high-integrity public key distribution (instead of resorting to a complicated, convoluted paradigm designed to meet the requirements of an offline world).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Golden Era of Compilers

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Golden Era of Compilers
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 29 Jun 2001 23:09:03 GMT
jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) writes:
Because Fortran produced efficient object code, people were willing to make use of it; in those days, efficiency was at a premium, and if it produced code that ran only 1/2 as fast, never mind 1/10 as fast, as hand-coded, most computing centres would have indignantly banned it.

there have been a couple of trade-offs ... development and student programming (at least on ibm mainframes) had Fortran G and WatFor for fast compile and turn-around.

Much of the work for IBM's (mainframe) Fortran HX was done at the Palo Alto Science Center.

There were also some people at SLAC (on Sand Hill, instead of Page Mill in Palo Alto) who not only were heavy Fortran HX users but also contributed heavily to IBM's Assembler H ... they specialized in taking critical inner loops and hand-coding them in assembler to get additional performance (i believe that one of the SLAC guys frequently claimed he could still get another factor of ten in various critical sections with hand-coded assembler).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Root certificates

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Root certificates
Newsgroups: alt.computer.security
Date: Fri, 29 Jun 2001 23:34:47 GMT
Jim Watt writes:
OK, but if that's the case the certificate authority is not doing its job properly, as they should investigate the company they are issuing the certificate to. If that's the case (in the US) it's pretty worthless. The only real point of them signing it is the trust that action confers.

Otherwise its simply a matter of the technical problem of issuing a certificate, which I expect we can both manage.

After starting the discussion (and finding what I believed to be true to be less than accurate, which is the point of the process) I also checked Netscape 4.47 which does have a list of authorities it trusts, but MSIE 5 seems not to tell me anything.

I take your point - Tis a wicked world.


a part of the issue is that the majority of the certification authorities are not themselves the authoritative agencies for the information they are certifying. as a result, they have a difficult time providing a higher level of trust than the authoritative agencies they have to rely upon as to the accuracy of the information.

in effect, with regard to server domain name certificates they can certify that the entity requesting a server domain name certificate is the same as that listed by the domain name infrastructure, and that the entity is also a valid registered business.

certification authorities can talk about how strong their crypto is and how secure their operations are and how they always follow a detailed process ... but since they aren't authoritative agencies for the information they are certifying, the level of trust is only as good as the underlying operations they are certifying.

For an extreme analogy, let's say they are an armored truck business that has the best and most secure technology and processes in the world and they are working for a financial institution that uses unguarded and unprotected janitor closets instead of vaults for keeping money. Can they prevent somebody from taking money out of the janitor closet?

as a somewhat side point, in the embodiment of most existing certification authority operations, the "certificates" are artifacts that are designed to meet the needs of an offline world where relying parties don't have real-time, online communication with the respective authoritative agencies.

These certificates are the "letters of credit" from the days of sailing ships, the "signing limit paper checks" from by-gone business operation days, and the plastic, offline credit cards (before the days of online credit card transactions).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Golden Era of Compilers

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Golden Era of Compilers
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 30 Jun 2001 15:26:34 GMT
jcmorris@jmorris-pc.MITRE.ORG (Joe Morris) writes:
Anne, I think you've just shifted the time frame forward by a decade or so. The original FORTRAN was written for machines in the early 1950s when the time-to-execute for even a tightly-written program could almost be measured by a wall calendar rather than a wall clock. FORT-G is of course an OS/360 component from the late 1960s, and WATFOR was written for the 704x...maybe available in 1965? (Anyone from Waterloo here to check this?)

... lynn ... but yes
At least as late as the early 1960s I recall student runs on a 7090 being handled as a special case, including the use of a customized system tape. (They were also given at least one minute of time each, since that was the granularity of the time limit field in FMS...)

when I first ran student fortran jobs, it was the ibsys monitor on a 709; then the university replaced it with a 360/67 and student jobs went from possibly a job a second to tens of seconds per job. By that time, the university had hired me and given me responsibility for much of the stuff. The combination of HASP and WATFOR eventually saved the day. Basically student jobs were all compile and effectively no execution (predominately because of errors of one sort or another). WATFOR was a single-step monitor that handled all the (student) job-to-job transition internally and (at the time) compile was rated at something like 20,000 statements/cards per minute on the 65/67 (typical student jobs were 10-50 cards). Given that the 2540 was only 2000 cards/min, something like HASP was necessary for it to achieve that throughput.
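A back-of-the-envelope check using only the numbers quoted in the paragraph above (the 30-card job size is just a representative value from the stated 10-50 card range):

compile_cards_per_min = 20_000     # WATFOR compile rate quoted for the 65/67
reader_cards_per_min = 2_000       # 2540 card reader rate
job_cards = 30                     # representative student job (10-50 cards)

print(60.0 * job_cards / compile_cards_per_min)   # ~0.09 s to compile one job
print(60.0 * job_cards / reader_cards_per_min)    # ~0.9  s just to read its cards
# the CPU can compile roughly 10x faster than the unit-record gear can feed it,
# which is why spooled card reading (HASP) was needed to realize that throughput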

>There was also some people at SLAC (on Sand Hill, instead of Page Mill
>in Palo Alto) who not only were heavy Fortan HX users but also heavily
>contributed to IBM's Assembler H ... they specialized in taking
>critical inner loops and hand-coding in assembler to get additional
>performance (i believe that one of the SLAC guys frequently claimed he
>could still get another ten times in various critical sections with
>hand coded assembler).

the primary person at PASC had also earlier done the APL microcode assist for 370/145.
H'mmm ... I'll bet that you're thinking of John Ehrman, right? I never heard him make that claim, but it sounds like something he would say.

... no, try Greg M? something (I would have to search for it)
John (at SLAC at the time, before he moved to IBM) was the designer of the SHARE button that read:

FREE
THE
     FORTRAN
77

when the product was first released. At the next meeting he had edited the text to read:

FIX
THE
     FORTRAN
77

with the "editing" clearly (and deliberately) visible.

Joe Morris


totally random stuff
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
https://www.garlic.com/~lynn/94.html#43 Bloat, elegance, simplicity and other irrelevant concepts
https://www.garlic.com/~lynn/94.html#53 How Do the Old Mainframes
https://www.garlic.com/~lynn/94.html#55 How Do the Old Mainframes Compare to Today's Micros?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IA64 Rocks My World

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IA64 Rocks My World
Newsgroups: comp.arch,comp.os.vms,comp.sys.dec,alt.folklore.computers
Date: Sat, 30 Jun 2001 18:51:28 GMT
mccalpin@gmp246.austin.ibm.com (McCalpin) writes:
I got one of the earliest RS/6000's in the Spring of 1990. They were not single-chip systems -- maybe eight chips?

The PowerPC 601 was the first single-chip processor in the extended family.

The POWER side of the line did not become single-chip until P2SC (POWER2 single-chip).


I've got a paperweight on my desk with six chips in it ... that says 150 million ops, 60 million flops, 7 million transistors.

there was a RSC ... referred to as RIOS .9. I don't remember how much influence it had on somerset (aka powerpc). It was used in (live) oak (i.e. a 4-way SMP box that got around the lack of cache coherence by flagging virtual segments as cacheable or non-cacheable; aka if it really needed to be shared, it was flagged as never going through the cache).

during part of the 6000 period my wife was manager, rs/6000 engineering architecture, and her manager went on to head up somerset with the motorola people (he had previously worked for motorola before joining ibm to work on rios; he later went on to be president of MIPs and then later returned to austin w/ibm). Also, when we started the HA/CMP project, it started out reporting to him.

the way i remember rios was that (at least) the assumption of no cache coherence permeated whole sections of the design (even RSC), which had to be significantly reworked for power/pc (as well as the whole virtual memory segment register stuff).

random refs:
https://www.garlic.com/~lynn/subtopic.html#hacmp
https://www.garlic.com/~lynn/93.html#22 Assembly language program for RS600 for mutual exclusion
https://www.garlic.com/~lynn/94.html#41 baddest workstation
https://www.garlic.com/~lynn/94.html#47 Rethinking Virtual Memory
https://www.garlic.com/~lynn/95.html#9 Cache and Memory Bandwidth (was Re: A Series Compilers)
https://www.garlic.com/~lynn/95.html#11 801 & power/pc
https://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party
https://www.garlic.com/~lynn/97.html#5 360/44 (was Re: IBM 1130 (was Re: IBM 7090--used for business or
https://www.garlic.com/~lynn/97.html#25 Early RJE Terminals (was Re: First Network?)
https://www.garlic.com/~lynn/98.html#25 Merced & compilers (was Re: Effect of speed ... )
https://www.garlic.com/~lynn/98.html#26 Merced & compilers (was Re: Effect of speed ... )
https://www.garlic.com/~lynn/98.html#27 Merced & compilers (was Re: Effect of speed ... )
https://www.garlic.com/~lynn/98.html#28 Drive letters
https://www.garlic.com/~lynn/99.html#23 Roads as Runways Was: Re: BA Solves Y2K (Was: Re: Chinese Solve Y2K)
https://www.garlic.com/~lynn/99.html#64 Old naked woman ASCII art
https://www.garlic.com/~lynn/99.html#66 System/1 ?
https://www.garlic.com/~lynn/99.html#67 System/1 ?
https://www.garlic.com/~lynn/99.html#129 High Performance PowerPC
https://www.garlic.com/~lynn/2000.html#49 IBM RT PC (was Re: What does AT stand for ?)
https://www.garlic.com/~lynn/2000.html#59 Multithreading underlies new development paradigm
https://www.garlic.com/~lynn/2000b.html#54 Multics dual-page-size scheme
https://www.garlic.com/~lynn/2000c.html#4 TF-1
https://www.garlic.com/~lynn/2000c.html#9 Cache coherence [was Re: TF-1]
https://www.garlic.com/~lynn/2000c.html#12 Cache coherence [was Re: TF-1]
https://www.garlic.com/~lynn/2000d.html#2 IBM's "ASCI White" and "Big Blue" architecture?
https://www.garlic.com/~lynn/2000d.html#24 Superduper computers--why RISC not 390?
https://www.garlic.com/~lynn/2000d.html#28 RS/6000 vs. System/390 architecture?
https://www.garlic.com/~lynn/2000d.html#31 RS/6000 vs. System/390 architecture?
https://www.garlic.com/~lynn/2000d.html#39 Future hacks [was Re: RS/6000 ]
https://www.garlic.com/~lynn/2000d.html#60 "all-out" vs less aggressive designs (was: Re: 36 to 32 bit transition)
https://www.garlic.com/~lynn/2000f.html#13 Airspeed Semantics, was: not quite an sr-71, was: Re: jet in IBM ad?
https://www.garlic.com/~lynn/2000f.html#31 OT?
https://www.garlic.com/~lynn/2000f.html#74 Metric System (was: case sensitivity in file names)
https://www.garlic.com/~lynn/2001.html#72 California DMV
https://www.garlic.com/~lynn/2001b.html#33 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001b.html#37 John Mashey's greatest hits

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

XML: No More CICS?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: XML: No More CICS?
Newsgroups: bit.listserv.ibm-main
Date: Sat, 30 Jun 2001 22:32:30 GMT
jfregus@IX.NETCOM.COM (John F. Regus) writes:
Has XML and HTML sounded the death knell for CICS for developing online transaction processing?

actually HTTP servers would benefit from a fast transaction processor backend ... taking current webservers (plus maybe the firewall function) and stripping them way down to the point where they did little more than handle the HTTP protocol and pass requests to a real backend (lightweight, multithreaded) transaction processor.

for the most part the "ML"s are the stuff for client presentation services (a claim could be made that the extensive vm/cms & GML installation at cern contributed significantly to the evolution of HTML).

then it becomes an issue of writing cics transactions that support the designed http & application functions.

in that respect, cics could make a superior lightweight, multithreaded transaction processing backend compared to various other existing web server technologies.
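a toy sketch of that split, purely for illustration (python; the transaction_backend function, path-to-transaction mapping, and port number are all made up, not any particular product) ... a thin front end that does nothing but parse the HTTP request and hand it to a multithreaded transaction backend:

    # thin HTTP layer in front of a stand-in transaction backend
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
    from urllib.parse import urlparse, parse_qs

    def transaction_backend(txn_name, params):
        # stand-in for the real (CICS-style) transaction processor: look up
        # the transaction by name and run it against whatever it owns
        return ("ran transaction %s with %s\n" % (txn_name, params)).encode()

    class ThinFrontEnd(BaseHTTPRequestHandler):
        def do_GET(self):
            url = urlparse(self.path)
            txn_name = url.path.strip("/") or "default"
            reply = transaction_backend(txn_name, parse_qs(url.query))
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(reply)))
            self.end_headers()
            self.wfile.write(reply)

    if __name__ == "__main__":
        # one thread per connection; all the real work is in the backend call
        ThreadingHTTPServer(("localhost", 8080), ThinFrontEnd).serve_forever()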

... one of the CICS pre-release beta-test customers in '69

.... and a little involvement in some webserver aspects
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

random refs
https://www.garlic.com/~lynn/internet.htm
https://www.garlic.com/~lynn/94.html#33 short CICS story
https://www.garlic.com/~lynn/94.html#35 mainframe CKD disks & PDS files (looong... warning)
https://www.garlic.com/~lynn/96.html#9 cics
https://www.garlic.com/~lynn/97.html#6 IBM Hursley?
https://www.garlic.com/~lynn/97.html#8 Ancient DASD
https://www.garlic.com/~lynn/97.html#30 How is CICS pronounced?
https://www.garlic.com/~lynn/98.html#28 Drive letters
https://www.garlic.com/~lynn/98.html#33 ... cics ... from posting from another list
https://www.garlic.com/~lynn/98.html#34 ... cics ... from posting from another list
https://www.garlic.com/~lynn/99.html#130 early hardware
https://www.garlic.com/~lynn/99.html#218 Mainframe acronyms: how do you pronounce them?
https://www.garlic.com/~lynn/2000b.html#41 How to learn assembler language for OS/390 ?
https://www.garlic.com/~lynn/2000c.html#35 What level of computer is needed for a computer to Love?
https://www.garlic.com/~lynn/2000c.html#45 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#52 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#54 WHAT IS A MAINFRAME???
https://www.garlic.com/~lynn/2000e.html#23 Is Tim Berners-Lee the inventor of the web?
https://www.garlic.com/~lynn/2000f.html#61 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001.html#51 Competitors to SABRE?
https://www.garlic.com/~lynn/2001.html#62 California DMV
https://www.garlic.com/~lynn/2001b.html#85 what makes a cpu fast
https://www.garlic.com/~lynn/2001d.html#56 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001f.html#49 any 70's era supercomputers that ran as slow as today's supercompu

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Root certificates

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Root certificates
Newsgroups: alt.computer.security
Date: Sat, 30 Jun 2001 22:58:00 GMT
Jim Watt writes:
I took a re-look at the Gibraltar e-commerce law, this is based on current EU directives, however we are the first to get these transcribed into law and on the books. Currently electronic signatures have the same standing as traditional ones.

It lays down an operating framework for approved certificate authorities and they are liable if they fail to 'take all measures reasonably practical to verify information' and specifies that they can be sued for loss or damage by someone who uses a certificate issued by them.

it also gives electronic versions of documents the same validity as paper ones.


my wife and I had a (very) small part in both the cal. state and the federal e-signature laws.

one of the issues is whether there is any necessity for having CA/TTPs and x.509 identity certificates at all in consumer-related activities .... because of all the unnecessary liability issues as well as the severe privacy exposures.

as in other previous discussions & postings ... it is possible to do electronic signatures and even digital signatures w/o requiring certificates. A trivial analogy is the "signature card" that is signed for a financial institution ... when they get a paper signature, they can then compare it to the signature card to see if it is correct. Effectively, the same business process can be used with regard to digital signatures, i.e. the consumer doesn't have to carry the digital "signature card" around with them and/or attach copies on every transaction, it is just sufficient that it is stored and available (online).
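a minimal sketch of that "signature card" business process (python, using the third-party "cryptography" package; the account name and the registered_keys table are made up for illustration, not any particular institution's scheme):

    # register the public key once (the online "signature card"); later
    # transactions carry only a digital signature, no certificate attached
    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.exceptions import InvalidSignature

    registered_keys = {}                  # account -> registered public key

    def register(account, public_key):
        registered_keys[account] = public_key

    def verify_transaction(account, message, signature):
        try:
            registered_keys[account].verify(signature, message)
            return True
        except (KeyError, InvalidSignature):
            return False

    # registration happens once, out of band
    consumer_key = ed25519.Ed25519PrivateKey.generate()
    register("acct-001", consumer_key.public_key())

    # each transaction is then checked against the stored key, the same way
    # a paper signature is compared against the signature card on file
    txn = b"pay 12.34 to merchant X"
    print(verify_transaction("acct-001", txn, consumer_key.sign(txn)))   # True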

there are additional significant issues with any sort of electronic signing ... similar to what some software vendors have been grappling with regarding disclosures and software licenses (the "i accept" buttons) ... aka just because somebody's computer does some fancy operation ... can it be proven that the consumer was absolutely aware of every operation that their computer performed, and furthermore was the computer doing operations only at the behest of that consumer (and was the computer unable to perform any operation that wasn't under the direct and explicit control of the consumer).

... it is 10pm, parents, do you know what your computer is doing?

random refs:
https://www.garlic.com/~lynn/subpubkey.html#privacy
https://www.garlic.com/~lynn/subpubkey.html#sslcerts
https://www.garlic.com/~lynn/subpubkey.html#radius

some of the more interesting writings on the subject are by Jane Winn at Southern Methodist University


http://www.smu.edu/~jwinn/ NOTE moved to
http://www.law.washington.edu/Faculty/Winn/

some misc. refs from the site
E-Sign of the Times, E-Commerce Law Report (August 2000)(with Robert A Wittie, Esq.)

This is a concise overview of the provisions of the Electronic Signatures in Global and National Commerce Act of 2000, and a comparison of some of its key provisions with those of the Uniform Electronic Transactions Act.

Electronic Records and Signatures under the Federal E-Sign Legislation and the UETA (with Robert A. Wittie, Esq.) 54 The Business Lawyer 293 (2000)

This is an analysis of the provisions of the Electronic Signatures in Global and National Commerce Act of 2000, and a comparison of its key provisions with those of the Uniform Electronic Transactions Act.

The Emperor's New Clothes: The Shocking Truth About Digital Signatures and Internet Commerce, __ Idaho Law Rev. __ (forthcoming 2001 UETA Symposium Issue)

Although it has been an article of faith in some quarters for many years that using digital signatures as signatures on Internet contracts would be the "next big thing" in electronic commerce, there is actually considerable evidence that they are not being used that way and may never be used that way. UETA avoids confusing marketing hype and business reality by adopting a technology neutral stance.

Is the EU the US Online Consumer's Best Friend? __ American University Law Review __ (forthcoming 2001)

The US and EU are approaching online consumer protection issues from very different perspectives. In some contexts, online consumers may be better protected under the US approach, and in others, under the EU approach. In some instances, however, regulators in both markets may be failing to develop organized responses to major challenges.

The Hedgehog and the Fox: Distinguishing Public and Private Sector Approaches to Managing Risk for Internet Transactions, 51 ABA Administrative Law Review 955 (1999)

This article argues that much recent and proposed electronic commerce legislation is based on flawed assumptions regarding risk management and the practical utility of current electronic commerce technologies. Such flawed legislation would produce a loss allocation system that would undermine incentives that currently exist to improve the technological infrastructure of Internet commerce. This paper was presented at a conference at American University in March 1999.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

distributed authentication

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: distributed authentication
Newsgroups: sci.crypt
Date: Sat, 30 Jun 2001 23:07:37 GMT
wahern@25thandClement.com (William Ahern) writes:
This is why I was originally more focused on identity and not authorization. My idea of identity was an identity used to bootstrap yourself into a system. Though, I do not fully understand the problems of malicious elements 'cooperating'. I was hoping introducing a mechanism of injecting an identity into a system non-deterministically would somehow thwart such cooperation.

If I can possibly be more vague ;) The Freenet Project speaks of the system being built to fight the cancer of malicious nodes in the network. I haven't found hard material on how THAT works... which I believe is a very similar problem to what affects the web-of-trust. Thus my queries being put to this news group.


as an aside from yesterday

        RFC 3127

        Title:      Authentication, Authorization, and Accounting:
                    Protocol Evaluation
        Author(s):  D. Mitton, M. St.Johns, S. Barkley, D. Nelson,
                    B. Patil, M. Stevens, B. Wolff
        Status:     Informational
        Date:       June 2001
        Mailbox:    dmitton@nortelnetworks.com,
                    stjohns@rainmakertechnologies.com,
                    stuartb@uu.net, dnelson@enterasys.com,
                    Basavaraj.Patil@nokia.com, mstevens@ellacoya.com,
                    barney@databus.com
        Pages:      86
        Characters: 170579
        Updates/Obsoletes/SeeAlso:    None

        I-D Tag:    draft-ietf-aaa-proto-eval-02.txt

        URL:        ftp://ftp.rfc-editor.org/in-notes/rfc3127.txt

This memo represents the process and findings of the Authentication,
Authorization, and Accounting Working Group (AAA WG) panel evaluating
protocols proposed against the AAA Network Access Requirements, RFC
2989.  Due to time constraints of this report, this document is not as
fully polished as it might have been desired.  But it remains mostly
in this state to document the results as presented.

This document is a product of the Authentication, Authorization and
Accounting Working Group of the IETF.

they reference a number of additional reports in progress

one way of getting to it is
https://www.garlic.com/~lynn/rfcietff.htm

& in the bottom frame scroll down to 3127.

if you click on "3127" ... it brings up all the keywords that are used for that rfc. clicking on any of those keywords will bring up a list of all RFCs associated with that keyword (i.e. RFC->keyword as well as keyword->RFC).

instead, if you click on "txt=nnnn" it will retrieve the actual RFC (via http).

It is also possible to retrieve it via FTP using the URL mentioned in the announcement.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Golden Era of Compilers

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Golden Era of Compilers
Newsgroups: comp.arch,alt.folklore.computers
Date: Sun, 01 Jul 2001 01:39:53 GMT
brightside writes:
Would you like to reconsider that rate. The IBM 084 (sorter) was the fastest card feeder around at the time of the 2540, and the 084 made 2000 cpm. But to do that it required vacuum fed feed knives, which the 2540 never had. ISTR the 2540 was 1000 cpm.

oops, finger/brain check

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Cray at Apple (was Re: Compaq confirms Intel Alpha spin-off)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Cray at Apple (was Re: Compaq confirms Intel Alpha spin-off)
Newsgroups: comp.arch
Date: Sun, 01 Jul 2001 01:46:38 GMT
Eric Smith <eric-no-spam-for-me@brouhaha.com> writes:
The engineering divisions of Ford and GM probably use Microsoft Word and Excel to write various documents and reports. Would you say that "GM uses Microsoft Office to design cars?" Technically it might be true in a limited sense, but it doesn't really give an accurate impression.

somewhat ot ...
https://www.garlic.com/~lynn/2000f.html#43 Reason Japanese cars are assembled in the US

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

any 70's era supercomputers that ran as slow as today's supercomputers?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: any 70's era supercomputers that ran as slow as today's  supercomputers?
Newsgroups: alt.folklore.computers
Date: Sun, 01 Jul 2001 15:40:40 GMT
kragen@dnaco.net (Kragen Sitaker) writes:
So how does this actually work? I've seen you and someone else explain that "clock" swaps pages out "like the hands of a clock sweeping through the address space", but I don't quite understand what that means. Does it mean this?

a one-bit, one-handed clock is something like

        lm   r2,r4          ... load saved registers
loop    ds   0h
        rrb  r2             ... reset & test reference bit
        bz   takeit         ... bit was off, take this page
        bxle r2,r4,loop     ... loop until we have a page
        l    r2,start       ... didn't find anything, start again
        b    loop           ... restart loop

takeit  ds   0h
        bxle r2,r4,+8       ... update pointer past page taken
        l    r2,start       ... looped, start at other end
        st   r2,            ... save restart point

rrb instruction did a reference bit test and reset.

bxle incremented a pointer and compared the resulting value and took a conditional branch.

if a page wasn't "taken" because its reference bit was on, it then had until the algorithm completely looped around memory (examining all the other pages) for program activity to reference the page again. On some systems we measured the "average depth of search" at four to six pages (i.e. the inner loop examined about six pages until it found one).

Given the MTBF (mean time between page faults) and the total number of pages, you could calculate the average time between testing a particular page's bits, i.e. if the average depth of search per fault was six, the total number of pages was 600, and the mean time between page faults was 10ms, then the average time for the algorithm to loop completely around once was (600/6)*10ms ... aka 1 second.

So on average, the reference bits of pages would fall into two categories: 1) those pages that had been referenced at least once a second (i.e. one second between having the reference bit tested and reset) and 2) those pages that had not been referenced at least once a second (and would be selected for replacement).

It approximated LRU by dividing all pages into two categories 1) those that had been referenced recently ... aka since the last reset and 2) those pages that had not been referenced recently ... aka since the last reset.

As the hand "sweeps" around memory, examining pages ... it harvests pages that don't have their reference bit turned on.

This is the original, one-handed algorithm that i did on a 360/67 for cp/67 in the 1968 time-frame as an undergraduate (and ibm incorporated into the product and shipped to customers). It differed from the above in that the RRB had to be done with several instructions ... since the RRB instruction wasn't introduced until the 370.
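for anybody that wants to play with the idea, a rough sketch of that one-bit, one-handed loop in python (names made up, obviously not the original code; the referenced flag stands in for the hardware reference bit, set elsewhere whenever the page is touched):

    class Frame:
        def __init__(self):
            self.referenced = False      # set on any access (simulated hardware bit)

    def select_victim(frames, hand):
        """Return (victim_index, new_hand_position)."""
        n = len(frames)
        while True:
            if frames[hand].referenced:
                frames[hand].referenced = False     # reset & test, like RRB
                hand = (hand + 1) % n               # page gets one more sweep of grace
            else:
                return hand, (hand + 1) % n         # not referenced since last reset: take it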

Later, after graduation I did both multi-bit and two-handed clock in the early '70s.

On the 370, there was only a single hardware bit, so a multi-bit algorithm had to be simulated in software. A multi-bit algorithm, instead of resetting a single bit, keeps multiple bits of reference history. It can be thot of as a one-bit logical shift of a multi-bit value ... with a zero bit introduced from one side and a bit dropping off the other side. The page is harvested if all bits are zero.

In effect, each bit would represent an amount of "history" equal to the complete sweep of the algorithm around all pages. Instead of having a single bit of history that represents a single sweep around memory, there are N bits, representing the state of the last N sweeps around all pages.

Therefore pages are divided into two categories, 1) pages that have been referenced sometime within the last N sweep(s) around memory and 2) pages that haven't been referenced in the last N sweep(s) around memory (and are harvested).
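one way to read that multi-bit (software history) variant as a sketch, again just for illustration (HISTORY_BITS and the names are made up):

    HISTORY_BITS = 4

    class AgedFrame:
        def __init__(self):
            self.referenced = False          # simulated hardware reference bit
            self.history = 0                 # software-maintained history bits

    def test_and_age(frame):
        """Return True if the page should be harvested on this sweep."""
        if frame.referenced:                 # touched since the last sweep
            frame.history |= 1               # record it in the newest position
            frame.referenced = False
        harvest = frame.history == 0         # untouched for the last N sweeps
        # age the history: zero shifted in on one side, oldest bit drops off
        frame.history = (frame.history << 1) & ((1 << HISTORY_BITS) - 1)
        return harvest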

A two-handed clock offsets the testing and resetting of pages. The two hands are separated by some number of pages ... say 1/2 of all pages. As the testing hand bumps one page at a time, searching for a page to harvest, the resetting hand is 1/2 of all pages "ahead", bumping along at the same rate. A one-bit, two-handed algorithm that is offset by 1/2 of all pages means that the reference bit only represents the amount of history equal to the time for the algorithm to sweep 1/2 of the available pages (rather than a complete sweep of all pages).

In the previous example of a 10ms mean time between page faults, 600 total pages, and avg. 6-page depth of search, the "resetting" hand ... rather than being exactly one sweep ahead of the "testing" hand, is only 1/2 sweep ahead. So that is 300 pages ahead ... (300/6)*10ms equals .5 seconds. In this scenario, pages are divided into two categories, those that have been referenced in the last 1/2 second (and won't be harvested) and those that have not been referenced in the last 1/2 second (which will be harvested).
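and a rough sketch of the one-bit, two-handed variant (python, names made up): the resetting hand runs a fixed offset ahead of the testing hand, so a page only has "offset" frames worth of sweep time to get re-referenced before the testing hand reaches it. In a real system references arrive from program activity concurrently with the sweep; here they would be set between calls.

    class Frame:
        def __init__(self):
            self.referenced = False

    def select_victim_two_handed(frames, test_hand, offset):
        """Return (victim_index, new_test_hand_position)."""
        n = len(frames)
        while True:
            reset_hand = (test_hand + offset) % n
            frames[reset_hand].referenced = False        # reset, "offset" frames ahead
            if not frames[test_hand].referenced:
                return test_hand, (test_hand + 1) % n    # untouched since the reset hand passed
            test_hand = (test_hand + 1) % n              # still in use, keep sweeping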

In a whole series of detailed simulation tests with full instruction and page reference traces, the clock sweep algorithm came within 10-15% of the accuracy of "true" LRU with almost negligible overhead.

Now, LRU is based on the assumption that the most recently used pages are the most likely to be used again in the future. It turns out that isn't always the case. There are lots of situations where that doesn't happen. That gave rise to the variation on the two-handed, N-bit clock that would automagically switch back and forth between 1) LRU-approximation (as in the above description) when LRU appeared to be performing well and 2) random when LRU appeared to not be performing well.

random refs:
https://www.garlic.com/~lynn/93.html#0 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/93.html#4 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/93.html#6 Self-virtualization and CPUs
https://www.garlic.com/~lynn/93.html#8 PowerPC Architecture (was: Re: PowerPC priced very low!)
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/94.html#01 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/94.html#1 Multitasking question
https://www.garlic.com/~lynn/94.html#2 Schedulers
https://www.garlic.com/~lynn/94.html#4 Schedulers
https://www.garlic.com/~lynn/94.html#8 scheduling & dynamic adaptive ... long posting warning
https://www.garlic.com/~lynn/94.html#10 lru, clock, random & dynamic adaptive
https://www.garlic.com/~lynn/94.html#14 lru, clock, random & dynamic adaptive ... addenda
https://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14
https://www.garlic.com/~lynn/94.html#31 High Speed Data Transport (HSDT)
https://www.garlic.com/~lynn/94.html#42 bloat
https://www.garlic.com/~lynn/94.html#46 Rethinking Virtual Memory
https://www.garlic.com/~lynn/94.html#49 Rethinking Virtual Memory
https://www.garlic.com/~lynn/94.html#51 Rethinking Virtual Memory
https://www.garlic.com/~lynn/94.html#54 How Do the Old Mainframes
https://www.garlic.com/~lynn/95.html#1 pathlengths
https://www.garlic.com/~lynn/95.html#2 Why is there only VM/370?
https://www.garlic.com/~lynn/95.html#12 slot chaining
https://www.garlic.com/~lynn/96.html#0a Cache
https://www.garlic.com/~lynn/96.html#0b Hypothetical performance question
https://www.garlic.com/~lynn/96.html#10 Caches, (Random and LRU strategies)
https://www.garlic.com/~lynn/96.html#11 Caches, (Random and LRU strategies)
https://www.garlic.com/~lynn/97.html#0 LRU in caches with bit decay
https://www.garlic.com/~lynn/97.html#28 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/97.html#29 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/98.html#2 CP-67 (was IBM 360 DOS (was Is Win95 without DOS...))
https://www.garlic.com/~lynn/98.html#17 S/360 operating systems geneaology
https://www.garlic.com/~lynn/98.html#26 Merced & compilers (was Re: Effect of speed ... )
https://www.garlic.com/~lynn/98.html#49 Edsger Dijkstra: the blackest week of his professional life
https://www.garlic.com/~lynn/98.html#54 qn on virtual page replacement
https://www.garlic.com/~lynn/99.html#18 Old Computers
https://www.garlic.com/~lynn/99.html#20 APL/360.
https://www.garlic.com/~lynn/99.html#78 Mainframes Relevant?
https://www.garlic.com/~lynn/99.html#95 Early interupts on mainframes
https://www.garlic.com/~lynn/99.html#102 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
https://www.garlic.com/~lynn/99.html#104 Fixed Head Drive (Was: Re:Power distribution (Was: Re: A primeval C compiler)
https://www.garlic.com/~lynn/99.html#175 amusing source code comments (was Re: Testing job applicants)
https://www.garlic.com/~lynn/99.html#237 I can't believe this newsgroup still exists.
https://www.garlic.com/~lynn/2000.html#52 Correct usage of "Image" ???
https://www.garlic.com/~lynn/2000.html#86 Ux's good points.
https://www.garlic.com/~lynn/2000c.html#35 What level of computer is needed for a computer to Love?
https://www.garlic.com/~lynn/2000d.html#11 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#13 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000f.html#9 Optimal replacement Algorithm
https://www.garlic.com/~lynn/2000f.html#10 Optimal replacement Algorithm
https://www.garlic.com/~lynn/2000f.html#18 OT?
https://www.garlic.com/~lynn/2000f.html#32 Optimal replacement Algorithm
https://www.garlic.com/~lynn/2000f.html#34 Optimal replacement Algorithm
https://www.garlic.com/~lynn/2000f.html#36 Optimal replacement Algorithm
https://www.garlic.com/~lynn/2000f.html#60 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000g.html#2 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#3 virtualizable 360, was TSS ancient history
https://www.garlic.com/~lynn/2000g.html#7 360/370 instruction cycle time
https://www.garlic.com/~lynn/2000g.html#16 360/370 instruction cycle time
https://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001.html#15 IBM Model Numbers (was: First video terminal?)
https://www.garlic.com/~lynn/2001.html#68 California DMV
https://www.garlic.com/~lynn/2001b.html#17 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001b.html#18 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001b.html#29 z900 and Virtual Machine Theory
https://www.garlic.com/~lynn/2001c.html#10 Memory management - Page replacement
https://www.garlic.com/~lynn/2001c.html#11 Memory management - Page replacement
https://www.garlic.com/~lynn/2001d.html#24 April Fools Day
https://www.garlic.com/~lynn/2001d.html#67 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001f.html#47 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#48 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#52 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#53 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#54 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#55 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#57 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#67 IBM mainframe reference online?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Did AT&T offer Unix to Digital Equipment in the 70s?

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Did AT&T offer Unix to Digital Equipment in the 70s?
Newsgroups: alt.folklore.computers
Date: Sun, 01 Jul 2001 15:58:32 GMT
"Gary A. Gorgen" writes:
"We" = "The wollongong Group", in this case.

I don't normally do this, but it may help decipher my ramblings.

Tymshare     1966-1973   Cupertino
Interdata    1975-1980   Santa Clara
Wollongong   1980-1983   Palo Alto


we worked with interdata 3 in the late 68 / early 69 time frame & it didn't have any unix on it, :-)

we built a 360 channel attach board and programmed the interdata 3 to simulate a 360 telecommunication control unit (getting blamed for originating the 360 PCM controller market).

random ref:
https://www.garlic.com/~lynn/submain.html#360pcm

tymshare became one of the large vm service bureaus in the 70s and continued up thru the 80s, when MD bought it up.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Root certificates

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Root certificates
Newsgroups: alt.computer.security
Date: Sun, 01 Jul 2001 16:02:08 GMT
Jim Watt writes:
It lays down an operating framework for approved certificate authorities and they are liable if they fail to 'take all measures reasonably practical to verify information' and specifies that they can be sued for loss or damage by someone who uses a certificate issued by them.

... however, if they are not the authoritative agency responsible for the information being verified ... "reasonably practical" means contacting the authoritative agency responsible for the information. The authoritative agency responsible for the information could still have it wrong ... and the certificate authorities are not liable.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Did AT&T offer Unix to Digital Equipment in the 70s?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Did AT&T offer Unix to Digital Equipment in the 70s?
Newsgroups: alt.folklore.computers
Date: Sun, 01 Jul 2001 18:45:18 GMT
"Gary A. Gorgen" writes:
You better tell me this was later than 1970. :-)))) Because a 2314 glued to a Model 5, probably would have qualified to solve the above problem. Maybe a little over-kill, but... .

Tymshare was building a selector channel, at that time to glue a 2314 onto the 940. I got to know how the 360 channel worked, rrrreal well.

I snatched a 360 channel adapter, from Ocean Port. I spent 5 years trying to find a use for it, other than 270X. Never did. Left it in the office.


as an undergraduate, we built our own channel adapter board for the interdata/3 in the late '68, early '69 time frame ... to use the interdata/3 as a replacement for the 2702 (i had tried doing dynamic terminal identification between 1052, 2740, 2741, and tty/ascii, and while the 2702 SAD command allowed lines to be switched to different line-scanners, they had taken a short cut in the 2702 and hard-wired an oscillator to each line, in theory fixing the baud rate). In the interdata/3 we strobed the in-coming line signal at a high rate for rise/fall in order to dynamically determine baud rate.
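the general shape of that auto-baud idea, as a toy python sketch (the sample rate and the baud table are made-up values, just for illustration): sample the incoming line much faster than any expected bit rate, take the shortest interval between transitions as roughly one bit cell, and snap it to the nearest standard rate.

    SAMPLE_HZ = 76800                      # strobe rate, well above the fastest line
    STANDARD_BAUDS = [110, 134.5, 300, 600, 1200]

    def guess_baud(samples):
        """samples: sequence of 0/1 line levels taken SAMPLE_HZ times per second."""
        runs, run = [], 1
        for prev, cur in zip(samples, samples[1:]):
            if cur == prev:
                run += 1
            else:
                runs.append(run)           # a run ends at each rise/fall
                run = 1
        if not runs:
            return None                    # no transitions seen, can't guess
        bit_time = min(runs) / SAMPLE_HZ   # shortest run ~ one bit cell
        measured = 1.0 / bit_time
        return min(STANDARD_BAUDS, key=lambda b: abs(b - measured))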

that board is claimed to have been the first non-ibm 360 channel adapter card, and the box the first non-ibm 360-compatible control unit.

Later the implementation was expanded to an Interdata/4 with multiple interdata/3s ... the interdata/4 handling the channel adapter card and misc. other functions with the interdata/3s dedicated to line-scanner function. Perkin/Elmer bought interdata and the box was sold under the Perkin/Elmer name.

As late as 1997, in a machine room tour, I ran across a perkin/elmer box still in production use as telecommunication controller for ibm mainframe. Talking to somebody about it, it still used a wire-wrap board that was potentially the same or hardly changed from what we had started on in '68.

interdatas from the following list:
http://www.komkon.org/fms/comp/misc/List.txt


Interdata 3         May 1967
Interdata 2         Jul 1968
Interdata 4         Aug 1968
Interdata 15        Jan 1969

somebody in an interdata thread in this n.g. (in the past year or so), gave a much more detailed chronology ... which i can't seem to find at the moment.

random refs:
https://www.garlic.com/~lynn/submain.html#360pcm

in school, i was still mostly doing stuff related to algorithms that drove disk/DASD ... not much on the really low level hardware.

I got a lot more familiar in the '78 time frame when I supplied the operating system to the engineering and product test labs (in bldgs. 14 & 15) so that all the san jose engineering development & test work went on under the operating system ... and any time the engineers couldn't find their bugs, they would blame it on me and I would have to come in and shoot the problem. It got to the point where I had a number of designs for optimised controller & disk interfaces ... but couldn't get the incremental business case thru to have the standard operating system, kernel, filesystem, etc. support shipped to customers (even tho i would show that five-year amortized costs would actually be less than the existing plan). As an example, STL at one point quoted me $26m to ship modifications to MVS for "vtoc" & "pds" support in a non-CKD environment (even if all the code was supplied to them).

random refs:
https://www.garlic.com/~lynn/subtopic.html#disk

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Did AT&T offer Unix to Digital Equipment in the 70s?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Did AT&T offer Unix to Digital Equipment in the 70s?
Newsgroups: alt.folklore.computers
Date: Sun, 01 Jul 2001 19:33:22 GMT
"Gary A. Gorgen" writes:
You better tell me this was later than 1970. :-)))) Because a 2314 glued to a Model 5, probably would have qualified to solve the above problem. Maybe a little over-kill, but... .

somewhat totally unrelated, but in late '70s and early '80s, I would get by (at least once/month) Tymshare (in cupertino), PASC (on page mill in palo alto), HONE (on california in palo alto), SLAC (on sand hill), and/or Dialog.

I also got brought in to do the due diligence on Gnosis as part of the MD purchase of Tymshare and the Gnosis spin-off as keykos (somewhere i still have a gnosis manual).

misc. gnosis references
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#22 No more innovation? Get serious
https://www.garlic.com/~lynn/2001b.html#73 7090 vs. 7094 etc.

random refs:
https://www.garlic.com/~lynn/subtopic.html#hone

the other way of "gluing" an IBM disk to some other device came along in the '80s with the HYPERchannel A510 & A515 channel adapters (i.e. devices that emulated ibm mainframe channels).

We had been doing work with Network Systems just about from their startup. They had even built a "special adapter", the A720, to my wife's specs.

In any case, I developed some amount of operating system support and spent time debugging HYPERchannel boxes. One project involved remoting 300 of the IMS support group out of STL/bldg. 90 to a different building 10-15 miles away (while the computer remained in bldg. 90), and allowing them to keep all their locally attached devices ... as if they were still in the same building.

I also did the IBM mainframe TCP/IP RFC 1044 support ... and while the standard product could nearly saturate a 3090 cpu getting 44kbytes/sec, I worked with Cray Research and tuned up the RFC 1044 support so that it ran at channel hardware speed between a 4341-clone and a cray (using only a nominal amount of the 4341's processor).

random refs:
https://www.garlic.com/~lynn/subnetwork.html#1044

Also, NCAR had a SAN (storage-area-network?) in the '80s using ibm disks and HYPERchannel A515s allowing various supercomputers to access data on IBM disks.

almost all the above was 80s tho.

random refs:
https://www.garlic.com/~lynn/2000c.html#68 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2001.html#21 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001.html#22 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001e.html#76 Stoopidest Hardware Repair Call?
https://www.garlic.com/~lynn/2001f.html#66 commodity storage servers

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Did AT&T offer Unix to Digital Equipment in the 70s?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Did AT&T offer Unix to Digital Equipment in the 70s?
Newsgroups: alt.folklore.computers
Date: Sun, 01 Jul 2001 20:08:44 GMT
Anne & Lynn Wheeler writes:
In any case, I developed some amount of operating system support and spent time debugging HYPERchannel boxes. One project involved remoting 300 of the IMS support group out of STL/bldg. 90 to a different building 10-15 miles away (while the computer remained in bldg. 90), and allowing them to keep all their locally attached devices ... as if they were still in the same building.

and for some real OT drift ...

STL/bldg. 90 was going to be dedicated the spring of '77.

The week before was (some) school spring break and I took the kids to washington DC. My oldest boy was especially taken with members of the San Fran Coyote union demonstrating on the steps of congress that week.

Now normally, bldgs/labs are named for the closest post office (if they aren't in a city &/or there is already a lab with that name) ... in this case it is Coyote, California (95013) ... and bldg. 90 was going to be called the Coyote Lab. Well, in the couple of days between the demonstrations in wash. dc and the actual dedication of bldg. 90, everything was changed from the Coyote Lab to the Santa Teresa Lab (for the nearest cross street? ... bailey & santa teresa).

Now, I have no idea what they would have done if the demonstrations had occurred after the dedication.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Did AT&T offer Unix to Digital Equipment in the 70s?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Did AT&T offer Unix to Digital Equipment in the 70s?
Newsgroups: alt.folklore.computers
Date: Mon, 02 Jul 2001 01:12:37 GMT
kragen@dnaco.net (Kragen Sitaker) writes:
In your view, was KeyKOS really as technically superior as this article claims? Did you ever use it yourself?

tymshare was a service bureau. one of the objectives was that gnosis could be used as a delivery vehicle for a lot of "3rd party" technology & offerings ... all wrapped together for customer solutions.

there were lots of capability-based features ... but i estimated that possibly 1/3rd of the processor would be taken up just updating the accounting information as capability boundaries were crossed (not only would the consumer have rate billing ... but "3rd party" offerings would get rate reimbursement for consumer use of their offerings). A customer might get billed for doing some database analysis. That billing would include resource usage (disk, cpu, memory, etc). The actual analysis program, the database DBMS, the actual database, and the kernel would all have billing components and could each be supplied by a different vendor; the kernel by Tymshare, and the other three pieces by three separate 3rd party vendors using Tymshare as the delivery platform for their products. The consumer would get billed for the aggregate, but the 3rd party vendors would get a pro-rated reimbursement of their share.

some amount of that disappeared once it got into keykos. keykos then started to look like a much better TPF offering than TPF (transaction processing facility, which grew out of PARS & ACP ... the airline control program).

Tom Simpson at Amdahl was also trying to out "MVS" MVS about the same time.

One of the things they all learned was that building a real mainframe, industrial-quality operating system was expensive. Building low-level infrastructure that attempts to recover from all possible failures is an interesting task ... and then keeping up maintenance & life-cycle gets even more expensive (new device support with recovery for all possible failure modes is an example).

I ran into people at RSA '97 ('98?) still attempting to do something with Keykos ... i believe in the context of the Flex/OS (or one of the other intel-based mainframe emulators).

misc refs to operating system and supporting disk engineering & test environment (effort for "never fail")
https://www.garlic.com/~lynn/subtopic.html#disk

random pars/acp/tpf stuff:
https://www.garlic.com/~lynn/2000.html#0 2000 = millennium?
https://www.garlic.com/~lynn/2000.html#94 Those who do not learn from history...
https://www.garlic.com/~lynn/2000b.html#20 How many Megaflops and when?
https://www.garlic.com/~lynn/2000b.html#32 20th March 2000
https://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#65 oddly portable machines
https://www.garlic.com/~lynn/2000d.html#9 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000e.html#21 Competitors to SABRE? Big Iron
https://www.garlic.com/~lynn/2000e.html#22 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000f.html#20 Competitors to SABRE?
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001b.html#37 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001e.html#2 Block oriented I/O over IP
https://www.garlic.com/~lynn/94.html#12 360 "OS" & "TSS" assemblers
https://www.garlic.com/~lynn/94.html#26 Misc. more on bidirectional links
https://www.garlic.com/~lynn/96.html#29 Mainframes & Unix
https://www.garlic.com/~lynn/96.html#35 Mainframes & Unix (and TPF)
https://www.garlic.com/~lynn/96.html#36 Mainframes & Unix (and TPF)
https://www.garlic.com/~lynn/99.html#24 BA Solves Y2K (Was: Re: Chinese Solve Y2K)
https://www.garlic.com/~lynn/99.html#233 Computer of the century

possible Amdahl RASP/aspen/simpson stuff
https://www.garlic.com/~lynn/2000f.html#68 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#70 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#41 Egghead cracked, MS IIS again
https://www.garlic.com/~lynn/2001b.html#73 7090 vs. 7094 etc.

and gnosis/keykos
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#22 No more innovation? Get serious
https://www.garlic.com/~lynn/2001b.html#73 7090 vs. 7094 etc.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

What was object oriented in iAPX432?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What was object oriented in iAPX432?
Newsgroups: alt.folklore.computers
Date: Mon, 02 Jul 2001 01:16:55 GMT
Pete Fenelon writes:
A lot of the 432 ideas are paralleled in the IBM S/36 and S/38 which of course gave rise to the AS/400 -- but on the IBMs the object orientation is at the "system" level rather than purely in hardware (esp. as modern AS/400s are PowerPCs!). The Plessey 250 was capability based in a fairly similar way to the 432 too, but it was slightly older technology :) and it was actually fairly difficult to find anyone even inside Plessey who knew much about it (I spent several tedious summer vacations working for part of Plessey as an undergrad in the 80s.)

as an aside much of the s/38 was taken from the canceled FS (future system) project.

https://www.garlic.com/~lynn/96.html#24 old manuals
https://www.garlic.com/~lynn/99.html#100 Why won't the AS/400 die? Or, It's 1999 why do I have to learn how to use
https://www.garlic.com/~lynn/99.html#237 I can't believe this newsgroup still exists.
https://www.garlic.com/~lynn/2000.html#3 Computer of the century
https://www.garlic.com/~lynn/2000f.html#16 [OT] FS - IBM Future System
https://www.garlic.com/~lynn/2000f.html#17 [OT] FS - IBM Future System
https://www.garlic.com/~lynn/2000f.html#18 OT?
https://www.garlic.com/~lynn/2000f.html#21 OT?
https://www.garlic.com/~lynn/2000f.html#27 OT?
https://www.garlic.com/~lynn/2000f.html#28 OT?
https://www.garlic.com/~lynn/2000f.html#30 OT?
https://www.garlic.com/~lynn/2000f.html#37 OT?
https://www.garlic.com/~lynn/2000f.html#40 Famous Machines and Software that didn't
https://www.garlic.com/~lynn/2000f.html#56 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#18 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001d.html#44 IBM was/is: Imitation...
https://www.garlic.com/~lynn/2001e.html#4 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001f.html#30 IBM's "VM for the PC" c.1984??
https://www.garlic.com/~lynn/2001f.html#33 IBM's "VM for the PC" c.1984??
https://www.garlic.com/~lynn/2001f.html#43 Golden Era of Compilers

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Thread drift: Coyote Union (or Coyote Ugly?)

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Thread drift: Coyote Union (or Coyote Ugly?)
Newsgroups: alt.folklore.computers
Date: Mon, 02 Jul 2001 03:33:56 GMT
Lars Poulsen writes:
Lynn Wheeler wrote:
>The week before was (some) school spring break and I took the kids to
>washington DC. My oldest boy was especially taken with members of the
>San Fran Coyote union demonstrating on the steps of congress that
>week.

The term "coyote union" is not known to me, so I asked Google to search. This is the closest I could find, but I don't think this is it:


try

http://www.uri.edu/artsci/wms/hughes/catw/sanfrtf.htm
https://web.archive.org/web/20010802195402/http://www.uri.edu/artsci/wms/hughes/catw/sanfrtf.htm
http://www.aic.gov.au/publications/lcj/working/ch5-2.html
http://www.bayswan.org/streetwise.html

for a better idea.

as another aside that week was also the week (or maybe two) before the opening of the smithsonian air & space museum

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

distributed authentication

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: distributed authentication
Newsgroups: sci.crypt
Date: Mon, 02 Jul 2001 13:32:26 GMT
Mok-Kong Shen <mok-kong.shen@t-online.de> writes:
I believe that, much like the perpetuum mobile, 'absolute' security (in general or in authentication) is impossible to achieve. If I physically meet a person and he shows me a passport or identity card, am I sure that the passport is not faked? Fingerprint check is not 'absolutely' reliable. And what about the issue of twins? What if he later denies our meeting? Are written signatures 'absolutely' secure? So I believe that one has to make subjective but conservative decisions of when 'sufficient' security has been attained in a given particular situation rather than (in vain) pursuing perfect goals of the theoreticians, which like perfect beauty, etc. simply don't exist in this real world.

typically, perfect identity is associated with issues like non-repudiation.

the reverse is to look at something like RADIUS, which is a widely implemented distributed authentication infrastructure (although typically deployed with passwords, with lots of false authentication possibilities), and see what happens if the passwords were replaced with digital signature hardware tokens (not even needing certificates).

What is the difference in the probability of false authentication between a password-based RADIUS and a digital-signature, hardware-token-based RADIUS (1-factor authentication)?

Then what is the difference in that probability moving to a pin-controlled digital signature hardware token compared to a non-pin-controlled digital signature hardware token (2-factor authentication)?

What are the differences between attacks and exploits on different kinds of hardware tokens.

At some point, the "problem" of false authentication starts to drop into the very small range.

Typically, businesses and other real-world entities deal in terms of risk management. In the past, the risks have been primarily associated with "insiders" (i.e. perfectly authenticated) and not false authentication. At some point the problem of false authentication drops into very much a background issue because various kinds of exploits dealing with perfect authentication (aka insiders) will dominate.

Now the risk issues (with insiders) start to become things like motivation, coercion, bribery, etc. Business risk tends to go after the most probable. Reducing false authentication below some threshold means that the risk issues associated with perfect authentication (insiders) start to dominate.

Myopically focusing on trying to make a single aspect absolutely perfect probably means that some other avenue large enough to drive a 747 through is being left open.

... misc risk threads.
https://www.garlic.com/~lynn/aadsm2.htm#strawm3 AADS Strawman
https://www.garlic.com/~lynn/aadsm3.htm#cstech3 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aadsm3.htm#cstech4 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aadsm3.htm#cstech9 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aadsm3.htm#cstech13 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aadsm3.htm#kiss2 Common misconceptions, was Re: KISS for PKIX. (Was: RE: ASN.1 vs XML (used to be RE: I-D ACTION :draft-ietf-pkix-scvp-00.txt))
https://www.garlic.com/~lynn/aadsmore.htm#bioinfo3 QC Bio-info leak?
https://www.garlic.com/~lynn/aadsmore.htm#debitfraud Debit card fraud in Canada
https://www.garlic.com/~lynn/aadsmore.htm#biosigs biometrics and electronic signatures
https://www.garlic.com/~lynn/aepay3.htm#riskm The Thread Between Risk Management and Information Security
https://www.garlic.com/~lynn/aepay3.htm#riskaads AADS & RIsk Management, and Information Security Risk Management (ISRM)
https://www.garlic.com/~lynn/aepay3.htm#x959risk1 Risk Management in AA / draft X9.59
https://www.garlic.com/~lynn/aepay3.htm#x959risk2 Risk Management in AA / draft X9.59
https://www.garlic.com/~lynn/aepay3.htm#x959risk3 Risk Management in AA / draft X9.59
https://www.garlic.com/~lynn/aepay3.htm#x959risk4 Risk Management in AA / draft X9.59

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

X.25

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X.25
Newsgroups: alt.folklore.computers
Date: Mon, 02 Jul 2001 16:26:32 GMT
john.cc@nospam.europlacer.co.uk (John Carlyle-Clarke) writes:
On this subject, I worked a bit with X.25 systems and there were a few things I always wondered about them. (I use X.25 here - like most people - to refer to the whole raft of protocols that allow communication over these networks).

I have a publication:
The X.25 Protocol and Seven Other Key CCITT Recommendations X.1, X.2, X.3, X.21, X.21bis, X.28, and X.29 ... dated 1983;

subtitle
Reprinted in a complete and unabridged form as they appear in Fascicle VIII.2 of the Yellow Book of the Consultative Committee for International Telegraph and Telephone, 1981 edition.

from the x.1 opening recommendation
International User Classes of service in Public Data Networks

(Geneva, 1972; amended at Geneva, 1976 and 1980)

The establishment in various countries of public networks for data transmission creates a need to standardize user data signalling rates, terminal operating modes, and call control signals to facilitate international networking.

Recommendations in the V Series already standardize data signalling rates for synchronous data transmission in the general telephone network and modulation rates for modems. These rates are, however, not necessarily the most suitable for public networks devoted entirely to data transmission and this leads to the requirement for an additional Recommendation.


from the X.25 opening recommendation
Interface between data terminal equipment (DTE) and Data Circuit terminating equipment (DCE) for terminals operating in the packet mode on public data networks

(Geneva, 1976; amended at Geneva, 1980)

The establishment in various countries of public data networks providing packet-switched data transmission services creates a need to produce standards to facilitate international internetworking


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Self-Signed Certificate

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Self-Signed Certificate
Newsgroups: sci.crypt
Date: Mon, 02 Jul 2001 19:24:10 GMT
Flip writes:
Hi All,

can someone please explain the difference between a Digital Certificate versus a Self-Signed Certificate.

It appears that the self-signed certificate is the root CA certificate of a hierarchy.

Can someone please correct me if this is wrong?

Also, when you have a root certificate, a manufacturer certificate and a subscriber station certificate, is the root certificate only verified for self-signed, and the manufacturer certificate contains data to verify the subscriber station certificate.

I am obviously confused on the relationship by this three tiered hierarchy is anyone wishes to help clarify.

Thank you ... Wilson


there is frequently a need to prove that the entity that claims ownership of a particular public key actually has the corresponding private key.

during the public key registration scenario, the entity needs to sign with their private key some sort of credential containing the public key and then send it off for some sort of validation, registration, and storing in some sort of repository.

these things are frequently out-of-band, independent, and presumably trusted processes ... and can be part of any sort of registration process. One of the places that they can then re-appear is in the manufacturing and distribution of browsers, i.e. the browser manufacturer has presumably gone thru some vetting process associated with the credentials that it incorporates as part of the browsers they distribute.

It is possible to build an offline infrastructure for handling trust propagation ... where these out-of-band credentials are then treated as the root certificates for some sort of offline trust propagation infrastructure. A simple two-level scheme is possible where a browser is pre-built with some number of these self-signed credentials and then is programmed to accept any certificate signed by any private key corresponding to a public key in the pre-built list. It is also possible to deploy an offline trust propagation infrastructure that recurses to arbitrary depths. In such scenarios, the entity being authenticated may have to supply the certificates associated with the intermediate levels to the relying party.
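a minimal sketch of that simple two-level scheme (python, using the third-party "cryptography" package; the Credential record and names are made up for illustration, and a real certificate obviously carries far more structure than a bare signed public key):

    from dataclasses import dataclass
    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.exceptions import InvalidSignature

    @dataclass
    class Credential:
        subject_key_bytes: bytes        # the public key being vouched for
        signature: bytes                # a root's signature over those bytes

    # the pre-built, self-signed root credentials boil down to a list of
    # public keys the relying party is willing to trust
    def accepted(credential, trusted_root_keys):
        for root in trusted_root_keys:
            try:
                root.verify(credential.signature, credential.subject_key_bytes)
                return True             # signed by one of the pre-built roots
            except InvalidSignature:
                continue
        return False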

It is also possible to register self-signed public key entity credentials as part of an online trust infrastructure .... aka an electronic analogy to the "signing cards" used by financial institutions ... or a public key upgrade for RADIUS password registration.

misc. postings from ssl/tls cert threads
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IA64 Rocks My World

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IA64 Rocks My World
Newsgroups: comp.arch,comp.os.vms,comp.sys.dec
Date: Mon, 02 Jul 2001 22:27:11 GMT
name99@mac.com (Maynard Handley) writes:
As I recall when RS/6000 first came out, there were three models (maybe 4). There was a mondo server type machine, a tower type machine, and a desktop type machine of much the same size and shape as a desktop PC of the time. While I was aware that the higher end machines were multiple chips, I thought those low-end desktops were single chip, but heck, I can believe it if you say they were also multichip.

Maynard


Announced early 1990

320 ... desktop
520 ... deskside
330 ... desktop
530 ... deskside
730 ... "wide" deskside w/vmebus for special graphics card
930 ... rack mount

effectively same chipset and motherboard

x20 was clocked at 20MHz
x30 was clocked at 25MHz

then came x40s, and x50s

x50s were announced Oct. 1990 and had a 41.6 MHz clock rate

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

X.25

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X.25
Newsgroups: alt.folklore.computers
Date: Tue, 03 Jul 2001 12:54:31 GMT
john.cc@nospam.europlacer.co.uk (John Carlyle-Clarke) writes:
On this subject, I worked a bit with X.25 systems and there were a few things I always wondered about them. (I use X.25 here - like most people - to refer to the whole raft of protocols that allow communication over these networks).

random piece of trivia: the first financial transaction executed by the small client/server startup (in the following references) was via a Sun server with a serial/RS-232 connection to an x.25 pad driving a leased line.

https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

The Alpha/IA64 Hybrid

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Alpha/IA64 Hybrid
Newsgroups: comp.arch,comp.sys.intel,comp.os.vms,comp.unix.tru64,comp.sys.dec,comp.os.linux.alpha
Date: Tue, 03 Jul 2001 14:23:05 GMT
young_r@encompasserve.org (Rob Young) writes:
Prior to failover becoming widely available, we had a customer that had two RS/6000s. One primary, and one backup that sat in a closet. They would periodically run a fire drill by wheeling the box out of the closet and plugging it in to the storage, simulating a failed primary. That paradigm has advanced to the point whereby you can leave it plugged in and have nice Perl scripts to failover your application(s). Cute.

Rob


we had fallover announced and shipping in mid-'90 on the 6000, and various kinds of cluster in '92 (although it was running earlier at some sites prior to general availability).

we had both done various mainframe cluster projects in the '70s.

misc. refs:
https://www.garlic.com/~lynn/95.html#13
https://www.garlic.com/~lynn/98.html#40
https://www.garlic.com/~lynn/99.html#71
https://www.garlic.com/~lynn/2001b.html#50
https://www.garlic.com/~lynn/2001e.html#4
https://www.garlic.com/~lynn/2000c.html#45
https://www.garlic.com/~lynn/subtopic.html#hacmp
https://www.garlic.com/~lynn/subtopic.html#hone

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

The Alpha/IA64 Hybrid

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Alpha/IA64 Hybrid
Newsgroups: comp.arch,comp.sys.intel,comp.os.vms,comp.unix.tru64,comp.sys.dec,comp.os.linux.alpha
Date: Tue, 03 Jul 2001 21:02:08 GMT
Bob Willard writes:
VMS had fallover running in '77. Failover took a bit longer. -- Cheers, Bob

yep ... we had worked on loosely-coupled (including fall-over & fail-over) starting in the '60s and continuing thru the '70s. My wife was in the g'burg JES group ... both JES2 and JES3 having cluster/loosely-coupled support ... and then was con'ed into going to POK to be responsible for loosely-coupled (aka cluster) architecture (there were two multi-cpu architectures ... one person responsible for SMP architecture and my wife responsible for loosely-coupled/cluster architecture). She originated the Peer-Coupled Shared Data architecture ... which was subsequently the basis for ims hot-standby and sysplex.

misc. refs:
https://www.garlic.com/~lynn/98.html#30 Drive letters
https://www.garlic.com/~lynn/98.html#35a Drive letters
https://www.garlic.com/~lynn/98.html#37 What is MVS/ESA?
https://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP?
https://www.garlic.com/~lynn/99.html#71 High Availabilty on S/390
https://www.garlic.com/~lynn/99.html#77 Are mainframes relevant ??
https://www.garlic.com/~lynn/99.html#92 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/99.html#100 Why won't the AS/400 die? Or, It's 1999 why do I have to learn how to use
https://www.garlic.com/~lynn/99.html#128 Examples of non-relational databases
https://www.garlic.com/~lynn/2000.html#13 Computer of the century
https://www.garlic.com/~lynn/2000f.html#30 OT?
https://www.garlic.com/~lynn/2000f.html#37 OT?
https://www.garlic.com/~lynn/2001b.html#73 7090 vs. 7094 etc.
https://www.garlic.com/~lynn/2001c.html#69 Wheeler and Wheeler
https://www.garlic.com/~lynn/2001d.html#71 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001e.html#2 Block oriented I/O over IP

it took some time to start translating that to the Unix platform on commodity components in the '80s.

you would really be surprised at the people that currently run around espousing various cluster stuff that we had major arguments with in the late '80s and early '90s (both from the standpoint of concurrent operation, scale-up, fall-over, continuous operation, etc.).

as per a thread in this ng in april ... we used similar semantics for HA/CMP DLI as found in VMS (because there was effectively a port of some DBMS from vax cluster to ha/cmp). As per the attached ... some of the DBMS vendors that had vms cluster (failover) implementations ... also had their list of "things needing fixing" which we were able to benefit from in doing the implementation for ha/cmp.

misc. refs:
https://www.garlic.com/~lynn/2001e.html#2
https://www.garlic.com/~lynn/2001e.html#4

random refs:
https://www.garlic.com/~lynn/subtopic.html#hacmp
https://www.garlic.com/~lynn/subtopic.html#hone

slightly related from RAS standpoint:
https://www.garlic.com/~lynn/subtopic.html#disk

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Did AT&T offer Unix to Digital Equipment in the 70s?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Did AT&T offer Unix to Digital Equipment in the 70s?
Newsgroups: alt.folklore.computers
Date: Tue, 03 Jul 2001 22:19:01 GMT
kragen@dnaco.net (Kragen Sitaker) writes:
Why was it much better? According to that article, it was still much slower than TPF.

TPF has almost no operating system.

My youngest had a job during college with an air freight forwarder. Typical TPF "res" systems not only support airline reservations ... but other types of reservations.

Various airline res systems had to periodically be shut down so all of their stuff could re-org the infrastructure and then be brought back online. As things have gone 7x24, it has been harder and harder to find 8hr (or longer) windows for their downtime.

Also, the quote was that TPF (in part because it had very little "system") didn't have a good entry level delivery ... aka it took huge amounts of care and feeding ... so deploying an entry level system capable of doing only 300 trans/sec was nearly as expensive as deploying a 1000 trans/sec infrastructure.

TPF systems doing 3500 trans/sec or more aren't unusual.

misc. ref:
https://www.garlic.com/~lynn/96.html#29 Mainframes & Unix
https://www.garlic.com/~lynn/99.html#152 Uptime (was Re: Q: S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#153 Uptime (was Re: Q: S/390 on PowerPC?)
https://www.garlic.com/~lynn/2000f.html#20 Competitors to SABRE?
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)

random refs:
https://www.garlic.com/~lynn/96.html#31 Mainframes & Unix
https://www.garlic.com/~lynn/96.html#35 Mainframes & Unix (and TPF)
https://www.garlic.com/~lynn/96.html#36 Mainframes & Unix (and TPF)
https://www.garlic.com/~lynn/99.html#100 Why won't the AS/400 die? Or, It's 1999 why do I have to learn how to use
https://www.garlic.com/~lynn/99.html#103 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
https://www.garlic.com/~lynn/99.html#136a checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#153 Uptime (was Re: Q: S/390 on PowerPC?)
https://www.garlic.com/~lynn/2000.html#31 Computer of the century
https://www.garlic.com/~lynn/2000.html#61 64 bit X86 ugliness (Re: Williamette trace cache (Re: First view of Willamette))
https://www.garlic.com/~lynn/2000b.html#20 How many Megaflops and when?
https://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#65 oddly portable machines
https://www.garlic.com/~lynn/2000c.html#60 Disincentives for MVS & future of MVS systems programmers
https://www.garlic.com/~lynn/2000d.html#9 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000e.html#21 Competitors to SABRE? Big Iron
https://www.garlic.com/~lynn/2000e.html#22 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2001.html#58 Disk drive behavior
https://www.garlic.com/~lynn/2001b.html#37 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001d.html#69 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001d.html#74 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001e.html#2 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001g.html#35 Did AT&T offer Unix to Digital Equipment in the 70s?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

The Alpha/IA64 Hybrid

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Alpha/IA64 Hybrid
Newsgroups: comp.arch,comp.sys.intel,comp.os.vms,comp.unix.tru64,comp.sys.dec,comp.os.linux.alpha
Date: Wed, 04 Jul 2001 03:38:36 GMT
Kilgallen@eisner.decus.org.nospam (Larry Kilgallen) writes:
So far as I know, the term was first used by VMS about 1985, for what they offered then (not materially different from today with regard to the capabilities under discussion).

Was there a prior use of "cluster" in this context ?

If not, then lesser copycat efforts are not really clusters. Greater copycat efforts are, but I have not heard of any, including Tru64. The net effect includes the operating system and the software that runs on it. Absent RMS or an equivalent, programs have to be specially written to make use of the cluster capabilities.

There is a Sun fellow here in comp.os.vms who insists that Sun has something equivalent if you just add legacy Oracle. Somehow I doubt that all utilities on Solaris access disk through legacy Oracle.


os/360 from the 60s was designed that way ... so was PARS (which morphed into ACP, the airline control program ... which eventually morphed into the current TPF ... transaction processing facility).

random pars reference
https://www.garlic.com/~lynn/99.html#24

both os/360 and PARS were operating systems for 360 mainframes that had concurrent filesystem disks in the 60s. A facility that ran on top of os/360 in the '60s was ASP (which morphed into JES3 in the early 70s), which also supported multi-system processing. While HASP didn't have similar facilities, they were added when it morphed into JES2 ... also in the early to mid-70s.

misc. ASP reference at UCLA
https://www.garlic.com/~lynn/2000.html#77

Because of the high transaction processing rate for ACP in a multiple-system complex, the device-level reserve/release for updates was proving cumbersome and so fine-grain locking was added to the disk (3830) controller in the early '70s ... originally for ACP. The 3830 controller was upgraded to the 3880 controller in the late '70s.

For a specific installation, I helped put together possibly the largest single-system image in the world (at the time) in late '77 and early '78 for the hone system ... a complex of eight mainframe multiprocessors all sharing a large, football-field-sized disk (DASD) farm. HONE was world-wide "field", "sales" and "customer" support. In the early '80s, the configuration was replicated in Dallas and Boulder, in part for disaster survivability (earthquake in cal.)

Dialog (a couple miles away, at the time still owned by lockheed) possibly had a larger disk (DASD) farm, but didn't have as many of the (same kind of) mainframe processors.

My wife worked in the JES group in the mid-70s and then was con'ed into going to POK to be responsible for loosely-coupled (i.e. non-shared memory, cluster) multiple system architecture. There she was responsible for Peer-Coupled Shared Data architecture which was the basis for ims hot-standby and sysplex.

in fact, one of the "problems" of the os/360 genre of operating systems has been the careful propensity of committing everything to disk, just in case another system might need to read it from disk (and effectively a tendency to not trust any caching), dating from the '60s.

the current incarnation on the mainframe is "parallel sysplex" (which can show lineage back to the mid-60s)

http://www-1.ibm.com/servers/eserver/zseries/zos/psysplex.html
http://www.research.ibm.com/journal/sj/362/nick.html

random refs:
https://www.garlic.com/~lynn/subtopic.html#hone
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

The Alpha/IA64 Hybrid

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Alpha/IA64 Hybrid
Newsgroups: comp.arch,comp.sys.intel,comp.os.vms,comp.unix.tru64,comp.sys.dec,comp.os.linux.alpha
Date: Wed, 04 Jul 2001 03:56:00 GMT
Anne & Lynn Wheeler writes:
os/360 from the 60s was designed that way ... so was PARS (which morphed into ACP, the airline control program ... which eventually morphed into the current TPF ... transaction processing facility).

some random tpf fluff
http://content.techweb.com/wire/story/TWB19980404S0001
http://www.blackbeard.com/tpf/tpfhist.htm
http://www.tpfug.org/Background/history.htm

some past threads
https://www.garlic.com/~lynn/96.html#29
https://www.garlic.com/~lynn/96.html#31
https://www.garlic.com/~lynn/2000f.html#20

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

The Alpha/IA64 Hybrid

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Alpha/IA64 Hybrid
Newsgroups: comp.arch,comp.sys.intel,comp.os.vms,comp.unix.tru64,comp.sys.dec,comp.os.linux.alpha
Date: Wed, 04 Jul 2001 04:15:10 GMT
Anne & Lynn Wheeler writes:
concurrent filesystem disks in the 60s. A facility that ran on top of os/360 in the '60s was ASP (which morphed into JES3 in the early 70s) which also supported multi-system processing. While HASP didn't have similar facilities, it was added to it when it morphed into JES2 ... also in the early to mid-70s.

.... note that while there was multi-system disk support in the base os/360 system, it took a while for various things to evolve to take advantage of it (mid to late '60s)

from:
http://www.share.org/Industry_Influence_Successes.pdf
Multi-System Support in OS/360 and Follow-ons

SHARE members originally developed both local and remote multi-system support for OS/360 MFT/MVT using the HASP spooling subsystem. NIH, the National Institutes of Health, developed Multi-Access Spool, which allowed multiple OS/360-HASP systems to share workload via shared DASD and shared spool. TUCC, the Triangle Universities Computing Center, developed a Remote HASP-to-HASP facility. Eventually both facilities were incorporated by IBM into JES2, becoming JES2 MAS and NJE (Network Job Entry) (see below).

JES2 and JES3

When MVS was finally available in the mid-1970s it had two Job Entry Subsystems (spooling systems), JES2 and JES3. The system shipped with JES2 (HASP Version 5) with JES3 coming somewhat later, and JES2 was seen by IBM as being a dead-end, with significant enhancements such as multiple system support and remote support planned only for JES3. Very heavy user support for JES2 from SHARE ultimately caused IBM to continue to support and enhance both JES2 and JES3, and both local and remote multiple system support in JES2 were adopted from original developments of SHARE installations (see above).


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Did AT&T offer Unix to Digital Equipment in the 70s?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Did AT&T offer Unix to Digital Equipment in the 70s?
Newsgroups: alt.folklore.computers
Date: Wed, 04 Jul 2001 14:10:15 GMT
kragen@dnaco.net (Kragen Sitaker) writes:
Why was it much better? According to that article, it was still much slower than TPF.

also ... some recent TPF related stuff in thread on smp and loosely-coupled/clusters
https://www.garlic.com/~lynn/2001g.html#43 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001g.html#44 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001g.html#46 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001g.html#47 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001g.html#48 The Alpha/IA64 Hybrid

for a short period before my wife was manager of 6000 engineering architecture, she served a stint as chief architect for Amadeus, the Eastern/European airline res system enhancement project.

She was backing x.25 as the infrastructure deployment, and the x.25 vis-a-vis SNA issue became political and she was replaced. It turns out the decision was made to go with x.25 in any case (and it saw pretty large deployment ... although the eastern part of it seems to be long gone).

random aside, x.25 thread running in this ng.
https://www.garlic.com/~lynn/2001g.html#29 X.25
https://www.garlic.com/~lynn/2001g.html#42 X.25

random other stuff
https://www.garlic.com/~lynn/subtopic.html#smp
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Did AT&T offer Unix to Digital Equipment in the 70s?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Did AT&T offer Unix to Digital Equipment in the 70s?
Newsgroups: alt.folklore.computers
Date: Wed, 04 Jul 2001 16:37:17 GMT
jra@dorothy.msas.net (Jay R. Ashworth) writes:
And it's since gone to token over frame (which may be the same thing) and is about to transition to IP/Ethernet over frame. Viewed from out here at the pointy end, that is. Don't know the details.

remember that in addition to any "internal" infrastructure (say at airport gates) there are all the terminals at travel agencies (especially for europe and the rest of the world, the reason amadeus went x.25 in the first place).

I would expect that a lot of the travel agency terminals at random locations around the world might still be x.25. There have been possibly hundreds of thousands of these terminals ... and change-over from frame to x.25 around the world wouldn't be cheap (even if frame was available in some of these places).

I recently had the opportunity to compare DSL and frame rates, and the telcos are very happy to offer frame at thirty times the charge of DSL at the same data rate.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Did AT&T offer Unix to Digital Equipment in the 70s?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Did AT&T offer Unix to Digital Equipment in the 70s?
Newsgroups: alt.folklore.computers
Date: Wed, 04 Jul 2001 16:55:15 GMT
Anne & Lynn Wheeler writes:
I would expect that a lot of the travel agency terminals at random locations around the world might still be x.25. There have been possibly hundreds of thousands of these terminals ... and change-over from frame to x.25 around the world wouldn't be cheap (even if frame was available in some of these places).

oops, finger slip ... that should have read .. and change-over from x.25 to frame around the world wouldn't be cheap ^^^^^^^^^^^^

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Compaq kills Alpha

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Compaq kills Alpha
Newsgroups: comp.sys.dec,comp.unix.tru64,comp.arch
Date: Sat, 07 Jul 2001 18:07:13 GMT
"Bill Todd" writes:
And over time one of those new ideas was that since it did not appear possible to always be able to recover, separating highly-available operation from the SMP model might make more sense (since if you have two machines with independent failure modes, you can always recover) than continuing to increase the base system complexity (in making it monitor its internal function at increasingly fine levels of detail) in an effort to make the unrecoverable error set smaller and smaller.

That separation took (at least) two forms. One was clustering, and the other lock-step hardware redundancy. Clustering lets you make essentially full use of the redundant hardware (and can operate with extremely low inter-node latencies in single, partitioned boxes) but still leaves open the possibility that an undetected error will escape to do damage before it's noticed, while lock-step-style redundancy makes no use of the redundant hardware for anything but its effect on reliability/availability when such expense is justified.


I know that the IDC service bureau was doing a form of clustering between San Fran and Waltham (& within waltham) circa mid to late '70s (that included process migration between san fran and waltham in support of 7x24 service around the world ... where either san fran or waltham could be taken down and service continued).

from recent thread in alt.folklore.computers

but as per the following ... CP/67 on the s360/67 would "fast" reboot. the following "bug" (27 reboots in a single day) was a combination of mine and his. I had done the tty/ascii support while an undergraduate and IBM had incorporated it into the product and shipped it to customers. I had essentially done 1-byte arithmetic on figuring lengths (max 256 bytes). I believe they added support for a tty "plotter" that had long line lengths (certainly longer than supported by a standard 33/35) and modified some (but not all) of the code accordingly.
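
a toy sketch of that kind of 1-byte length problem (a python stand-in, not the original CP/67 code): lengths computed modulo 256 are fine for 33/35-style lines but wrap around once a device with longer lines shows up.

def one_byte_length(line):
    # length as 1-byte arithmetic would compute it
    return len(line) & 0xFF

short_line = b"x" * 72    # typical TTY 33/35 line
long_line = b"x" * 300    # hypothetical "plotter" line, longer than 255

print(one_byte_length(short_line))   # 72 ... correct
print(one_byte_length(long_line))    # 44 ... wrapped, the sort of mismatch behind the crashes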

http://www.multicians.org/thvv/360-67.html

note the mention of one of the reasons for the multics filesystem rewrite .. since it was taking an hour or so to do a "fsck".

as an aside ... from another thread ... Interactive Data Corporation "service" (in the above) was doing process migration on vm/370 (in the early '70s), across a cluster of machines that included several in waltham and one in san francisco (this, in part, allowed for transparently taking machines out of service for preventive maintenance ... while offering 7x24 world-wide access; aka taking the san fran machine down ... and all processes & online user sessions migrating to waltham).

random refs:
https://www.garlic.com/~lynn/97.html#14
https://www.garlic.com/~lynn/99.html#10
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

S/370 PC board

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: S/370 PC board
Newsgroups: bit.listserv.ibm-main
Date: Mon, 09 Jul 2001 21:56:06 GMT
jfregus@IX.NETCOM.COM (John F. Regus) writes:
Does anyone remember the S/370 emulation board that you could put in an INTEL pc and be able to run VM and if I am not mistaken there was even a cut down version of MVS that you could run on the machine.

Is this board still available? Do you know what it costs? And what about the cut down version of MVS, and its costs?

Any info would be helpful.

Thanks,


random refs:
https://www.garlic.com/~lynn/94.html#46 Rethinking Virtual Memory
https://www.garlic.com/~lynn/96.html#23 Old IBM's
https://www.garlic.com/~lynn/2000.html#5 IBM XT/370 and AT/370 (was Re: Computer of the century)
https://www.garlic.com/~lynn/2000.html#29 Operating systems, guest and actual
https://www.garlic.com/~lynn/2000.html#75 Mainframe operating systems
https://www.garlic.com/~lynn/2000e.html#52 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2000e.html#55 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2000e.html#56 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2001b.html#69 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#89 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001f.html#28 IBM's "VM for the PC" c.1984??

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

DSRunoff; was Re: TECO Critique

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DSRunoff; was Re: TECO Critique
Newsgroups: alt.folklore.computers
Date: Tue, 10 Jul 2001 06:27:54 GMT
Brian Inglis writes:
I was still using the similar IBM SCRIPT product in 1992, having recently customized it to deal with Xerox laser printers as if they were IBM 38XX laser printers with more options (saw Xerox salesman driving Jag later that year).

IBM SCRIPT (with "runoff"-like dot commands) was done about '67 by stu madnick for the cambridge science center. "G", "M", & "L" (at the science center) added GML to IBM SCRIPT in the early '70s. One of the big GML/IBM accounts (since the early '70s) was CERN ... so one could claim that it also begat HTML, XML, etc. (in addition to SGML, the ISO standard) .. aka the "ML" of the HTML genre dates back 30 years.

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Using a self-signed certificate on a private network

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Using a self-signed certificate on a private network
Newsgroups: comp.infosystems.www.misc,comp.security.misc
Date: Sat, 14 Jul 2001 08:41:05 GMT
bjarne_tasermuit@yahoo.com (Bjårne Tasermuit) writes:
Hi,

I want to be able to communicate via HTTPS with a box on a private (i.e., not connected to the public Internet) network. I can't assume that there will be a CA on the network, so I'm going to have the box act as its own CA.

What are the security implications of doing the authentication this way? My main concern is that the HTTPS traffic is kept private, with the assumption that no other machines (such as DNS servers) are compromised. Would this goal be hampered by having the box authenticate itself? What other risks would this approach create?

Any pointers to good online materials covering this sort of material would be appreciated.


the thing that an SSL domain name server certificate does is let the client check that the domain name specified by the client is the same as the domain name specified in the server certificate.

this handles various integrity issues in the domain name infrastructure having to do with resolving to the wrong ip address.

Note, as pointed out repeatedly, the domain name infrastructure is the authoritative agency for domain name ownership ... aka when somebody applies for a server domain name certificate ... the certification authority has to check with the authoritative agency for the information being certified ... aka the certification authority contacts the domain name infrastructure to verify that the entity requesting an SSL server domain name certificate is the entity that owns that domain name. However, this is the same domain name infrastructure whose integrity worries led to having SSL domain name server certificates in the first place.

There are some proposals (in part by certification authorities) which would improve the integrity & structure of the domain name infrastructure (such that certification authorities could better rely on its integrity); however, note that such integrity improvements would actually improve the overall integrity of the domain name infrastructure ... not just for its use by certification authorities ... but for everybody.

in a closed environment ... there should be much less of an issue with regard to the integrity of the domain name infrastructure ... as well as trust in the corresponding (self-signing) certification authority.

In effect, for a closed environment ... the main function during the SSL protocol negotiation is the key-exchange part ... the part where the client checks the domain name in the certificate against the domain name the client used to access the server should be very perfunctory.
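
a present-day python sketch of that closed-environment case (assumptions: the box's self-signed certificate has been copied out-of-band to "box-cert.pem" and pinned as the only trust anchor; the host name "box.internal" and the port are hypothetical):

import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)        # verifies certificate and host name by default
ctx.load_verify_locations(cafile="box-cert.pem")     # pin the box's self-signed certificate

with socket.create_connection(("box.internal", 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="box.internal") as tls:
        # the handshake (key exchange) only succeeds if the presented certificate
        # is the pinned one and its name matches "box.internal"
        tls.sendall(b"GET / HTTP/1.0\r\nHost: box.internal\r\n\r\n")
        print(tls.recv(4096).decode(errors="replace"))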

random refs:
https://www.garlic.com/~lynn/2001g.html#2
https://www.garlic.com/~lynn/2001g.html#10
https://www.garlic.com/~lynn/2001g.html#16
https://www.garlic.com/~lynn/2001g.html#17
https://www.garlic.com/~lynn/2001g.html#19
https://www.garlic.com/~lynn/2001g.html#21
https://www.garlic.com/~lynn/2001g.html#25
https://www.garlic.com/~lynn/2001g.html#31
https://www.garlic.com/~lynn/2001g.html#40
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

YKYBHTLW....

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: YKYBHTLW....
Newsgroups: alt.folklore.computers
Date: Sat, 14 Jul 2001 21:25:24 GMT
bhk@dsl.co.uk (Brian {Hamilton Kelly}) writes:
That's a load of bollocks (if you'll pardon my French). The BCS was formed before I started working for a living in this field (which was 1964). In 1966, I used to be in awe when I regularly travelled up in the lift[1] with Stanley Gill, who was one of the first FBCS. I have a feeling that the BCS goes back right into the 1950s, with Maurice Wilkes and his peers being some of the first members.

My boss at that time (1966), Harry Baecker, also was made an FBCS.

Or (errm) are we not talking about the British Computer Society?

[1] Or had coffee bought for me in the canteen by, or whatever.


BCS or boeing computing services was formed under the head of boeing corporate data processing in '68 and they then started moving various boeing data processing entities under his control (places like boeing renton data center, etc).

The formation of BCS was, in part, to allow an independent company to offer data processing services to entities outside of the boeing computing corporation (as well as independently bid on various data processing contracts).

I believe there was some amount of political infighting with the move of various boeing data processing centers into BCS under the (former) head of corporate data processing. The head of corporate data processing in the main building off boeing field ran a relatively small operation responsible for payroll and other items ... and was dwarfed by even the boeing renton data center (by possibly a factor of 100 to 200). I believe it actually took a year or more to move all of the data processing entities and operations into BCS.

As some comparison, I believe at the time (in very late '68/early 69) when BCS was formed ... corporate data processing had a single 360m30 (for doing things like payroll) and just the renton data center alone had over $200m in 360m65s (plus some number of 360m75s, other IBM gear, as well as other non-IBM data processing equipment).

BCS then expanded into many places offering computing services ... In the past couple of years I remember seeing BCS buildings in places from Tysons Corner to Pena blvd/I70 (i.e. near the new denver airport).

I believe I saw within the last year or so the statement that BCS was sold (to SAIC?).

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Q: Internet banking

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Q: Internet banking
Newsgroups: sci.crypt
Date: Sat, 14 Jul 2001 22:03:34 GMT
jan@panteltje.demon.nl (Jan Panteltje) writes:
It is quite new (about a month now). Just received a folder that they intend to stop with the old 'homebanking' system (that used a calculator in the way you referred to, dial in via a modem), and only keep the new Internet system. I am not sure I am happy with that, as preparing things off line was nice, and having all the transactions on hard disk / being able to do analysis on the stock performance and all that. Maybe they are just fishing for reactions. Or maybe there will be different software for that in the future. Regards Jan

slightly unrelated .. but i was at the company that makes the devices on tuesday (halfway between amsterdam and brussels).

it is about the size and looks of a small calculator ... but actually is a portable chipcard reader with keypad and display. The bank chipcard is put into the reader and the chipcard PIN entered (to activate the card).

Then the "challenge" is entered on the keypad (challenge may be either from a PC display ... or even audio over a telephone) ... the chipcard generates the "response" with DES (or possibly 3DES?) which is then displayed. The user enters the response into the PC (say internet browser form) or even on a telephone keypad (in the case of a purely audio phone transaction).
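
a rough sketch of that challenge/response flow (the real card uses DES/3DES inside the chip; HMAC-SHA256 here is just a stand-in keyed function, and the key, challenge format and 8-digit truncation are all assumptions):

import hashlib
import hmac

CARD_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")   # hypothetical per-card secret

def card_response(challenge):
    # short numeric response the user would copy off the reader's display
    mac = hmac.new(CARD_KEY, challenge.encode(), hashlib.sha256).digest()
    return str(int.from_bytes(mac[:4], "big") % 10**8).zfill(8)

# the bank derives the same response from the challenge it issued and compares
challenge = "39481265"
print("response:", card_response(challenge))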

Even more unrelated, I was at the finread conference in brussels on wed.

From one of the handouts:
The FINREAD card reader can be considered as a "universal" PC peripheral device for an Internet user.

In this way the reader can be used for a wide variety of applications such as

reloading an electronic purse
home banking
e-trading
applications which require an Advanced Electronic Signature as defined in the European Directive


additional info

http://www.cenorm.be/isss/workshop/finread/

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

TECO Critique

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TECO Critique
Newsgroups: alt.folklore.computers
Date: Mon, 16 Jul 2001 07:06:08 GMT
Brian Inglis writes:
If you can assume that the OS is going to panic less than once a year, and you only shut the machine down for (planned or unplanned) power outages, hardware changes or failures, taking 45 minutes for a firmware POST check to ensure that all the hardware is still fully functional is not a large issue in the great scheme of things.

when we looked at supporting the 1-800 number system ... it had/has a requirement of 5 minutes of outage per year.

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

PKI/Digital signature doesn't work

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI/Digital signature doesn't work
Newsgroups: sci.crypt
Date: Tue, 17 Jul 2001 06:15:22 GMT
Mok-Kong Shen <mok-kong.shen@t-online.de> writes:
You seem to ignore the experts on graphology, whose testimony the court generally relies on. (The judges' decision could err, of course, as we all know.)

frequently the issue is who has the burden of proof and/or the standard of proof

various things can shift the burden of proof from one party to another (and/or change the standard of proof) in litigation &/or dispute scenarios.

one of the issues with regard to implementing X9 (and ISO TC68) Financial Standards ... has to do with whether the financial institution shows that it followed the standards ... or proves that a non-standard implementation is equivalent to a standard.

I believe there was a case like this in germany in the past year or so where a financial institution continued to use some standard that had been withdrawn ... which required a significantly higher standard of proof on their part ... which they weren't able to achieve.

I believe Jane Winn has written some quite good articles on technology and its relationship to the law.


http://www.smu.edu/~jwinn/ NOTE moved to
http://www.law.washington.edu/Faculty/Winn/

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

PKI/Digital signature doesn't work

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI/Digital signature doesn't work
Newsgroups: sci.crypt
Date: Tue, 17 Jul 2001 19:56:38 GMT
"Lyalc" writes:
I disagree, somewhat. The author contended that electronic signatures could have a status in law, but that PKI/digital signatures and existing implementations do not yet provide a basis for that to occur.

a good part of the CA and certificate issues have nothing at all to do with digital signatures and whether they can be put on the same footing as paper signatures (in fact, much of the CA and certificate issues can be totally ignored).

much of the digital signature issue (as with some of the stuff having to do with finread in europe) has to do with whether the person really signed what they thought they signed. A signature on a piece of paper that defines something has a much more straight-forward relationship to what was signed (modulo questions of whether the person signed a blank piece of paper). it is much, much more difficult to show that a mathematical calculation applied to some bits inside a computer has anything at all to do with a person's "intention".

the european union is trying to address some of these issues in the finread standard

From one of the handouts (at the finread conference in brussels last week)
The FINREAD card reader can be considered as a "universal" PC peripheral device for an Internet user.

In this way the reader can be used for a wide variety of applications such as

reloading an electronic purse
home banking
e-trading
applications which require an Advanced Electronic Signature as defined in
the European Directive


additional info

http://www.cenorm.be/isss/workshop/finread/
https://web.archive.org/web/20020214184739/http://www.cenorm.be/isss/workshop/finread/

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

PKI/Digital signature doesn't work

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI/Digital signature doesn't work
Newsgroups: sci.crypt
Date: Wed, 18 Jul 2001 07:00:45 GMT
Anne & Lynn Wheeler writes:
From one of the handouts (at the finread conference in brussels last week)

The FINREAD card reader can be considered as a "universal" PC peripheral device for an Internet user.

In this way the reader can be used for a wide variety of applications such as

reloading an electronic purse
home banking
e-trading
applications which require an Advanced Electronic Signature as defined in the European Directive

additional info


http://www.cenorm.be/isss/workshop/finread/
https://web.archive.org/web/20020214184739/http://www.cenorm.be/isss/workshop/finread/


other threads from the past
https://www.garlic.com/~lynn/aadsm3.htm#cstech4 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aadsm5.htm#shock revised Shocking Truth about Digital Signatures
https://www.garlic.com/~lynn/aadsm5.htm#shock2 revised Shocking Truth about Digital Signatures
https://www.garlic.com/~lynn/aadsm5.htm#ocrp Online Certificate Revocation Protocol
https://www.garlic.com/~lynn/aadsm5.htm#spki2 Simple PKI
https://www.garlic.com/~lynn/aadsm6.htm#nonreput Sender and receiver non-repudiation
https://www.garlic.com/~lynn/aadsm6.htm#nonreput2 Sender and receiver non-repudiation
https://www.garlic.com/~lynn/aadsmail.htm#complex AADS/CADS complexity issue
https://www.garlic.com/~lynn/aadsmore.htm#keytext proposed key usage text
https://www.garlic.com/~lynn/ansiepay.htm#x959bai X9.59/AADS announcement at BAI
https://www.garlic.com/~lynn/99.html#224 X9.59/AADS announcement at BAI this week
https://www.garlic.com/~lynn/2000.html#57 RealNames hacked. Firewall issues.
https://www.garlic.com/~lynn/2001c.html#30 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#34 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#39 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#40 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#41 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#42 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#43 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#44 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#45 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#46 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#47 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#50 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#51 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#52 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#54 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#56 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#57 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#58 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#59 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#60 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#72 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#73 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001d.html#41 solicit advice on purchase of digital certificate
https://www.garlic.com/~lynn/2001d.html#69 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001f.html#31 Remove the name from credit cards!
https://www.garlic.com/~lynn/2001g.html#1 distributed authentication
https://www.garlic.com/~lynn/2001g.html#11 FREE X.509 Certificates
https://www.garlic.com/~lynn/2001g.html#38 distributed authentication

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

PKI/Digital signature doesn't work

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI/Digital signature doesn't work
Newsgroups: sci.crypt
Date: Fri, 20 Jul 2001 12:05:29 GMT
pgut001@cs.auckland.ac.nz (Peter Gutmann) writes:
This really doesn't buy you much with current smart cards. Instead of having the signature generated by an untrusted machine, you have the signature generated by a card which will do anything an untrusted machine tells it to. In my taxonomy of crypto coprocessors I rate smart cards as only slightly better than nothing at all:

that is part of the reason behind the finread (and other similar) "reader" projects.

note that the issue of certificates for public key distribution is almost totally orthogonal to the issue of whether or not you can prove that a person intended to sign what was signed.

The act of taking an electronic bit string, digitally signing it, and transmitting the combination of the electronic bit string and the digital signature ... with regard to the person's "intention", is totally unaffected by whether or not they also append a manufactured public key certificate to the combined message+signature.

The mechanics of intention are unrelated to whether or not somebody sent off in the mail for their bona fide, captain midnight, magic public key certificate.
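
a minimal sketch of that point (not any particular product's flow; uses the third-party python "cryptography" package, and the message and key names are made up) ... verification depends only on the message, the signature, and the public key the relying party already has on file:

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signer_private = Ed25519PrivateKey.generate()       # held only by the signer
registered_public = signer_private.public_key()     # registered out-of-band with the relying party

message = b"pay $10 to account 1234"
signature = signer_private.sign(message)

# appending (or omitting) a certificate alongside the message changes nothing below
try:
    registered_public.verify(signature, message)
    print("signature verifies against the registered public key")
except InvalidSignature:
    print("signature does not verify")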

the fraud scenarios having to do with intention are (at least)

1) did the person execute some action that directly resulted in the digital signature (and no signature can be generated w/o that action)

2) was the electronic bit string that was digitally signed the same bit string that was perceived by the person and which they intended to sign (i.e. the person perceives some electronic bit string and intends to sign that exact bit string; was what got signed what the person perceived).

aka ... simple PC+card implementations can have the PC requesting (additional) signatures that were not intended and/or the PC modifying the contents of what the person intended to sign.

however, from a risk analysis standpoint ... one-factor authentication (something you have) for fraud reduction in conjunction with authenticated transactions is better than non-authenticated transactions (i.e. introducing a virus into a specific PC for fraudulent generation of digitally signed transactions is slightly more difficult than generating straight unauthenticated transactions).

fraud does tend to go after the weakest link

1) digitally signed authenticated transactions are better than unauthenticated transactions
2) hardware tokens where the private key is never divulged are better than software private keys
3) a secure signing environment is better than an insecure signing environment
4) a secure signing environment that is guaranteed to require specific human action as part of being able to show intention
5) a secure signing environment that is guaranteed to exactly present to a person what they proceed to sign as part of intention

even with all of the above ... litigation would still come down to who has the burden of proof, as well as the standards of proof for the parties (when a signature comes into dispute).

slightly related thread in crypto & financial standard mailing list

https://www.garlic.com/~lynn/aepay7.htm#nonrep0 non-repudiation, was Re: crypto flaw in secure mail standards
https://www.garlic.com/~lynn/aepay7.htm#nonrep1 non-repudiation, was Re: crypto flaw in secure mail standards
https://www.garlic.com/~lynn/aepay7.htm#nonrep2 non-repudiation, was Re: crypto flaw in secure mail standards
https://www.garlic.com/~lynn/aepay7.htm#nonrep3 non-repudiation, was Re: crypto flaw in secure mail standards
https://www.garlic.com/~lynn/aepay7.htm#nonrep4 non-repudiation, was Re: crypto flaw in secure mail standards
https://www.garlic.com/~lynn/aepay7.htm#nonrep5 non-repudiation, was Re: crypto flaw in secure mail standards
https://www.garlic.com/~lynn/aepay7.htm#nonrep6 non-repudiation, was Re: crypto flaw in secure mail standards

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI/Digital signature doesn't work

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI/Digital signature doesn't work
Newsgroups: sci.crypt
Date: Fri, 20 Jul 2001 12:45:04 GMT
Anne & Lynn Wheeler writes:
however, from a risk analysis standpoint ... one-factor authentication (something you have) for fraud reduction in conjunction with authenticated transactions is better than non-authenticated transactions (i.e. introducing a virus into a specific PC for fraudulent generation of digitally signed transactions is slightly more difficult than generating straight unauthenticated transactions).

note that a digital signature "authenticated" transaction/message infrastructure does allow for some amount of personal choice.

faced with an "unauthenticated" transaction/message infrastructure, there may be little personal control over the integrity issues ... since the simplest fraud can involve exploits on things the individual has little or no control over (i.e. whether or not there is credit card account-number leakage/harvesting off a merchant's web site).

Given strongly authenticated digital signature transactions that eliminate exploits where credit card account numbers leaked/harvested from a merchant's web site are used in fraudulent transactions ... then the individual can choose to exercise some discretion over the components under their control that might be the next likely "weakest link" for fraudulent exploits.

This can involve choices about the PC and some discretion in the software loaded on the PC.

Whether or not a hardware token is used for the digital signature.

The quality of the hardware token.

The quality of the digital signature signing environment.

It will never be possible to make all people exercise the same level of discretion in all aspects of their life. However, a reasonably robust authentication infrastructure could, at least, move the most likely fraud & exploit targets to things that the individual might have some discretionary control over (like their own hardware token, PC, and digital signing environment).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI/Digital signature doesn't work

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI/Digital signature doesn't work
Newsgroups: sci.crypt
Date: Sat, 21 Jul 2001 01:20:19 GMT
"Lyalc" writes:
Taking this a little further, a relying party wants to make a decision based upon a signed message now. Not after a CRL has been checked. Not 24 hours later after the signer has had a 'cool off period'. Now.

How does the relying party know if the signature came from a Tier 3 or preferably a Tier 4 device (and thus have higher confidence that 'intent' was accurately captured at the time of signing), compared to the less confidence inspiring Tier 1 and 2 approaches? X.509 is silent on this. PKCS, PEM, S/MIME, etc are all silent on this.

The only means today of achieving relying party confidence is for certificate users to buy the same product from a single supplier. Clearly, this is impractical for several reasons.

The remaining problems in PKI now stem from being able to provide confidence to both the relying party and the signing party. We aren't here yet. Get over it.


actually the X9A10 working group spent quite a bit of time on having the "digital signing environment" also sign the transactions. it is one of the reasons that X9.59 is somewhat silent on the number of signatures that might be involved in an x9.59 payment transaction.

the straightforward one is the consumer's signature ... but to support the echeck "co-signers" function ... there may be one or more additional co-signer signatures.

but there may also be a signature of the device/environment in which the consumer's signature took place (aka a finread-type device ... but also signing the transaction to prove it was used).
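
a rough sketch of that two-signature idea (a finread-type device counter-signing; the key names and transaction format are assumptions, and it uses the third-party python "cryptography" package):

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

consumer_key = Ed25519PrivateKey.generate()   # consumer's key, registered with the financial institution
device_key = Ed25519PrivateKey.generate()     # signing device's key, also registered (e.g. at manufacture)

transaction = b"x9.59-style payment: acct-1234 pays 42.00 to merchant-77"
consumer_sig = consumer_key.sign(transaction)
device_sig = device_key.sign(transaction + consumer_sig)   # device vouches for the signing environment

# the relying party checks both signatures against the registered public keys
consumer_key.public_key().verify(consumer_sig, transaction)
device_key.public_key().verify(device_sig, transaction + consumer_sig)
print("consumer signature and device counter-signature both verify")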

however, as noted numerous times before, all of the above public keys have been registered as part of the normal business processes (business processes that currently exist today, modulo misc. technology issues) ... and certificates (and the rest of the CRL, etc. infrastructure) are redundant and superfluous.

certificates were "invented" for the offline world, for previously unknown parties that have had no previous business relationship ... i.e. pasted on the end of a signed email that travels in a store&forward environment and then is downloaded and read offline.

the certificate-less PKI approaches for the most part map trusted & strong authentication technology onto mostly existing "online" business & legal infrastructures w/o having to introduce totally extraneous parties to the transactions.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI/Digital signature doesn't work

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI/Digital signature doesn't work
Newsgroups: sci.crypt
Date: Sat, 21 Jul 2001 13:15:36 GMT
"Harris Georgiou" writes:
There are two separate situations where the certificate gets separated from its owner: either it is stolen (copied), or the owner willingly hands it to someone else. The first case is solved by using a large scale distributed storage & key management service. THIS is the real challenge. The second one is addressed by explicitly warning the person about his/her responsibility regarding the responsibility of secrecy and privacy. After that, as the only case of key propagation is by the wish of that person and no other way, it's his fault if the key gets abused by a previously-trusted friend or employee and he is the one to take the blame. As simple as that. If someone is not responsible enough not to loan something so personal as his unique digital ID, he is already committing a crime as bad as forging his normal police ID.

there is a giant distinction between a certificate and a private key (of a public/private key pair).

in some german financial situations they went to relying-party-only certificates because of enormous privacy and liability issues related to x.509 identity certificates.

in this scenario, the private key owner presents to the bank the public key in a message signed with the private key (to prove they own the corresponding private key). the bank then validates some information and generates a relying-party-only certificate that (effectively) only contains the public key and an account number.

The original of this certificate they store in the account-owner's account record and they transmit a copy of the certificate back to the key-pair owner. Already there is a separation of the certificate, since the key-pair owner only gets back an electronic copy of the original certificate ... while the electronic original is stored in the account-record repository.

The key-pair owner, wishing to interact with their financial institution, generates some form of electronic message (internet banking login, financial transactions ... say like x9.59, etc) and then digitally signs the message. The message, the digital signature, and a copy of their copy of the certificate are then bundled up and sent off (possibly directly to the bank, or in the case of a financial transaction, indirectly to a merchant, from which it eventually finds its way to the appropriate financial institution).
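
a minimal sketch of that relying-party-only pattern (hypothetical account number and message, third-party python "cryptography" package): the bank files the public key in the account record and verifies incoming signed messages by account-number lookup, so the appended certificate copy never has to be consulted.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# registration: owner proves possession of the private key, bank files the public key
owner_private = Ed25519PrivateKey.generate()
account_records = {"acct-1234": {"public_key": owner_private.public_key()}}

# later transaction: a signed message arrives naming the account
message = b"acct-1234: transfer 100.00 to acct-9876"
signature = owner_private.sign(message)

def bank_verify(account, message, signature):
    record = account_records.get(account)
    if record is None:
        return False
    try:
        # the key comes from the account record, not from any attached certificate
        record["public_key"].verify(signature, message)
        return True
    except InvalidSignature:
        return False

print(bank_verify("acct-1234", message, signature))   # True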

One of the important issues here is that almost any form of X.509 identity certificate can represent a serious privacy violation, especially in any form of consumer retail transactions. That is one of the reasons that the financial institutions went to relying-party-only certificates.

Another interesting point comes out of the financial standards body work on certificate compression (the size of the smallest current generation of certificates tends to be at least an order of magnitude larger than existing financial messages). It was found that

1) fields that the relying-party already posses can be eliminated from a certificate

2) certificates that the relying-party already has can be eliminated from the transaction.

Now, it is trivial to show that

1) if the relying-party possesses the original of the certificate then all fields in the copy of the certificate (in the possession of the key-pair owner) are already in the possession of the relying-party; so all fields can be compressed, resulting in a zero-byte certificate. To further optimize the process, since the relying-party with the original of the certificate knows that it will only be getting back zero-byte compressed certificates, the relying-party, when it initially transmits the certificate copy back to the key-pair owner, can pre-compress the certificate to zero bytes.

2) if the relying-party possesses the original of the certificate, then for the key-pair owner to transmit back an electronic copy of a copy to the relying-party (by appending it to the transaction after the digital signature) is redundant and superfluous.

as to the other references to work on proving the integrity level of the digital signing environment, there is some recent reference to the work in the x9a10 standards working group several years ago about having the digital signing environment also sign the transaction

random ref:
https://www.garlic.com/~lynn/aadsm6.htm#echeck
https://www.garlic.com/~lynn/subpubkey.html#privacy

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

[OT] Root Beer (was YKYBHTLW....)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [OT] Root Beer (was YKYBHTLW....)
Newsgroups: alt.folklore.computers
Date: Sat, 21 Jul 2001 12:50:03 GMT
Ric Werme writes:
In the late 60s, CMU, MIT, and Stanford all independently discovered that programming went better with Coke. The area around CMU had one Chinese restaurant and I didn't start going there until near the time I left for DEC. There were several chinese restaurants in eastern Mass back then (and a Chinatown in Boston) and I quickly realized that my career was incomplete without Hot and Sour soup.

there was also this little seafood market in inman(sp?) sq ... with tables and benches upstairs for the lunch-room crowd. after i left the area, it branched out into the seafood restaurant business and opened a couple of additional places ... there is one a lot closer now in kendell sq (in the same bldg. with the mit coop).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Installing Fortran

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Installing Fortran
Newsgroups: alt.folklore.computers
Date: Sat, 21 Jul 2001 18:01:41 GMT
jsaum@world.std.com (Jim Saum) writes:
The F-level assembler for OS/360 would make multiple passes over the source when it was short of memory. I assume some other 360 assemblers (DOS version of F-level, DOS D-level) did the same, but I have no doc.

there is some folklore that in the original os/360 assembler ... one of the implementers was told that they had only 256 bytes for op-code lookup (program and data) ... and so they stored the table on disk and there was (at least) one disk read for every statement processed. This was regardless of the amount of memory actually available. It was later fixed ... but it was characteristic of some of the early teething problems.

I have distinct memories of a 2000-card assembler program that I had written in the period and was assembling on a 64kbyte 360m30. If I did my own I/O, the program would assemble in something like 10 minutes. If I used os/360 i/o and had DCB macros ... you could watch the lights on the machine front panel when it hit a DCB macro. I had five DCB macros, and the machine had a very distinct light pattern for six minutes per DCB macro (i.e. the five DCB macros added 30 minutes to the elapsed assembly time).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI/Digital signature doesn't work

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI/Digital signature doesn't work
Newsgroups: sci.crypt
Date: Sun, 22 Jul 2001 12:28:33 GMT
"Harris Georgiou" writes:
Not really. The whole idea behind the digital certificate is possession of some piece of information unique to that person. Any digital certificate contains a key pair (complete/owner form) or just the public part of it in order to be used for verification by others. We often use the term certificate only for the public part because that is what everybody else uses to identify that person, but the complete form of it (available only to the owner) is the whole key pair. Relying-party-only certificates are not mandatory to ensure the validity of a certificate. PGP keyservers keep all keys in public access and everybody can grant or revoke his/her expression of trust towards any key by signing with their personal key. It is a distributed service, thus more reliable than any centralized certificate management.

well i was at a presentation where the guy responsible for the deployment at one of the banks in germany said that the reason they did relying-party-only certificates was the enormous liability and privacy issues associated with x.509 identity certificates.

the whole idea of a registration authority ... i.e. registering your public key and proving that you own the corresponding private key ... is to show that the party registering the key has possession of the public/private key pair. in the case of the relying-party-only certificate ... the registration authority is also the relying-party. Since the entity possessing the public/private key pair has proven to the relying-party that they possess the public/private key pair, having a certificate is redundant and superfluous.

the primary purpose of the certificate is to "prove" to some random other, unknown parties that at least some trusted ("3rd") party has executed some "proof" process sometime in the past (aka the registration process or RA of the typical certification authority infrastructure).

In the case of a relying-party-only certificate, the relying-party is the registration authority and has done its own proofing as part of the registration. Also, issuing a relying-party-only certificate so that it can prove to itself that it has done some proofing in the past is redundant and superfluous. They have the proofing recorded in their own online records ... presentation of a certificate that was manufactured at some time in the past is much lower quality information than the online, real-time account information that they already have.

The real-time online account record serves both as proof that the registration authority process was executed (the key-pair-possessing entity has proven that they possess the key-pair) and as real-time status information regarding that entity.
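As a toy illustration of the difference, the same account-record lookup that supplies the registered public key can also answer current-status questions that no previously-manufactured certificate can; the "status" and "balance" fields below are hypothetical.

def authorize(account_records, account_number, amount):
    """Decide on a transaction from the live account record, not from a
    certificate manufactured at some time in the past."""
    record = account_records[account_number]
    if record.get("status") != "open":         # closed/suspended in real time
        return False
    return record.get("balance", 0) >= amount  # current funds, not year-old data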

A certificate manufactured some time in the past can be terribly stale information compared to any real-time information that they have recorded with regard to the key-pair-owning entity. So not only does the certificate represent redundant and superfluous information (for the combined relying-party & registration authority) proving to itself that it executed the appropriate procedures; it also potentially represents a severe integrity risk compared to directly using the real-time information.

For instance, when accepting a check, would a merchant prefer to know that there were sufficient funds in the bank account as of some date a year in the past, or would the merchant prefer to know that there are sufficient funds in the bank account at this very instant? If a bank is processing a customer's check, would it process it based on year-old stale information as to the account balance, or based on current real-time information as to the account balance?

A relying-party-only operation which has done its own proofing and maintains its own real-time status and other real-time information ... for what reason would it ever refer to potentially year-old stale information that is part of a certificate manufactured at some time far in the past?

The purpose of the certificate was for use by totally unknown other relying parties as to the proofing performed by the registration authority. The analogy is the bank letters of credit issued in the days of sailing ships, before there was any electronic, online way of checking financial status. One could get a letter of credit from a bank in europe, take a sailing ship to south america, and possibly conduct some financial transactions based on the trust in the letter-of-credit document. This was superior to having no information whatsoever. However, real-time, online information is far superior to stale information contained in a certificate manufactured at some time in the past.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/



