Misc AADS & X9.59 Discussions





OCSP and LDAP
OCSP and LDAP
OCSP value proposition
OCSP and LDAP
OCSP and LDAP
OCSP and LDAP
OCSP and LDAP
OCSP and LDAP
OCSP and LDAP
OCSP and LDAP
X.500, LDAP Considered harmful Was: OCSP/LDAP
Kansas kicks off statewide PKI project
Antwort: Re: Real-time Certificate Status Facility for OCSP - (RTCS)
A challenge
A challenge (addenda)
A challenge
A challenge
A challenge
A challenge
A challenge
surrogate/agent addenda (long)
A challenge
Encryption of data in smart cards
Certificate Policies (was Re: Trivial PKI Question)
Encryption of data in smart cards
Certificate Policies (addenda)
How effective is open source crypto?
How effective is open source crypto?
How effective is open source crypto? (addenda)
How effective is open source crypto? (bad form)
How effective is open source crypto? (aads addenda)
How effective is open source crypto? (bad form)
How effective is open source crypto? (bad form)
How effective is open source crypto? (bad form)
How effective is open source crypto? (bad form)
How effective is open source crypto? (bad form)
How effective is open source crypto? (bad form)
How effective is open source crypto?
The case against directories


OCSP and LDAP

From: Lynn Wheeler
Date: 01/05/2003 07:52 AM
To: "Anders Rundgren" <anders.rundgren@xxxxxxxx>
cc: ambarish@xxxxxxxx, ietf-pkix@xxxxxxxx, madwolf@xxxxxxxx,
    "Peter Gutmann" <pgut001@xxxxxxxx>
Subject: Re: OCSP and LDAP
but as has been discussed in other venues ... it is not only the cost ... but also who pays (how does the money flow). does a consumer pay for checking a merchant's certificate as part of accessing the merchant's website .... which might otherwise be free access? A merchant (as the relying party) might pay when checking the status of a consumer's certificate .... but does a consumer (as the relying party) pay when checking the status of a merchant's certificate? Also ... as in other merchant/consumer e-commerce discussions .... a merchant's interest in the status (real-time or not) of a consumer's certificate is only incidental to whether the bank says that the merchant gets paid.

now for the other business flow: as the certificate-based offline, stale paradigm is pushed towards an online, real-time model .... there is a question of at what point the paradigm switches. In the original offline credit-card paradigm .... the transition to online, real-time .... bypassed the real-time checking of whether the offline, stale credential was still good ... and/or whether the stale assertions in the offline credential were still good .... and went straight to whether the real-time assertions were valid.

The model in the certificate world ... is that there are some assertions that were inserted into a stale, static certificate at some time in the past .... and for OCSP ... you do real-time checks to see if the stale, static past assertions still hold. The model that credit-cards went to .... was doing real-time checks on real-time assertions ... not real-time checks on stale, static assertions.

The distinction is that the payment card paradigm, in moving online ... bypassed the intermediate paradigm of real-time checks on past, stale, historic, static assertions (contained in the certificate) .... and went directly to real-time checks on current, real-time assertions. aka the credit-card industry, in transitioning to online .... could have continued to preserve the offline paradigm with real-time checks (like OCSP does for certificates) .... which is equivalent to a real-time check to see if the consumer still has a valid account. However, the payment card industry, in transitioning to online, discovered that it could significantly leverage the utility of having real-time, online checking .... that instead of having real-time, online checking of stale, static information .... it could significantly increase the utility by having real-time checking of real-time information.

So the credit-card industry skipped the OCSP-analogous step of having a real-time infrastructure for checking stale, static data (aka does an account still exist), and significantly improved the utility of having an online, real-time infrastructure ... by performing real-time checking of what the merchant is really interested in .... will they get paid. An issue is whether the value of having a real-time online infrastructure is significantly depreciated if it is just being applied to checking the status of stale, static information .... when for effectively the same infrastructure costs .... it can be used for real-time, online checking of real-time dynamic information (under the assumption that real-time checking of real-time dynamic information tends to have more value than real-time checking of stale, static information).
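A minimal sketch of the contrast in Python, with a purely hypothetical account table (the account numbers, field names, and amounts are all illustrative, not any network's actual interface): the OCSP-analogous check only asks whether a stale, static fact still holds, while the authorization-style check answers the real-time question and piggy-backs the transaction for essentially the same per-query cost.

# hypothetical, illustrative account table
accounts = {
    "4512-0001": {"revoked": False, "open_to_buy": 150.00},
}

def ocsp_style_check(account_no):
    # real-time check of a stale, static fact: does the account
    # still exist and is it unrevoked? (OCSP on a certificate)
    acct = accounts.get(account_no)
    return acct is not None and not acct["revoked"]

def authorization_style_check(account_no, amount):
    # real-time check of real-time information: will the merchant
    # get paid? ... and the transaction itself is piggy-backed
    acct = accounts.get(account_no)
    if acct is None or acct["revoked"] or amount > acct["open_to_buy"]:
        return False
    acct["open_to_buy"] -= amount
    return True

print(ocsp_style_check("4512-0001"))                  # True: account exists
print(authorization_style_check("4512-0001", 42.50))  # True: merchant gets paid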

... I believe that the charge/cost at supermarket check-out for a debit card transaction .... doing real-time checking for sufficient funds for the transaction (rather than just checking if the account still exists) as well as scheduling the transaction .... the charge/cost for both/combined ... is on the order of your projected lookup costs.

If the online, real-time validation of a real-time dynamic assertion (rather than the real-time validation of a stale, static assertion) can be bundled with the actual execution of the real transaction .... and be bundled for essentially the same cost as doing just an online, real-time lookup of stale, static data .... then it would imply that the stale, static data paradigm would be somewhat redundant and superfluous in an online, real-time environment.

--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

anders.rundgren@xxxxxxxx on 1/5/2003 2:44 am wrote:
I agree with Peter.

I don't think OCSP in a not so distant future has to be more technically costly than accessing a web-page. Including a signed answer.

Some banks in Sweden believe 20 cents/lookup is a reasonable fee as it is "comparable to putting a stamp on a letter".

Personally I don't think the VA-business model has much future as it complicates the way parties interact. It essentially requires two or three global VA-networks in the world to function and that seems very unlikely to happen. It feels like the VA business model is crafted according to the lines of credit-card authorizations, but that is a rather different type of business IMHO.

Pardon the slightly orthogonal input, but business & technology do have rather interesting connections...

Anders


OCSP and LDAP

From: Lynn Wheeler
Date: 01/05/2003 02:01 PM
To: "Ambarish Malpani" <ambarish@xxxxxxxx>
cc: "IETF PKIX" <ietf-pkix@xxxxxxxx>
Subject: RE: OCSP and LDAP
the certificate typically contains some assertions (age, address, name, affiliation, etc) .... OCSP & CRLs are about the degree of possible staleness of the assertions/information contained in the certificate .... not the actual information itself

because the information is certified in the certificate ... at some time in the past .... by definition ... you aren't going to see dynamic information .... information in a certificate is static information as of the time the certificate is created. OCSP & CRLs don't represent the actual information .... they represent information about how stale the static information possibly is ... not the static information itself.

so a characteristic of the certificate offline paradigm is static information as of some point in the past. Online, real-time OCSP ... doesn't provide current &/or dynamic information .... it just provides some indication of how stale the static information in the certificate is.

An online, real-time infrastructure has the opportunity to transition to providing online, real-time, current, & dynamic information .... not limiting itself to an opinion as to the degree of staleness of the static certificate.

The business issue .... is that having gone to the trouble & expense of an online, real-time infrastructure .... it can be leveraged to provide real-time, online (& dynamic) information.

So, as in some other scenario (non payment card) .... there was the issue of gov. granting business licenses of various kinds. The brain-dead translation is to give them a gov. signed certificate representing that offline, paper license. Now people that want timely information in the non-electronic world .... go online, real-time ... by calling and/or visiting the various agencies (better business bureau, local gov. office, etc) and checking on the status of the license. Actually what most people do when doing real-time checks on a business .... isn't just checking that the business license is still valid ... but how many complaints, what kind of complaints, what kind of recommendations, disposition of complaints, any fines, etc. If they are going to all the trouble of having a real-time check .... they aren't after the OCSP version .... if they are going to all the trouble of a real-time check ... they want the real-time, dynamic data version of the information (not the old-fashioned, offline, static & possibly stale version of the data).

My assertion has not been that certificates (offline, stale, static information) are useless .... my assertion has been that if you are going to the trouble of having a real-time, online infrastructure .... the value of that real-time online infrastructure is significantly enhanced by offering higher value information .... like real-time dynamic information. It isn't limited to the payment industry (let's say all electronic commerce) or licensing (all gov. sanctioned activities) .... I claim that for nearly all certification scenarios involving online, real-time ... the infrastructure goes to the trouble of having real-time dynamic data.

Let's take another example ... driver's license. If you get stopped .... the typical real-time, online response isn't about whether the license is revoked or not (that is a trivial subset) .... it is how many traffic citations/warrants are outstanding .... number of parking tickets, and potentially some number of other pieces of real-time, dynamic information.

The issue isn't whether offline, stale, static certified information is useless. The issue is that in going to the trouble of having online, real-time facilities .... there is the ability and the value proposition to support online, real-time dynamic information ... rather than offline, stale, static information.

In all instances that I can think of where somebody is going to the trouble of some real-time, online checking .... they are getting real-time dynamic information ... not just a simple opinion about the possible degree of staleness of static, offline information.

--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

ambarish@xxxxxxxx on 01/05/2003 12:15 PM wrote:
Hi Lynn,
Not sure why you associate OCSP with stale information. The responder can have information as current as you choose to provide it.

Once again, I believe it makes sense to have the interaction between the CA and the VA be CRLs. If there is more current information you have (than is present in the CRL), it makes sense to have that information at the VA for use until a new CRL is produced.

Ambarish

P.S. We have also had people use their OCSP responder to provide more than just certificate revocation information (eg. payment authorization) using extensions to OCSP.

---------------------------------------------------------------------
Ambarish Malpani 650.759.9045
Malpani Consulting Services ambarish@xxxxxxxx
http://www.malpani.biz


OCSP value proposition

From: Lynn Wheeler
Date: 01/05/2003 02:33 PM
To: "Anders Rundgren" <anders.rundgren@xxxxxxxx>
cc: ambarish@xxxxxxxx, ietf-pkix@xxxxxxxx, madwolf@xxxxxxxx,
"Peter Gutmann" <pgut001@xxxxxxxx>
Subject: OCSP value proposition
ok, I'm an RP sitting here with a credential that contains some certified stale, static information .... that was designed to support an offline paradigm.

The RP believes there is some business/value operation involved which has some risk issues with the staleness of the information in the certificate.

An OCSP transaction provides the RP with some feeling as to the degree of staleness .... in theory to better mitigate risks associated with the value operation.

My assertions are

1) an online transaction can provide real-time, fresh, dynamic, and aggregated information (which is a superset of the stale, static information contained in the certificate) for approximately the same cost as a transaction about the staleness of the static certificate information. furthermore nearly every business/value operation in existence has some form of real-time, fresh, dynamic and aggregated information (for those mired in the certificate paradigm ... view the online, real-time response containing this information as a one-time, immediately expiring certificate; see the sketch following this list).

2) the superset of the stale, static information with real-time, fresh, dynamic, and aggregated information provides better quality risk management than an opinion as to the staleness of the certificate's static information (at effectively the same cost).

3) given the same cost .... and greater value information for better risk management .... the cost/benefit analysis would nearly always favor the real-time, fresh, dynamic, aggregated response compared to an opinion about the degree of staleness of the static information.

4) the real-time, fresh, dynamic and aggregated information potentially provides the ability to piggy-back an actual business transaction as part of the underlying online operation (for little or no additional cost) .... this is the payment scenario.

5) the cost/benefit of risk management associated with real-time, fresh, aggregated, and/or dynamic information may represent such a compelling business justification that all operations become online. For an environment with all online operations, using real-time, fresh, aggregated, and dynamic information, an offline certificate with stale, static information (that is a subset of the real-time, fresh, aggregated and dynamic information) becomes totally redundant and superfluous. Certificates are at least redundant and superfluous for those transactions involving real-time, fresh, aggregated, and/or dynamic operations (if the RP is getting the real-time superset information ... then the stale, static subset information isn't needed).
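As a minimal sketch of point (1), under stated assumptions (a shared HMAC key standing in for the responder's signing key, invented field names, a 5-second lifetime), the "one-time, immediately expiring certificate" is just a signed, timestamped response that can never be relied on stale:

import hashlib, hmac, json, time

RESPONDER_KEY = b"demo-key-not-for-production"   # illustrative only

def fresh_response(account_no, data):
    # the responder signs current, dynamic information with a short ttl
    body = json.dumps({"acct": account_no, "data": data,
                       "issued": time.time(), "ttl": 5}).encode()
    mac = hmac.new(RESPONDER_KEY, body, hashlib.sha256).hexdigest()
    return body, mac

def verify_response(body, mac):
    expect = hmac.new(RESPONDER_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expect):
        return None
    msg = json.loads(body)
    if time.time() > msg["issued"] + msg["ttl"]:
        return None   # expired ... the response is never used stale
    return msg["data"]

body, mac = fresh_response("xxx", {"status": "open", "open_to_buy": 150.0})
print(verify_response(body, mac))   # fresh: returns the dynamic data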

So the question I believe was a value proposition for OCSP that

1) involves value that justifies having online, real-time infrastructure

2) doesn't involve payments or money (as per somebody else's earlier posting, since it has already been shown that the money infrastructure does a piggy-back transaction based on real-time, fresh, dynamic, and aggregated information).

3) only requires an opinion as to the staleness of static information (yes/no)

4) has no incremental business justification for real-time, fresh, dynamic and/or aggregated information.

--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

OCSP and LDAP

From: Lynn Wheeler
Date: 01/05/2003 02:45 PM
To: "Ambarish Malpani" <ambarish@xxxxxxxx>
cc: "IETF PKIX" <ietf-pkix@xxxxxxxx>
Subject: RE: OCSP and LDAP
the thesis was that OCSP could provide a status indication as to the staleness of the static information in a certificate designed for offline operation.

an online service can provide almost any level of real-time, fresh, dynamic, and/or aggregated information. For a value business operation using online, real-time, fresh, dynamic, and/or aggregated information (that is a superset of the stale, static information in a certificate designed for offline operation) ... the certificate becomes redundant and superfluous, and therefore an OCSP transaction as to the degree of staleness of the stale, static information also becomes superfluous.

furthermore ... it is trivial to show, for an operation involving transfer of money .... that the actual transaction for the transfer of money can be piggy-backed with the online, real-time, fresh, dynamic and/or aggregated information operation ... at effectively no additional cost .... and that then both a certificate and any OCSP are redundant and superfluous.

I can have an online, real-time, fresh, dynamic, and/or aggregated information operation. Such information is a superset of the offline, stale, static certificate-based information. If I'm using the online, real-time, fresh, dynamic, and/or aggregated information .... then the offline, stale, static certificate-based information is redundant and superfluous.

--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

OCSP and LDAP

From: Lynn Wheeler
Date: 01/05/2003 04:50 PM
To: "Anders Rundgren" <anders.rundgren@xxxxxxxx>
cc: "Ambarish Malpani" <ambarish@xxxxxxxx>,
    "IETF PKIX" <ietf-pkix@xxxxxxxx>
Subject: Re: OCSP and LDAP
but the whole point of an offline credential containing certified information is that I can reference it offline. if I'm online .... I don't need a certificate.

in the non-certificate model .... I assert that I have account number #xxx and sign the assertion with a hardware token.

that is sent off to the online infrastructure and it pulls up #xxx and verifies the signature with the public key in the online record.

the state of the account and all related information is pulled from the account. there is no offline certificate containing any certified stale, static information.

in the payment transaction ... I make two assertions: that I have account number #xxx and that I'll pay xyz .... I sign the assertions .... they are sent off .... account #xxx is pulled up .... the infrastructure checks the signature with the public key in the account record .... it checks the status of the account, decides about the payment and returns yes/no as to the payment (and whatever other information). No offline certificate with certified stale, static information is needed.
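A minimal sketch of this certificate-less flow (using the third-party Python "cryptography" package; the account table, field names, and amounts are illustrative assumptions, not the x9.59 message format): the public key registered in the online account record verifies the signed assertions, and real-time account state decides the payment.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# at registration time the bank stored the public key directly in the
# online account record ... no certificate is ever issued or transmitted
consumer_key = ec.generate_private_key(ec.SECP256R1())
accounts = {"xxx": {"public_key": consumer_key.public_key(),
                    "balance": 500.00}}

def authorize(account_no, amount, signature):
    acct = accounts.get(account_no)
    if acct is None:
        return False
    message = f"pay {amount:.2f} from {account_no}".encode()
    try:
        # verify the signed assertions with the public key on file
        acct["public_key"].verify(signature, message,
                                  ec.ECDSA(hashes.SHA256()))
    except InvalidSignature:
        return False
    # decide the payment on real-time account state, not stale data
    if amount > acct["balance"]:
        return False
    acct["balance"] -= amount
    return True

# the consumer signs the two assertions with the hardware token's key
sig = consumer_key.sign(b"pay 42.00 from xxx", ec.ECDSA(hashes.SHA256()))
print(authorize("xxx", 42.00, sig))   # True ... and the payment is scheduled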

for the license ... I make an assertion that I have license #abc and sign the assertion with a hardware token. the police send off the assertion .... the infrastructure verifies the signature with the public key and pulls up the record .... which either directly contains all the real-time, fresh, dynamic, and/or aggregated information .... or contains enough information to aggregate all the information. still no offline certificate with certified stale, static information is needed. The police are using the online, realtime, fresh, dynamic, and/or aggregated information. The police don't have to resort to an offline credential containing a stale, static subset of the online, realtime, fresh, dynamic and/or aggregated information. It is purely an artificial contrivance to claim that an offline certificate with stale, static information provides any added value at the point when all of the online realtime, fresh, dynamic and/or aggregated information is available.

I only need a certificate with stale, static information for offline operations where I don't have access to online, real-time, fresh, dynamic, and/or aggregated information. If I have a superset of the stale, static information in a certificate .... then the certificate is redundant and superfluous for that operation. If the certificate is redundant and superfluous .... then an OCSP operation that gives an opinion about the staleness of the redundant and superfluous stale, static information is also redundant and superfluous.

If the value proposition is such that I always resort to the online, real-time, fresh, dynamic and/or aggregated information .... then for those value propositions an offline certificate with stale, static information is always redundant and superfluous. If the offline certificate with stale, static information is always redundant and superfluous ... then it would follow that an OCSP for a redundant and superfluous certificate is also redundant and superfluous.

so there is a slight vulnerability issue here. not only is the certificate in somebody's possession subject to being stale, static information ... but it is also potentially subject to counterfeiting (picture, name, address, birth date, etc). for low value operations with little risk .... the risk of a counterfeited license is low. For high value operations with potentially more risk .... the "real" information is stored under strong security at the appropriate agency. The thing that is in somebody's hand is purely a stale, static offline copy of the real information stored online someplace. Law enforcement brings up the "real" information when they go online .... the stuff in your hand is basically a stale, static copy purely for low value, low risk, offline operations.

the claim is that in an online environment .... it is sufficient to have an authentication mechanism .... it isn't necessary in a real online environment to have any stale, static copy of the real information carried around on your person in a certificate for use in offline operations. If there are no offline operations .... then stale, static copies designed for use in offline operations are redundant and superfluous. For online operations, stale, static copies designed for offline use are redundant and superfluous when the real, online, fresh, realtime, dynamic and aggregated information is available.

-- Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

"Anders Rundgren" on 01/05/2003 04:08 PM wrote:
>Lets take another example ... driver's license. If you get stopped
>.... the typical real-time, online response isn't about whether the
>license is revoked or not (that is trivial subset) .... it is how many
>traffic citations/warrants are outstanding .... number of parking
>tickets, and potentially some number of other pieces of real-time,
>dynamic information.

No problems.

The licensee authenticates to the "traffic police server" which uses OCSP to verify that the TTP-issued license is not revoked. Assuming the license was OK the server then invokes other "authorities" for any additional information needed using the identity as given in the license (certificate). The result is returned as a nicely formatted screen on the officer's PDA. Except for the fact that the screen is static [:-)], I don't see any particular staleness here. Unless for the possible reliance on CRLs you have a problem with. But CRLs are just an option.

But you do have a point. To put a lot of potentially stale information in a certificate is a bad idea. "Employee certificates" is an example of a broken scheme as they vouch for not less than three things: An individual, an organization, and an unspecified [= totally useless] association between these two entities. Here I really believe that your on-line, real-time paradigm will become the norm.

<snip>

Anders


OCSP and LDAP

From: Lynn Wheeler
Date: 01/05/2003 05:04 PM
To: "Anders Rundgren" <anders.rundgren@xxxxxxxx>
cc: "Ambarish Malpani" <ambarish@xxxxxxxx>,
"IETF PKIX" <ietf-pkix@xxxxxxxx>
Subject: Re: OCSP and LDAP
the driver's license in your hand isn't the real driver's license .... it is a stale, static copy made at some time in the past. the real driver's license is in some agency's database record. the issue about whether the real license is valid or not is stored there also. all the dynamic, fresh, aggregated, and/or realtime data is stored there ... or is pointed to by that record (if you want all the online, realtime, fresh, dynamic and/or aggregated information ... you have to read the real "license" record).

for low value &/or low risk operations ... the stale, static copy that you hold will be sufficient. for situations that justify the cost of an online transaction ... to get the real-time, fresh, dynamic, and aggregated real information ... they go online to get the real information.

somebody types in a driver's license number to the online system ... it could spit back just a simple yes/no regarding the staleness of the static, r/o copy in the person's possession. however, if somebody is going to the trouble of going online .... they type in the driver's license number to the online system .... and they get back the real license, with all the real-time, fresh, dynamic, and/or aggregated information. any information claims by the stale, static copy in the person's possession .... are at that point redundant and superfluous.

--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

OCSP and LDAP

From: Lynn Wheeler
Date: 01/05/2003 05:15 PM
To: "Anders Rundgren" <anders.rundgren@xxxxxxxx>
cc: "Ambarish Malpani" <ambarish@xxxxxxxx>,
"IETF PKIX" <ietf-pkix@xxxxxxxx>
Subject: Re: OCSP and LDAP
... aka ... I never said you can't have a certificate with a stale, static, subset copy of the real information .... I just said that in an online environment where you have the real-time, fresh, dynamic, &/or aggregated version of the real information ... the stale, static, subset copy is redundant and superfluous.

for a particular business/value operation, if the stale, static, subset copy is redundant and superfluous ... then it seems to follow that an OCSP transaction giving an opinion about the staleness of redundant and superfluous information is also superfluous.

--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

OCSP and LDAP

From: Lynn Wheeler
Date: 01/07/2003 06:42 AM
To: "Anders Rundgren" <anders.rundgren@xxxxxxxx>
cc: "Ambarish Malpani" <ambarish@xxxxxxxx>,
"IETF PKIX" <ietf-pkix@xxxxxxxx>
Subject: Re: OCSP and LDAP
as has been mentioned before ... it is relatively simple to see the information in certificates as a form of distributed read-only cache entries ... with lots of similarities to cpu caches, database caches, filesystem caches, distributed/network databases, distributed/network filesystems, etc.

the data in the certificates is stale by definition ... if it wasn't ... it wouldn't be necessary to have an OCSP that basically is asking if it is too stale.

some ten plus years ago I was at an ACM SIGMOD conference and asked somebody what this x.500 stuff was ... and was told it is a bunch of networking types trying to re-invent 1960s database technology.

random past refs:
https://www.garlic.com/~lynn/aadsmore.htm#time Certifiedtime.com
https://www.garlic.com/~lynn/aadsm5.htm#faith faith-based security and kinds of trust
https://www.garlic.com/~lynn/aadsm8.htm#softpki19 DNSSEC (RE: Software for PKI)
https://www.garlic.com/~lynn/aadsm12.htm#52 First Data Unit Says It's Untangling Authentication
https://www.garlic.com/~lynn/aepay4.htm#visaset2 Visa Delicately Gives Hook to SET Standard
https://www.garlic.com/~lynn/aepay6.htm#crlwork do CRL's actually work?
https://www.garlic.com/~lynn/aepay10.htm#77 Invisible Ink, E-signatures slow to broadly catch on (addenda)
https://www.garlic.com/~lynn/2001d.html#7 Invalid certificate on 'security' site.
https://www.garlic.com/~lynn/2001e.html#43 Can I create my own SSL key?

--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

OCSP and LDAP

From: Lynn Wheeler
Date: 01/08/2003 05:37 AM
To: pgut001@xxxxxxxx (Peter Gutmann)
cc: ambarish@xxxxxxxx, anders.rundgren@xxxxxxxx, ietf-pkix@xxxxxxxx
Subject: Re: OCSP and LDAP
on the other hand ... there is some book someplace that makes the claim that relational set the state-of-the-art back 20 years.

I was somewhat involved, having done some support infrastructure for System/R and then been involved in the technology transfer of System/R from SJR to Endicott for SQL/DS (before the technology transfer back from Endicott to STL for DB2 .... note that SJR/bld28 & STL/bld90 are like 10 miles apart .... with both SJR/STL on the west coast and Endicott nearly on the east coast).

slightly related:
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

some archeological tales:
https://www.garlic.com/~lynn/2000.html#18 Computer of the century
https://www.garlic.com/~lynn/2000b.html#29 20th March 2000
https://www.garlic.com/~lynn/2000b.html#55 Multics dual-page-size scheme
https://www.garlic.com/~lynn/2000e.html#49 How did Oracle get started?
https://www.garlic.com/~lynn/2001d.html#44 IBM was/is: Imitation...
https://www.garlic.com/~lynn/2001k.html#5 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2002.html#10 index searching
https://www.garlic.com/~lynn/2002e.html#44 SQL wildcard origins?
https://www.garlic.com/~lynn/2002g.html#60 Amiga Rexx
https://www.garlic.com/~lynn/2002h.html#17 disk write caching (was: ibm icecube -- return of
https://www.garlic.com/~lynn/2002i.html#69 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002k.html#8 Avoiding JCL Space Abends
https://www.garlic.com/~lynn/2002k.html#9 Avoiding JCL Space Abends
https://www.garlic.com/~lynn/2002l.html#71 Faster seeks (was Re: Do any architectures use instruction
https://www.garlic.com/~lynn/2002n.html#36 VR vs. Portable Computing
https://www.garlic.com/~lynn/2002o.html#54 XML, AI, Cyc, psych, and literature
https://www.garlic.com/~lynn/2002q.html#32 Collating on the S/360-2540 card reader?

--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

pgut001@xxxxxxxx on 1/7/2003 10:52 pm wrote:
I've used that explanation too :-). The conversation went something like this:

Other person: "Why is X.500 so special? Why is no-one else doing this?"

Me: "Get your favourite book on database technology and look up 'Hierarchical databases'".

[Time passes]

Other person: "I looked in several books. Many didn't mention it at all, and one had a half-page historical note saying it's something that was obsoleted by better technology more than two decades ago".

Me: "Exactly".

Peter.


OCSP and LDAP

From: Lynn Wheeler
Date: 01/08/2003 05:51 AM
To: pgut001@xxxxxxxx (Peter Gutmann)
cc: ambarish@xxxxxxxx, anders.rundgren@xxxxxxxx, ietf-pkix@xxxxxxxx
Subject: Re: OCSP and LDAP
total trivia:
from the previous aadsm5 references to:
https://www.garlic.com/~lynn/95.html#13

two of the people in the conference room went on to the small client/server startup involved in this thing called SSL & HTTPS ... one had been involved in the tech. transfer from SJR to Endicott for SQL/DS and one had been involved in the tech transfer back from Endicott to STL for DB2.

Of all the people in the meeting .... I believe only one is still working for the same employer he was then ... and in that case he isn't exactly considered an employee ... most recently there has been some stuff about him getting ready to compete with some sailing team from "down under".

--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

X.500, LDAP Considered harmful Was: OCSP/LDAP

From: Lynn Wheeler
Date: 01/26/2003 10:46 PM
To: Dean Povey <povey@xxxxxxxx>
cc: ambarish@xxxxxxxx, anders.rundgren@xxxxxxxx,
Tony Bartoletti <azb@xxxxxxxx>, ietf-pkix@xxxxxxxx,
"Hallam-Baker, Phillip" <pbaker@xxxxxxxx>,
pgut001@xxxxxxxx, povey@xxxxxxxx
Subject: Re: X.500, LDAP Considered harmful Was: OCSP/LDAP
my original suggestion for SSL "certificates" stored in DNS .... was to just use something slightly more than signed public keys, as opposed to full x.509 asn.1 encoded certificates ... and piggyback it with the ip-address response. that easily fits within the 512-byte limit. if you want additional information .... you do more detailed queries for the additional information associated with a domain name ... however you don't get all the additional information that might be bound to a domain name unless you expressly ask for it.

to some extent, because certificates bind information statically at some point in the past, with possibly no real anticipation of all the uses it might be put to ... there is a tendency to try and pack as much as possible into the static binding. going to a much more dynamic infrastructure would significantly mitigate the pressure to maximize the value of a static certificate information binding (and thereby creating worst-case payload bloat for all cases).
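A back-of-the-envelope sketch of the payload argument (using the third-party Python "cryptography" package; the key type, record layout, and domain name are illustrative assumptions): a raw public key plus a signature over it fits comfortably under the historical 512-byte DNS/UDP limit, where a full ASN.1-encoded x.509 certificate generally would not.

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

zone_key = ec.generate_private_key(ec.SECP256R1())   # signs the record
host_key = ec.generate_private_key(ec.SECP256R1())   # the domain's key

# ~65-byte uncompressed P-256 public key
pub = host_key.public_key().public_bytes(
    serialization.Encoding.X962,
    serialization.PublicFormat.UncompressedPoint)
record = b"www.example.com|" + pub
sig = zone_key.sign(record, ec.ECDSA(hashes.SHA256()))  # ~70-72 byte DER sig

payload = record + sig
print(len(payload), "bytes; fits 512-byte UDP response:", len(payload) <= 512)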

nslookup example ...
> help
Commands:   (identifiers are shown in uppercase, [] means optional)
NAME            - print info about the host/domain NAME using default server
NAME1 NAME2     - as above, but use NAME2 as server
help or ?       - print info on common commands
set OPTION      - set an option
    all                 - print options, current server and host
    [no]debug           - print debugging information
    [no]d2              - print exhaustive debugging information
    [no]defname         - append domain name to each query
    [no]recurse         - ask for recursive answer to query
    [no]search          - use domain search list
    [no]vc              - always use a virtual circuit
    domain=NAME         - set default domain name to NAME
    srchlist=N1[/N2/.../N6] - set domain to N1 and search list to N1,N2, etc.
    root=NAME           - set root server to NAME
    retry=X             - set number of retries to X
    timeout=X           - set initial time-out interval to X seconds
    type=X              - set query type (ex. A,ANY,CNAME,MX,NS,PTR,SOA,SRV)
    querytype=X         - same as type
    class=X             - set query class (ex. IN (Internet), ANY)
    [no]msxfr           - use MS fast zone transfer
    ixfrver=X           - current version to use in IXFR transfer request
server NAME     - set default server to NAME, using current default server
lserver NAME    - set default server to NAME, using initial server
finger [USER]   - finger the optional NAME at the current default host
root            - set current default server to the root
ls [opt] DOMAIN [> FILE] - list addresses in DOMAIN (optional: output to FILE)
    -a          -  list canonical names and aliases
    -d          -  list all records
    -t TYPE     -  list records of the given type (e.g. A,CNAME,MX,NS,PTR etc.)
view FILE       - sort an 'ls' output file and view it with pg

============================

also some discussion in rfc2151/fyi30 (Internet & TCP/IP Tools & Utilities, Section 3, Finding Information About Internet Hosts and Domains).

https://www.garlic.com/~lynn/rfcidx7.htm#2151
2151
A Primer On Internet and TCP/IP Tools and Utilities, Kessler G., Shepard S., 1997/06/10 (52pp) (.txt=114130) (FYI-30) (Obsoletes 1739)

from above:

One additional query is shown in the dialogue below. NSLOOKUP examines information that is stored by the DNS. The default NSLOOKUP queries examine basic address records (called "A records") to reconcile the host name and IP address, although other information is also available. In the final query below, for example, the user wants to know where electronic mail addressed to the hill.com domain actually gets delivered, since hill.com is not the true name of an actual host. This is accomplished by changing the query type to look for mail exchange (MX) records by issuing a set type command (which must be in lower case). The query shows that mail addressed to hill.com is actually sent to a mail server called mail.hill.com. If that system is not available, mail delivery will be attempted to first mailme.hill.com and then to netcomsv.netcom.com; the order of these attempts is controlled by the "preference" value. This query also returns the name of the domain's name servers and all associated IP addresses.

The DNS is beyond the scope of this introduction, although more information about the concepts and structure of the DNS can be found in STD 13/RFC 1034 [19], RFC 1591 [21], and Kessler [16]. The help command can be issued at the program prompt for information about NSLOOKUP's more advanced commands.


===
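The MX query the primer walks through can be reproduced with a short script (a sketch assuming the third-party dnspython package; hill.com is just the RFC's example domain and can be substituted with any name):

import dns.resolver

# mail exchanger (MX) records for the domain; the lowest preference
# value is tried first, as described in the quoted text above
answers = dns.resolver.resolve("hill.com", "MX")
for rdata in sorted(answers, key=lambda r: r.preference):
    print(rdata.preference, rdata.exchange)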


https://www.garlic.com/~lynn/rfcidx3.htm#1035
1035 S
Domain names - implementation and specification, Mockapetris P., 1987/11/01 (55pp) (.txt=122549) (STD-13) (Updated by 1101, 1183, 1876, 1982, 1995, 1996, 2136, 2181, 2308, 2535, 2845, 3425) (DOMAIN)


https://www.garlic.com/~lynn/rfcidx5.htm#1591
1591
Domain Name System Structure and Delegation, Postel J., 1994/03/03 (7pp) (.txt=16481)

somewhat related discussions:
https://www.garlic.com/~lynn/aepay10.htm#78 ssl certs
https://www.garlic.com/~lynn/aepay10.htm#79 ssl certs
https://www.garlic.com/~lynn/aepay10.htm#80 Invisible Ink, E-signatures slow to broadly catch on (addenda)
https://www.garlic.com/~lynn/aepay10.htm#81 SSL certs & baby steps
https://www.garlic.com/~lynn/aepay10.htm#82 SSL certs & baby steps (addenda)
https://www.garlic.com/~lynn/aepay10.htm#83 SSL certs & baby steps
https://www.garlic.com/~lynn/aepay10.htm#84 Invisible Ink, E-signatures slow to broadly catch on (addenda)

--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

povey@xxxxxxxx on 1/26/2003 6:10 pm wrote:
While I don't disagree with your argument about X.500, there are real practical problems with using DNS for storing Certificates (if that is indeed what you mean by a DNS linked PKI). RFC2538 notwithstanding, the problem is that storing large objects like Certificates in DNS generally necessitates TCP transfers as they will exceed the magic 512 byte limit at which servers will generally truncate packets. Using TCP means that DNS which is a generally stateless service, now has to keep connection state with clients opening up all sorts of room for DOS and simple performance problems with what is a critical service. In addition, it is common for admins to configure firewalls so that TCP DNS is filtered to prevent zone transfers.

Kansas kicks off statewide PKI project

From: Lynn Wheeler
Date: 01/29/2003 07:19 AM
To: Digital Signature discussion <DIGSIG@xxxxxxxx>
Subject: Kansas kicks off statewide PKI project

https://web.archive.org/web/20040703142016/http://www.washingtontechnology.com/news/1_1/daily_news/19935-1.html
also
https://web.archive.org/web/20040225115817/http://www.gcn.com/vol1_no1/daily-updates/21004-1.html

01/28/03
Kansas kicks off statewide PKI project
By Dipka Bhambhani
GCN Staff

Kansas today began issuing digital certificates to employees to use with a planned statewide public-key infrastructure from VeriSign Inc. of Mountain View, Calif.

Ultimately, Kansas plans to issue certificates to all its employees for use on the PKI created by VeriSign Inc. of Mountain View, Calif., said Ron Thornburgh, secretary of state for Kansas. The statewide PKI effort has been in development for six years with representatives from 15 organizations.

Kansas took a statewide approach to avoid having to integrate separate systems later, said Janet Chubb, assistant secretary of state.


... snip ...
--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

Antwort: Re: Real-time Certificate Status Facility for OCSP - (RTCS)

From: Lynn Wheeler
Date: 02/05/2003 08:37 AM
To: Stephen Kent <kent@xxxxxxxx>
cc: ietf-pkix@xxxxxxxx, Olaf.Schlueter@xxxxxxxx,
    pgut001@xxxxxxxx (Peter Gutmann)
Subject: Re: Antwort: Re: Real-time Certificate Status Facility for
    OCSP - (RTCS)
I would claim that in order to activate a digital signature, the user needs to perform the RA function ... sending a signed message ... nominally containing a copy of the public key and some other identification information ... and this needs to be done regardless of whether the private key originates with the user or is pushed out to the user. in the pre-push case, the fact that a private key push agency .... also acting as a CA, happens to pre-push a copy of the certificate prior to the RA function is an anomaly of the process.

in effect, what binds/activates the digital signature process is the RA operation .... not the certificate .... the existence of a certificate is an implementation anomaly of the overall business process.

my bias has always been that the digital signature binding process is represented by the binding in the RA process .... independent of any existence of a certificate used in scenarios for trusted key distribution in offline environments ... aka the RA binding for the digital signature process is independent of the existence of a certificate as one means of representing that the RA business process has occurred. in my AADS claims, online methods of indicating that a valid RA process has been satisfied can be superior to generation of certificates as a representation of a valid RA business process.

I would claim that the direct equating of certificates to valid digital signatures contributes to the confusion. the RA binding process is what establishes the basis for valid digital signatures. certificates (should be) just one method of representing that such an RA binding business process has occurred. The confusion isn't that an agency can generate a private key and a certificate and push both out to the end user .... the confusion is the automatic acceptance that because a certificate exists ... it is equivalent to a valid digital signature binding process; the certificate should just be considered one way of representing that an RA-binding process has been performed.
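A minimal sketch of that RA function (using the third-party Python "cryptography" package; the registry, message layout, and names are illustrative assumptions): the signed registration message, proving possession of the private key, is what activates the binding, and the registry entry is what any later certificate would merely represent.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

registry = {}   # the authoritative binding registry

def ra_register(identity, public_key_pem, signature):
    pub = serialization.load_pem_public_key(public_key_pem)
    try:
        # proof of possession: the registration message is signed with
        # the matching private key ... regardless of who generated it
        pub.verify(signature, public_key_pem, ec.ECDSA(hashes.SHA256()))
    except InvalidSignature:
        return False
    registry[identity] = pub   # this binding is the activation ...
    return True                # ... a certificate is just one representation

user_key = ec.generate_private_key(ec.SECP256R1())
pem = user_key.public_key().public_bytes(
    serialization.Encoding.PEM,
    serialization.PublicFormat.SubjectPublicKeyInfo)
sig = user_key.sign(pem, ec.ECDSA(hashes.SHA256()))
print(ra_register("alice", pem, sig))   # True: the key is now activated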

of course, random refs:
https://www.garlic.com/~lynn/x959.html#aads

as noted in other recent threads there is sometimes confusion of asymmetric cryptography with the business processes of information hiding/secrecy and the business process of digital signature authentication. frequently for digital signatures, the business process may require that the private key never be known by anybody (and can only exist in a unique hardware token) .... while at the same time information hiding/secrecy will require private keys be escrowed (access to valuable corporate assets only accessible via a specific private key will mandate escrow for business continuity purposes). the technology is the same in both business processes .... but there are different secrecy requirements with regard to the treatment of the private key.

asymmetric crypto vis-a-vis public key business process ... hiding/secrecy and authentication/digital signature:
https://www.garlic.com/~lynn/2001g.html#14 Public key newbie question
https://www.garlic.com/~lynn/2001j.html#11 PKI (Public Key Infrastructure)
https://www.garlic.com/~lynn/2002i.html#67 Does Diffie-Hellman schema belong to Public Key schema family?
https://www.garlic.com/~lynn/2002i.html#78 Does Diffie-Hellman schema belong to Public Key schema family?
https://www.garlic.com/~lynn/2002l.html#5 What good is RSA when using passwords ?
https://www.garlic.com/~lynn/2002l.html#24 Two questions on HMACs and hashing
https://www.garlic.com/~lynn/2002o.html#56 Certificate Authority: Industry vs. Government
https://www.garlic.com/~lynn/2003.html#19 Message (authentication/integrity); was: Re: CRC-32 collision
https://www.garlic.com/~lynn/2003b.html#30 Public key encryption
https://www.garlic.com/~lynn/2003b.html#41 Public key encryption
https://www.garlic.com/~lynn/2003b.html#64 Storing digital IDs on token for use with Outlook

--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

kent@xxxxxxxx on 2/4/2003 7:02 pm wrote:
Olaf,

><SNIP>
>In Germany the german signature law is identifying a fourth case:
>4. the cert is in the repository, but not active yet (cert invalid,
>maybe valid in the future)
>
>This case is required (by law) if a CA issues not only certificates
>but private keys as well to the end user. Think of a bank producing
>and delivering a smartcard with keys and certificates on it to
>you. As long as you did not confirm the receipt of the card to the CA
>the CA must protect you by having the certificate "on hold" so during
>transport no valid signatures can be created. This may be handled by
>an "onHold" status on a CRL but is currently deployed in Germany
>using white list technology.

This seems an unnecessary complication. The CA could simply wait to post the cert to a directory until the user acknowledges receipt. I'm not in favor of adding complexity when there are other, simpler solutions to a problem.

>The second reason why german electronic signature technology is
>requiring a white list check is another obligation by law, namely
>that it is not allowed to render correctly issued user certificates
>invalid due to a CA key compromise (or any other kind of CA ceases
>operation). This is again achieved with white list technology and
>status information signed by a key with independent security. RTCS
>would fit well in here.

My response to Simon addresses this issue, i.e., German law seems to have mandated a solution to a problem that is not technically justified, given the alternatives. PKIX is not in the habit of setting standards to accommodate national level laws that may be ill advised.

Steve


A challenge

From: Lynn Wheeler
Date: 02/10/2003 05:18 PM
To: "Simon Tardell" <simon@xxxxxxxx>
cc: "'Anders Rundgren'" <anders.rundgren@xxxxxxxx>,
"'Denis Pinkas'" <Denis.Pinkas@xxxxxxxx>, ietf-pkix@xxxxxxxx,
    "'Stephen Kent'" <kent@xxxxxxxx>, Olaf.Schlueter@xxxxxxxx
Subject: RE: A challenge
in fact ... any trusted source for the key can be used to validate the signature. a major point of the certificate is a business process that binds something else to the meaning of the signature (especially in an offline environment where there is no recourse to an online authoritative reference).

in the asuretee scenario ... the same hardware token (key) can be registered (aka the registration authority business process) with multiple different authoritative agencies .... potentially providing different bindings for signatures based on the context in which the signature is used.

in the encryption scenario (as opposed to the digital signature case), the sender already needs to have a table of public keys (which may or may not be in certificate form .... or stored already in un-encoded form for convenient usage) ... and must select the recipient's public key from the table.

a similar process is also possible in the digital signature case for something like an x9.59 financial transaction .... the financial institution can have pre-decoded the certificate and stored it in the account record at the time the consumer registers the account. It is then no longer necessary for the client to transmit the certificate as part of a digitally signed x9.59 financial transaction.

discussion of previously registered certification .... as well as (digital signature) certificates that have been compressed to zero bytes
https://www.garlic.com/~lynn/ansiepay.htm#aadsnwi2 Relying-Party Certification Business Practices

--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

"Simon Tardell" on 2/10/2003 8:35 am wrote:

Anders,
> To have redundant certificates in case the CA goes haywire
> seems like a very peculiar solution and will force users to
> select the proper certificate. Which is?

Not at all. A signature is made with a key, not a certificate. As long as the identity of the certificate is not signed into the message, any certificate that corresponds to the right key can be used to validate the signature. The certificate may be sent with the signed message or obtained from somewhere else. It is largely the headache of the verifier.

The certificate that is used to verify the signature may even be issued after the signature is made. (And in fact, this would be a reasonable possible consequence of having CA redundancy -- after some time it could happen that both CAs that certify a certain key-identity-binding are replacement CAs deployed after the first signatures were made with the card)

For some reason the common paradigm is to let users choose a certificate associated with a key to imply the key. This practice requires the certificate to be present at the client. Or rather, a certificate. The certificate could be e.g. a self-signed cert (issued by the smart card itself at production time just to supply a GUI handle to applications looking for one, and to supply a subject name for the end entity to claim, if the application needs one). This would remove the need to ever update the smart card. Or, as Sun would say, the network is the computer.

Simon

Simon Tardell, cell +46 70 3198319, simon@xxxxxxxx


A challenge (addenda)

From: Lynn Wheeler
Date: 02/11/2003 09:23 AM
To: "Simon Tardell" <simon@xxxxxxxx>
cc: "'Anders Rundgren'" <anders.rundgren@xxxxxxxx>,
    "'Denis Pinkas'" <Denis.Pinkas@xxxxxxxx>, ietf-pkix@xxxxxxxx,
"'Stephen Kent'" <kent@xxxxxxxx>, Olaf.Schlueter@xxxxxxxx,
    epay@xxxxxxxx
Subject: RE: A challenge (addenda)
ref:
https://www.garlic.com/~lynn/aadsm13.htm#13

in fact, I would assert that this is one of the shortcomings of mistaking the certificate for the binding ... as opposed to its representing the business process of having done the binding.

in lots of businesses ... they will do their own binding business process .... based on their own business requirements. that is somewhat how the relying-party-only certificates appeared, as opposed to the earlier, one-size-fits-all, identity x.509 certificates.

the issue is that the relying-party-only certificates are very analogous to a bank issuing a payment card .... originally one for every account or type of account. this was in the old offline days .... the merchant (standing in for the bank) in an offline environment examined the card and checked it against a paper booklet negative list mailed out once a month or so. in the transition to the online environment, a magstripe was placed on the back of the card and online transactions were performed directly with the bank. this represented somewhat of a schizophrenic identity for the piece of plastic .... since it now carried the characteristics of operating both in the offline environment (the plastic credential and the paper booklet) and in an online environment (the magstripe on the back).

From a consumer perspective ... the incomplete migration to totally online .... has had a down side .... potentially tens of plastic cards needing to be carried around. The magstripe is just on the verge of not having to be unique. It is still shared-secret based (account number and PINs), and standard security practice requires a unique shared-secret for each security domain.

A migration to totally online and a migration to non-shared-secret (say digital signature public key) would eliminate the security requirement for 1) unique physical certificate/credential that is used in offline situations by stand-in parties/merchants responsible for examining the validity of the (plastic) certificate and 2) unique shared-secret.

The assertion is then that a single hardware token performing digital signature authentication can be used as binding device in multiple different contexts (since there is not the uniqueness security requirement that exists for shared-secret paradigms) and that the token would only require a certificate representing each of those binding contexts ... if the token were to be used in offline operations (requiring validation by 3rd parties not having access to the responsible binding authority). As long as the token was used in online transactions involving the business process that was also the binding authority .... or in business processes with entities that had online access to the binding authority ... then certificates representing such binding business processes are redundant and superfluous.

So the scenario is some future environment that is extensively digital signature authentication oriented with some token. Continuing to emulate the old offline paradigm, where a unique token/certificate exists for each context, potentially means that individuals carry hundreds of such tokens. This is somewhat analogous to the evolving copy protection paradigm of the early 80s where each application required that a unique floppy disk be inserted. If this were to continue into today's environment, an individual would have hundreds of copy protection floppy disks .... and be constantly swapping floppy disks ... similarly one could imagine having hundreds of hardware tokens and having to constantly swap them.

Now there is some possibility of just returning to the simple, one-size-fits-all, identity x.509 certificate. However, there is still the idea of binding a person to a specific bank account. Just because a person has an x.509 identity certificate doesn't mean that they can draw money from every account at every bank (or, the existence of an x.509 certificate doesn't equate to being permitted to perform all possible business operations). If there is a single certificate ... and it represents all possible binding operations ... then either the information about all such bindings has to be carried in the same certificate ... or the bindings have to be available at some online location. So mapping the offline certificate paradigm to an online binding environment .... would imply that some value from the certificate is registered in the online binding registry/account-record. However, if some value is to be registered .... it is possible to have an account number and have the binding registry directly register the public key. This bypasses the levels of indirection involved in registering something about a certificate, which is an offline representation of some other business process that binds/registers something about the public key. If I were going to register something about a certificate in an online binding registry (say "is a specific entity entitled to withdraw money from this specific bank account") .... and a public key certificate is just an offline representation of some other online binding registry/authority (that just possibly isn't always online or directly accessible) ... I assert that it is possible to register the public key directly and dispense with the levels of indirection related to registering anything about the certificate.

So: lots & lots of certificates ... each uniquely carrying some collection of possible binding attributes (like permissions, authorizations, etc). Or: a few certificates .... that only carry a very few binding attributes related to a public key, with individual operations carrying online account records holding the actual permissions and a mapping to some value in a certificate. However, it is possible in such situations to eliminate the indirection of registering the certificate that maps to the public key ... and just register the public key. Normally, levels of indirection help when they allow change w/o affecting all the binding registries .... aka in the internet domain name system .... I can use the host name www.something ... and not need to worry that it may have dozens of different ip addresses that change over time. However, if I'm dealing with a certificate indirection infrastructure that I know is a direct mapping to a single public key, and the same certificate will never mask a remapping to a totally different public key, then I can bypass the certificate indirection and just record the public key.
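A minimal sketch of the point (the account name, truncated key string, and permission labels are all invented for illustration): if the registry entry must name a value unique to one public key anyway, it can simply hold the key itself, collapsing the certificate indirection.

# certificate indirection:  account -> cert serial -> (CA registry) -> key
# direct registration:      account -> public key + permissions
binding_registry = {
    "acct-1234": {
        "public_key": "04a1b2c3...",        # registered directly
        "permissions": {"withdraw", "view-balance"},
    },
}

def permitted(account, presented_key, operation):
    entry = binding_registry.get(account)
    return (entry is not None
            and entry["public_key"] == presented_key
            and operation in entry["permissions"])

print(permitted("acct-1234", "04a1b2c3...", "withdraw"))       # True
print(permitted("acct-1234", "04a1b2c3...", "close-account"))  # False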

One such situation would be a generic employee certificate .... there are lots and lots of them .... all mapping to different public keys. There is an environment that doesn't have (or need) direct access to the online employee binding registry and only cares whether somebody has a valid employee binding (represented by their generic employee certificate) but doesn't actually care who the employee is (and the situation is low-value enough that it doesn't require real-time access to the real employee binding registry). However, this has the down-side attribute that it represents the lots & lots of certificates paradigm: a unique generic certificate is required for each unique (low-value) domain (work, home, ISP login, specific website, etc). As soon as something like ISP login requires specific authorization information .... it is either in the certificate ... or in an online account record. If the ISP needs to register a unique value from a certificate that is unique for each authorized public key ... then it can go ahead and register the public key directly and dispense with any registering of anything related to a certificate (and some other agency's binding operation).

An ISP family certificate would work if it listed a dozen different public keys ... and the certificate could change the listed public keys in a trusted and reliable way that the ISP need never be aware that it was happening. However, in the internet domain name scenario this works because it is an online registry .... there aren't these offline credentials (representing the binding process) with long lifetimes floating around that can go bad and potentially need constant checking for revocation.

--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

A challenge

From: Lynn Wheeler
Date: 02/12/2003 01:47 PM
To: Torsten Hentschel <the@xxxxxxxx>
cc: Anders Rundgren <anders.rundgren@xxxxxxxx>,
ietf-pkix@xxxxxxxx, epay@xxxxxxxx
Subject: Re: A challenge
I assert that the traditional smartcards are somewhat like dinosaurs ... the signal just hasn't reached the brain yet.

I assert that the basic smartcard design point was during the '80s for a portable computing device .... one that predated the availability of portable input/output capability. Basically you carried all the computing around in the 7816 smartcard form factor and used it at generic input/output stations. However, that market niche had disappeared sometime in the early '90s with the advent of portable input/output capability in the form of cellphones and PDAs. I would guess that up to that point there had been tens of billions of dollars in the technology that all had to be written off. To some extent the smartcard continues on with a little bit of a life of its own .... in part because the investment has been totally written off .... and with zero-sum budgeting .... even if you only could leverage the earlier investment at a few cents on the dollar ... the technology continues to linger around.

So one of the places to leverage essentially the 7816 investment was supporting transferable personality for cellphones. However, in the past year or two there have been articles on the incremental cost that providing such a reader represents in cellphone manufacture. Furthermore, this transferable personality implementation still has several gaps .... the phone can be lost/stolen, there can be physical/electronic failure in the smartcard, and even the contact interface represents additional points of failure for the cellphone. The story line went something like: an online backup/restore of the personality subsumes all the function of the smartcard as well as addressing all the additional shortcomings .... while negating the need for having the additional cost of the smartcard interface (as well as eliminating a point of failure in the cellphone). The issue is that even with only needing a few cents on the dollar (compared to the original 7816 investment) .... the smartcard in the cellphone still can't address all the requirements .... and the online backup/restore, which does address all the requirements, then obsoletes the necessity for the smartcard. And, as per the cellphone business discussions, cellphone manufacturers can save some additional pennies on cellphone manufacture as well as eliminating a point-of-failure (the contact infrastructure for sliding the card in/out).

There may be a requirement to uniquely identify the use of the cellphone. In the digital signature world ... that somewhat maps to a hardware token with an on-chip generated key-pair .... where the private key never leaves the device. This can be built into the cellphone .... either as ROM in an existing chip .... or as a separate chip somewhat like the TCPA strategy.
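
a minimal sketch of that property (using Python's "cryptography" package; the class and method names are illustrative, not any particular chip's interface):

    from cryptography.hazmat.primitives.asymmetric import ed25519

    class DeviceToken:
        """stand-in for an on-chip key store"""
        def __init__(self):
            # key-pair generated on-device; deliberately no accessor
            # for the private half
            self._private_key = ed25519.Ed25519PrivateKey.generate()

        def public_key(self):
            # only the public key is ever exported off the device
            return self._private_key.public_key()

        def sign(self, message: bytes) -> bytes:
            # signing happens on-device; the private key never leaves
            return self._private_key.sign(message)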

So a scenario is that as part of online restore into a new cellphone .... either a previous key-pair is backed up and restored .... or the device-unique key is re-registered in all the places that the previous cellphone had been registered. And as per my previous post in the thread ... the registration/binding business process is actually independent of whether or not a certificate is issued ... aka the original certificate design point (from somewhat the same '80s era as the original smartcard) was a way of representing the registration/binding business process for an offline environment (which had no direct access to the authoritative registry).

https://www.garlic.com/~lynn/aadsm13.htm#14 A challenge (addenda)

One problem is that even if the smartcard was purely restricted to the protection and digital signature use of a private key .... there still is the whole unresolved issue of lost/stolen devices ... and any moving parts (even just slipping the card in/out past contacts) are more prone to failure than purely electronic operation. Note that the mechanical failure issues ... especially in high-traffic areas ... are also given as one of the reasons for the migration to ISO 14443 and away from ISO 7816. This brings up the whole issue of possible competing contactless technologies coming to bear .... 14443, 802.11, bluetooth, cellphone, etc.

some tcpa related discussions
https://www.garlic.com/~lynn/2002i.html#71 TCPA
https://www.garlic.com/~lynn/2002j.html#55 AADS, ECDSA, and even some TCPA
https://www.garlic.com/~lynn/2002n.html#18 Help! Good protocol for national ID card?

also
https://www.garlic.com/~lynn/x959.html#aads

has URL to a talk on assurance that i gave at the TCPA track at past intel developer's conference.

--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

Torsten Hentschel on 2/12/2003 1:31 am wrote
Hi Anders,

... snip ...

> ============================================
> But frankly I see no future in smart cards of the type Germany
> is investing in. A mobile phone is such a tremendously
> more powerful "container" that allows users to do things
> that smart card owners cannot even _dream_ about. Since
> mobile phones are on-line by definition, static key and
> certificate schemes then become rather irrelevant.
> ============================================

Well, mobile phones contain smart cards, don't they? I absolutely do not see the dead end here.

Kind regards,

Torsten
--
Torsten Hentschel
[The positions of mine as outlined above are not necessarily


A challenge

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler
Date: 02/13/2003 09:27 AM
To: epay@xxxxxxxx
cc: "Anders Rundgren" <anders.rundgren@xxxxxxxx>,
ietf-pkix@xxxxxxxx, "Torsten Hentschel" <the@xxxxxxxx>
Subject: RE: A challenge
my previous post in this thread was more on the semantics of certificates
https://www.garlic.com/~lynn/aadsm13.htm#14 A challenge (addenda)

rather than the most recent post
https://www.garlic.com/~lynn/aadsm13.htm#15 A challenge
which is about the 7816 contact smartcard. one of the issues is that 7816 also pretty much dictates the packaging and form factor. migration away from 7816 contact to any of the contactless/wireless conventions can also totally open up the packaging and form factor issues.

my certificate assertion wasn't that you can't have certificates representing a binding/association registration process, my assertion was that the certificates .... were actually just that ... a representation of a binding/association registration process (along with a side note that in an online environment the use of certificates as a representation of the binding/association registration process would frequently be redundant and superfluous ... not that it wouldn't work).

My primary points were 1) that it seems that sometimes the certificates are mistaken for the binding/association registration business process ... as opposed to just a representation of the binding/association registration business process (originally targeted for recipients who would never have been involved in any binding/association registration business process) .... and 2) that sometimes such semantic confusion can complicate sorting out which processes are addressing the binding/association registration business process itself and which are addressing just the management of things that represent that business process (especially in an online environment where having separate representations can be redundant and superfluous).

However, just because something may be redundant and superfluous doesn't mean that it can't work.

The issue in something like login .... is that supposedly something has been registered to allow login .... in the PKI/certificate case, presumably what has been registered is some information that is bound in a PKI certificate (name, serial number, etc) ... in addition, the login process has the public key registration of acceptable certification authorities (that can be used to validate acceptable certificates to the login process). I've never claimed that doesn't work ... I've just claimed that frequently it is possible to show semantic equivalence between the login process registering some value out of a certificate and the login process registering the public key directly.

The PKI registration process provides a type of information mapping indirection (somewhat like DNS provides a real-time mapping between hostnames and ip-addresses) .... where the CA digital signature asserts the equivalence of the information (like a person's name or userid) bound in a certificate and a public key. The login process then has a set of registered CA public keys. In the PKI scenario, the login process gets a signed message that has an appended digital signature and an appended certificate. The login process finds the registered CA public key and validates the certificate, it then uses the public key in the certificate to validate the signed message. At this point it presumably has a proven equivalence between the entity sending the message and some bound information in the certificate (like a name). The login process can then have its own registry that maps between names found in certificates and userids .... and uses that to establish a valid login for that userid. The information assurance mapping goes from a registered CA public key, to the certificate, to the name bound in the certificate, and finally to the registered userid.
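
a minimal sketch of the two mappings (using Python's "cryptography" package; the toy certificate format and the names are mine, not any real PKI encoding):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    ca_key = ed25519.Ed25519PrivateKey.generate()
    user_key = ed25519.Ed25519PrivateKey.generate()

    # toy "certificate": the CA signs the binding of a name to a public key
    cert_body = b"CN=Alice|" + user_key.public_key().public_bytes_raw()
    cert_sig = ca_key.sign(cert_body)

    registered_ca_keys = [ca_key.public_key()]   # login's registered CAs
    name_to_userid = {b"CN=Alice": "alice"}      # login's own name registry

    def pki_login(message, message_sig, cert_body, cert_sig):
        for ca_pub in registered_ca_keys:        # 1) validate the certificate
            try:
                ca_pub.verify(cert_sig, cert_body)
                break
            except InvalidSignature:
                continue
        else:
            raise InvalidSignature("no registered CA signed this certificate")
        name, _, raw_pub = cert_body.partition(b"|")
        user_pub = ed25519.Ed25519PublicKey.from_public_bytes(raw_pub)
        user_pub.verify(message_sig, message)    # 2) validate the signed message
        return name_to_userid[name]              # 3) name -> userid mapping

    # the semantically equivalent certificate-less registration: the
    # login registry binds the public key directly to the userid
    userid_to_key = {"alice": user_key.public_key()}

    def direct_login(userid, message, message_sig):
        userid_to_key[userid].verify(message_sig, message)
        return userid

    msg = b"login request"
    sig = user_key.sign(msg)
    assert pki_login(msg, sig, cert_body, cert_sig) == direct_login("alice", msg, sig)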

I've never asserted that it doesn't work. I have asserted that in a situation where the login process has its own registry and mapping, directly registering an entity's public key bound to a userid .... can be equivalent to an entity registering their public key with a CA, the CA signing a certificate that equates a name in a certificate with that public key, and then the login process registering a name in a certificate bound to a userid. The issue isn't that the levels of PKI information indirection (between public key and userid) don't work, the point was purely that they may be redundant and superfluous.

This is somewhat equivalent to the IETF pk-init draft for Kerberos, which provides for both certificate (aka PKI) based registration as well as certificate-less registration .... aka directly registering the entity's public key. The business process of registering a name from a certificate as equivalent to a userid .... can be shown to be the same as directly registering a public key as equivalent to a userid. The issue then is whether all the additional business processes in support of the certificate-based registration (providing information indirection between the name in the certificate and the entity's public key) provide real value or are purely redundant and superfluous (given that there has to be a complex infrastructure for not only creating the certificates .... but once created, these certificates now also have to be managed).

Now the intersection of certificates and smartcards (actually hardware tokens) .... was whether there was a unique token/certificate pair for every environment or whether one (or a very small number of) token(s) could be used across a multitude of environments. If the environment supported a registration process, then the above assertion that a public key can be directly registered would support the public key of a single token being registered in multiple environments. The counter-argument is some future environment that has migrated to hardware token authentication potentially requiring a unique token per environment and an entity possibly having to manage hundreds of such tokens.

So an additional assertion is that an ideal place for certificates is where there isn't an additional registration process and the certificate is sufficient for establishing both authentication as well as permissions (aka all/any generic employee certificate gets an entity through the door).

A corresponding scenario would be a system login that has no userid directory/registration and when presented with a login message with an attached certificate .... it is possible for the login process to establish the complete user environment from just the contents of the certificate (not requiring a userid directory/registration at all).

The problem is that this tends to require a unique certificate for each environment ... which may or may not imply a corresponding unique hardware token. If it requires a unique hardware token, then the future possibility is that people walk around with hundreds of hardware tokens.

Moving out of the digital signature domain but staying in the area of authentication, consider biometrics. There is the possibility that a login process directly registers a person's fingerprint in the userid directory. The person then types in their userid and applies their finger to a fingerprint sensor. I assert that the direct registration of a person's fingerprint in a userid directory is equivalent to the direct registration of a public key in the userid directory. However, the downside is that a person's fingerprint is, in effect, a shared-secret ... anybody copying the fingerprint representation and being able to regenerate it can impersonate that entity. The upside of public key registration is that it isn't a shared-secret, knowledge of the public key isn't sufficient to impersonate.
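
a minimal sketch of the difference (Python; the template value is illustrative):

    from cryptography.hazmat.primitives.asymmetric import ed25519

    # fingerprint as shared-secret: the verifier's stored template is by
    # itself sufficient to impersonate
    stored_template = b"alice-fingerprint-template"

    def fingerprint_check(presented: bytes) -> bool:
        return presented == stored_template    # holder of the template passes

    assert fingerprint_check(stored_template)  # copying the stored value wins

    # public key registration: the verifier stores only the public key;
    # passing requires a signature, which needs the private key that
    # never left the user's token
    token_key = ed25519.Ed25519PrivateKey.generate()
    stored_public_key = token_key.public_key()

    def pubkey_check(message: bytes, signature: bytes) -> None:
        stored_public_key.verify(signature, message)  # raises if forged

    pubkey_check(b"login", token_key.sign(b"login"))  # only the token can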

So, sort of a summary .... 1) understanding that the binding/registration process is independent of certificates that represent that process can result in better designs dealing with the two separate issues and 2) when there are additional registries (like userid directories) it is possible to view certificates as a type of information indirection, and such levels of indirection may be redundant and superfluous (not that they can't work, just that they are redundant and superfluous).

some kerberos related threads:
https://www.garlic.com/~lynn/subpubkey.html#kerberos

and somewhat topic drift ... redundant and superfluous may represent additional complexity and cost that are hard to justify. From recent discussion about DNS, KISS, theory, and practice:
https://www.garlic.com/~lynn/aadsm12.htm#9 Crypto Forum Research Group ... fyi
https://www.garlic.com/~lynn/2003c.html#24 Network separation using host w/multiple network interfaces
https://www.garlic.com/~lynn/2003c.html#56 Easiest possible PASV experiment
https://www.garlic.com/~lynn/2003c.html#57 Easiest possible PASV experiment

--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

tjones@xxxxxxxx on 2/12/2003 8:21 pm wrote:

For better, or worse, the current sc logon to o/s's uses a cert in the sc. It could be argued that the sc really only needs to cache the user's name in an insecure manner, but the point is that a PKI is used to authenticate the user w/ the sc. It seems to work. It's hard to imagine how it could be improved by some structure other than a PKI. There is no biz rule associated w/ the authentication.

I doubt that u can argue that it does not work. I am sure that a better sol'n is possible, that is always true.

So, let's just use sc w/ cert for logon authN. Ok? ..tom


A challenge

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler
Date: 02/13/2003 10:22 AM
To: Al Arsenault <awa1@xxxxxxxx>
cc: Anders Rundgren <anders.rundgren@xxxxxxxx>,
epay@xxxxxxxx, ietf-pkix@xxxxxxxx, Torsten Hentschel <the@xxxxxxxx>
Subject: Re: A challenge
the article from the cellphone manufacturers .... asserted that it was less expensive to provide a mechanism to save the current personality and load it into a new phone than it was to manufacture the slot for the smartcard reader interface ... with an overall infrastructure cost savings. the corresponding assertion was that the current smartcard & smartcard interface was as inexpensive as it was because of the significant investment that was made for generic, ubiquitous smartcard use .... not solely restricted to cellphone personalities. and finally, people could use the personality save/restore to address the issue where the smartcard is lost/stolen/damaged.

My other assertion was that the existing operation may linger for some time .... since the original smartcard investment appears to have been written off .... and smartcard use can essentially be had for a few cents on the dollar. The transition to save/restore requires new investment in the save/restore infrastructure .... which means that the cost savings in eliminating the smartcard reader in cellphones would have to cover the save/restore infrastructure investment (and/or people find the additional benefit of having lost/stolen/damaged coverage).

Note that the cost savings of using a common smartcard infrastructure is so great that the cellphones and the SIMs share the same manufacturing components .... down to SIMs being manufactured as smartcards ... and then the SIMs punched out of the plastic smartcard form factor. If the other uses of smartcards migrate to various contactless/wireless paradigms for various reasons .... that could mean that the cellphone market segment would carry more of the smartcard infrastructure costs (even at a few more cents on the dollar), making it easier to justify the funding of a save/restore infrastructure transition.

I didn't say smartcards were wrong. I said that they had a design point based on the technology from the '80s ... and that changed by the early '90s .... effectively making the technology assumptions and requirements from the '80s no longer valid.

I wasn't making them (OSI comments) very loud in 1983 ... I was doing some other things, although there were jokes about OSI being from pre-1970s, telco, point-to-point copper, high error rate, low bandwidth design point "people" ... aka by the time the OSI standard was passed it was also obsolete. While I was doing a little tcp/ip stuff in the 1983 era .... it wasn't a primary focus and I didn't get involved in any OSI issues until slightly later.

I was starting to get more vocal about OSI by possibly 1985. By interop '88 I was pretty vocal .... there were all these things in there with OSI, x.400, complex monitoring. One of the things at interop '88 ... I would guess that a significant percentage of the IETF community was supporting various of the alternate candidates to SNMP. I had a couple workstations in a booth kitty-corner from the booth where Case was "officially" located ... and was able to get Case to install SNMP on the workstations to demo .... even tho large portions of the rest of the booths had various SNMP alternatives.

I was also on the XTP technical advisory board and working on HSP. The majority of the people in X3S3.3 were giving us a really bad time because X3S3.3 had responsibility for standards related to OSI levels 3&4 ... and the official charter was that you can't have standards that violate OSI levels 3&4. My response was that ISO & ANSI were already totally schizophrenic because the ethernet standard (through IEEE, and recognized as a standard at ISO) combined levels 1, 2, and part of 3 ... i.e. the MAC interface includes a portion of routing from level 3. HSP was going to go directly from transport/level4 to the LAN interface in the middle of level3. Core X3S3.3 basically said that you couldn't have protocol standard work in X3S3.3 that specified an interface to MAC/LAN .... because that violated OSI. I've had more than a few choice words about gosip also.

slightly related .... my wife and I were operating a backbone at the time of the NSFNET1/T1 bid ... and weren't allowed to bid ... but an NSF audit of our backbone claimed it was at least five years ahead of all the NSFNET1/T1 bids to build something new. We were also doing some interesting/non-traditional things with crypto at the time ... but that is another story. However, we did also come up with the 3-layer architecture .... which somewhat grew out of a response that my wife wrote and presented to a certain TLA gov agency RFI for an enterprise distributed computing environment. That got us into trouble with various factions that were trying to put the client/server genie back into the bottle:
https://www.garlic.com/~lynn/subnetwork.html#3tier

OSI was possibly a bad choice to choose as an example. However, something that I was really agitated about in the early 1980s time-frame was that the traditional mainframe computing environment was starting to commoditize .... first with mid-range computers and then with PCs and workstations. Maybe another example is this thing called electronic commerce.

lots of threads related to OSI "blech":
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

some recent discussions about interop '88
https://www.garlic.com/~lynn/subnetwork.html#interop

past threads about high speed networking in the '80s
https://www.garlic.com/~lynn/subnetwork.html#hsdt
https://www.garlic.com/~lynn/subnetwork.html#internet
https://www.garlic.com/~lynn/subnetwork.html#bitnet

random past specific mentions regarding gosip:
https://www.garlic.com/~lynn/99.html#114 What is the use of OSI Reference Model?
https://www.garlic.com/~lynn/99.html#115 What is the use of OSI Reference Model?
https://www.garlic.com/~lynn/2000b.html#0 "Mainframe" Usage
https://www.garlic.com/~lynn/2000b.html#59 7 layers to a program
https://www.garlic.com/~lynn/2000b.html#79 "Database" term ok for plain files?
https://www.garlic.com/~lynn/2000d.html#16 The author Ronda Hauben fights for our freedom.
https://www.garlic.com/~lynn/2000d.html#43 Al Gore: Inventing the Internet...
https://www.garlic.com/~lynn/2000d.html#63 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#70 When the Internet went private
https://www.garlic.com/~lynn/2001e.html#17 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001e.html#32 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001i.html#5 YKYGOW...
https://www.garlic.com/~lynn/2001i.html#6 YKYGOW...
https://www.garlic.com/~lynn/2002g.html#21 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#30 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002i.html#15 Al Gore and the Internet
https://www.garlic.com/~lynn/2002m.html#59 The next big things that weren't
https://www.garlic.com/~lynn/2002n.html#42 Help! Good protocol for national ID card?

--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

Al Arsenault on 2/13/2003 5:43 am wrote:

Lynn,

Interesting message from you, as always. Fascinating predictions on the future. Only a few issues need to be raised, IMHO. :-)

One of those issues relates to the whole "secure backup and restore" or "secure download" of credentials. That's currently an unsolved problem. Fortunately, IETF has a working group - SACRED - addressing that issue. While SACRED started out a couple of years ago with a lot of interest and enthusiasm, interest seems to have waned. There are only a few people actively participating, and the group is about to wrap up with its solution. While I and others hope that the proposed solution will succeed and solve parts of the problem, it's not a complete solution (e.g., it addresses only device - credential server interactions; it doesn't support device - device operation). So consider this message a plea for others interested in this area to get involved in this work.

Until SACRED or some similar solution becomes widely implemented, smartcards - e.g., SIMs or WIMs in phones - aren't going away. The business model in many parts of the world relies on a large number of customers replacing their handsets frequently. In many areas I'm familiar with, it's common for a large set of people to replace their handsets every few months, so that they always have the latest, smallest, most colorful, "coolest" device. The whole notion of a smart card supports this model, by allowing the user to get a new handset and dispose of the old one while maintaining the same service WITHOUT INTERACTING with the mobile operator, because those interactions with the carbon-based life forms who staff the operator outlets represent one of the highest costs in the system. With a SIM/Smart card that can be moved from one handset to another, there's no need to involve any other carbon-based life form. Since there's now no way to securely move the credentials electronically that's widely interoperable (lots of us have our own, somewhat proprietary schemes), for most mobile operators/service providers the smart card remains the method of choice.

An automated - read as "efficient and low cost" - way to securely register new devices into the system is also a requirement, and it's also currently an unsolved problem. Oh, there are other standards groups working on it, but I'm not sure they're going to have good solutions any time soon.

So it's nice to sit here and make statements that "this technology was just wrong, and 20 years from now everyone will acknowledge that it's obvious", but tell me Lynn, how many statements about ISO vs. TCP/IP did you make in 1983?

Al Arsenault
Diversinet Corp.


A challenge

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler
Date: 02/14/2003 10:31 AM
To: Al Arsenault <awa1@xxxxxxxx>
cc: Anders Rundgren <anders.rundgren@xxxxxxxx>,
epay@xxxxxxxx, ietf-pkix@xxxxxxxx,
Torsten Hentschel <the@xxxxxxxx>
Subject: Re: A challenge
one of the factors driving 7816 standardization/commoditization in the '80s was that 7816 was a portable computing device ... but was dependent upon somewhat ubiquitous deployment of (stationary) input/output stations.

the spread of integrated portable input/output technology (cellphones & PDAs) in the early '90s started to chip away at the original 7816 target market (i.e. portable computing devices w/o the requirement for ubiquitous input/output stations).

further eroding this target market is the spread of numerous contactless/wireless technologies. a big part of the 7816 standardization was the requirement for physical interoperability between the portable 7816 devices and the input/output stations. contactless technologies enable arbitrary form-factors .... further chipping away at the 7816 market segment. one of the targets for iso 14443 is a current primary 7816 market segment, point-of-sale .... where the contact infrastructure is showing failures .... especially in high-traffic areas. contactless/wireless breaks the requirement for form-factor specific implementation (required by contact interoperability) and starts to allow interoperability with devices regardless of their form factor.

for the aads chip strawman
https://www.garlic.com/~lynn/x959.html#aadsstraw

in the '98 timeframe, I had somewhat facetiously claimed that I would take a $500 milspec part and cost-reduce it by two orders of magnitude while improving the integrity.

at an assurance panel in the TCPA track at the intel developer's conference, I claimed that the chip met all the functions/requirements of the TPM (and was finalized much earlier) and was equivalent to the best integrity from gov. agencies. One of the TCPA people in the audience commented that wasn't fair since I didn't have a committee of 200 people helping me. One of the people on the panel from a gov. TLA replied that it was possibly true, except in the area of radiation hardening. other past tcpa related posts:
https://www.garlic.com/~lynn/aadsm12.htm#0 maximize best case, worst case, or average case? (TCPA)
https://www.garlic.com/~lynn/aadsm12.htm#14 Challenge to TCPA/Palladium detractors
https://www.garlic.com/~lynn/aadsm12.htm#15 Challenge to TCPA/Palladium detractors
https://www.garlic.com/~lynn/aadsm12.htm#16 Feasability of Palladium / TCPA
https://www.garlic.com/~lynn/aadsm12.htm#17 Overcoming the potential downside of TCPA
https://www.garlic.com/~lynn/aadsm12.htm#18 Overcoming the potential downside of TCPA
https://www.garlic.com/~lynn/aadsm12.htm#19 TCPA not virtualizable during ownership change (Re: Overcoming the potential downside of TCPA)
https://www.garlic.com/~lynn/aadsm12.htm#24 Interests of online banks and their users [was Re: Cryptogram: Palladium Only for DRM]
https://www.garlic.com/~lynn/aadsm12.htm#30 Employee Certificates - Security Issues
https://www.garlic.com/~lynn/aadsm12.htm#63 Intertrust, Can Victor Shear Bring Down Microsoft?

--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

A challenge

From: Lynn Wheeler
Date: Fri, 14 Feb 2003 11:12:15 -0700
To: epay@xxxxxxxx
Cc: "Anders Rundgren" <anders.rundgren@xxxxxxxx>, ietf-pkix@xxxxxxxx,
"Torsten Hentschel" <the@xxxxxxxx>
Subject: RE: A challenge
the point was offline from the registry

in login there is either

1) a registry in the unit requiring login which contains a binding between some identity and the permissions (i.e. what things can log in and do).

2) or a purely ROM login process that accepts any and all certificates signed by some prespecified CA .... and will perform whatever operations are indicated by the certificate.

the previous posting outlined a login scenario where there is some local registry in the device .... that is also certificate based .... the local device registry has both a list of acceptable CA public keys as well as some list of acceptable identification information from one or more certificates. The person presents a certificate, an attempt is made to validate the CA signature against the list of valid CA public keys in the registry .... and then some identification information contained in the certificate (say a person's name) is used to look up a registry entry for "certificates" that are allowed to log in and possibly their mapping to permissions and authorization. The public key from the certificate can be used to validate a signed message .... either before or after the lookup to see if there is a userid registry entry that corresponds to some identification field in the certificate (say the person's name).

My assertion is that in the case where there is a registry of valid userids with a mapping to a value in the certificate .... the whole infrastructure is equivalent to the "information indirection" paradigm provided by the certificate infrastructure.

Say I buy a mobile device .... and want to establish my "ownership" by registering my certificate in the device as the true owner. So in the device are:

1) table of CA public keys
2) certificate with my public key and maybe my name signed by one of the CAs
3) login message
4) my signature on the login message

I assert that the levels of indirection can be totally eliminated by instead replacing the table of CA public keys with a table of "owner" public keys .... and mapping the table of "owner" public keys directly to permissions.

The table of CA public keys is eliminated and replaced with a table of owner public keys. The registry mapping some unique certificate field identifier to permissions is eliminated and instead the registry of permissions is mapped directly to owner public keys. The information indirection provided by the CA PKI infrastructure and the certificates is eliminated.

The issue with regard to being offline from the registry is where there is no local registry which maps a certificate field to local device permissions. The local device has no access to the registry mapping certificate field to permissions ... and therefore must totally rely on information within the certificate as to what permissions are entitled.

This "offline" infrastructure has only a CA table of public keys in the device. Certificates are presented signed by any of the acceptable CAs. The device then extracts all permissions directly from the certificate because it doesn't have online (or otherwise) access to the registry of permissions for this device.

So my original assertion is that if the device has access (online or otherwise) to the registry of permissions for the device .... the indirection infrastructure provided by PKI certificate based operation can be collapsed: the CA table of acceptable public keys plus the separate registry of permissions are replaced by a single combined registry table containing the acceptable public keys and the associated permissions.

So looking at PKI from the reverse standpoint, a PKI is an extension of the single combined registry table to a two level structure (in its basic form). The simple door entry scenario previously described is a single combined registry table .... where the "owner" is directly mapped to permissions. The permission "owner" is able to authorize agents or surrogates by way of certificates. An "owner" can directly enter by signing a message with their private key ... or they can authorize others to enter by signing certificates containing the surrogate/agent's public key. The door entry validates the surrogate certificate as coming from a valid owner ... and then validates the agent/surrogate's digital signature. The door entry system doesn't require any access to any registry containing any information regarding the surrogate/agent.
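
a minimal sketch of the door entry scenario (using Python's "cryptography" package; the toy certificate is just an owner-signed public key, not any real format):

    from cryptography.hazmat.primitives.asymmetric import ed25519

    owner_key = ed25519.Ed25519PrivateKey.generate()
    agent_key = ed25519.Ed25519PrivateKey.generate()

    # the door's only registry: owner public keys mapped to permissions
    door_registry = {owner_key.public_key().public_bytes_raw():
                     {"enter", "delegate"}}

    # the owner delegates entry by signing the surrogate/agent's public key
    agent_cert = agent_key.public_key().public_bytes_raw()
    agent_cert_sig = owner_key.sign(agent_cert)

    def door_open(message, message_sig, cert, cert_sig, owner_pub_raw):
        if "delegate" not in door_registry.get(owner_pub_raw, set()):
            return False
        owner_pub = ed25519.Ed25519PublicKey.from_public_bytes(owner_pub_raw)
        owner_pub.verify(cert_sig, cert)        # certificate came from an owner
        agent_pub = ed25519.Ed25519PublicKey.from_public_bytes(cert)
        agent_pub.verify(message_sig, message)  # message came from the surrogate
        return True   # note: no surrogate/agent registry entry was consulted

    msg = b"open the door"
    assert door_open(msg, agent_key.sign(msg), agent_cert, agent_cert_sig,
                     owner_key.public_key().public_bytes_raw())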

The assertion is that anytime the local environment needs a local registry regarding specific surrogate/agent operation .... then the certificates are a form of information indirection and likely redundant and superfluous .... since the surrogate/agent's public key can be directly added to the single permissions table. Logically, permissions are a single table. For arbitrary implementation reasons the table of permissions for CA public keys .... may carry the implied permission of only signing certificates for delegating authority to surrogate/agents. However, the permission table may be expanded for CA public keys to specify the type of delegation that a specific CA can perform.

However, as soon as there has to be a local decision with regard to a surrogate/agent's permissions in a local registry directly accessible by the device .... then I claim that the levels of indirection provided by a certificate can be eliminated and the surrogate/agent's public key directly registered.

--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

tjones@xxxxxxxx on 2/14/2003 9:31 wrote:

Note that the point of logon is that mobile computers need to work off-line.

So any logon procedure needs to accommodate off-line as well as on-line.

As you point out, certs were designed in an off-line scenario. Seems to fit the problem nicely.

If we don't have smart cards with certs in them, we need to put the cert (binding) in every mobile device that would use that info.

That would assume that the mobile device was more secure than the smart card/cert combo, and had more current info.

My experience w/ mobile devices, including cell phones, leads me to believe that neither of these is true.

..tom


surrogate/agent addenda (long)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler
Date: 02/14/2003 03:02 PM
To: epay@xxxxxxxx
cc: "Anders Rundgren" <anders.rundgren@xxxxxxxx>,
ietf-pkix@xxxxxxxx, "Torsten Hentschel" <the@xxxxxxxx>
Subject: surrogate/agent addenda (long)
I claim that I can model an operational environment in terms of entities and permissions (for simplification attributes are also modeled as a kind of permission).

Registry is defined to be the method that an operational environment uses to map entities to permissions. A registry may be a single operational thing or it may be made up of multiple things, both explicit and implicit.

Digital signatures and public keys are used to authenticate entities to the operational environment. The operational environment uses public keys in the registry to represent entities (since they are unique to the authentication of the entity). The mapping between public keys and permissions is a function of the registry entry.

CA/PKI further extends the kinds of permissions to "delegation". A CA entity/public key is given the permission of delegation.

A CA uses its own registry to provide a mapping between entities, public keys, and permissions. A CA then signs a stale, static copy of some of the fields in its registry and calls it a certificate.

An operational environment can authenticate delegated permissions from a signed certificate by using the public key of a CA entity from its own registry. Just as a convention, I'll call delegated entities surrogates/agents. Full blown PKI defines the possibility of multi-level delegation as a trust hierarchy.

So my repeated assertion is that if the operational environment accesses a surrogate/agent registry entry (local, remote, online, etc) then the corresponding surrogate/agent certificate can be shown to be redundant and superfluous.

So, I've managed to explain this redundant and superfluous characteristic in a number of different ways:

1) a registry entry can contain anything that a certificate can contain, therefore all fields that can occur in a certificate can also be placed in a surrogate/agent registry entry. When all fields from a certificate are placed in a registry entry, the certificate can be compressed to zero bytes. These certificates aren't actually redundant and superfluous, they are just zero bytes.

2) the creation of a registry entry for a surrogate/agent is equivalent to a registration authority business process. anything that is done during a CA's RA process can be done during the creation of a registry entry for the surrogate/agent. If necessary, this can be viewed as caching the certificate in the surrogate/agent registry entry. Then while the certificate isn't strictly redundant and superfluous, any transmission of the certificate is redundant and superfluous since the operational environment already has access to the cached entry. This is especially useful when the inclusion of a certificate in a transmission represents a painful bloating of the total transmission size.

3) when there is a directly accessible registry entry (local, remote, online, etc) for a surrogate/agent, then the certificate may just represent information indirection .... somewhat like the DNS system's mapping of hostname to ip-address. the registry entry contains some value that can be matched to a field in the certificate. This provides an indirection mapping between the permissions in the registry entry and the entity's public key in the certificate (authenticated & delegated by the CA signatures and the CA entry in a directly accessible registry entry). However, since any value that can occur in a certificate can also be placed in the registry entry, the public key from the certificate can be directly placed in the surrogate/agent registry entry, eliminating the indirection provided by the certificate and making the use of the certificate redundant and superfluous.

4) an operational environment may require direct access to a surrogate/agent registry entry (local, remote, online, etc) because of permissions expressed in terms of aggregated information (information maintained in the registry entry that represents information aggregation, a difficult paradigm for a stale, static certificate) or permissions expressed in terms of real-time or near real-time information (again a difficult paradigm for a stale, static certificate).

5) If a stale, static certificate provides an identifier that is used to index a surrogate/agent registry entry (local, remote, online, etc) then it is also possible to include the same identifier as part of the message digitally signed by the surrogate/agent. As before, such a surrogate/agent registry entry can include anything that a certificate can include, including the public key of the surrogate/agent. By directly placing the public key in the surrogate/agent registry entry, and directly accessing the surrogate/agent registry entry using an identifier from the signed message, the signed message can be authenticated using the public key from the directly accessed surrogate/agent registry entry. The directly accessed surrogate/agent registry entry can contain any other field that might exist in the certificate, including all possible permission values. Since the digitally signed message can be authenticated without having to resort to the certificate, and since the directly accessed surrogate/agent registry entry can contain any field that a certificate can contain, the stale, static certificate transmitted as part of the message is redundant and superfluous (a sketch of this case follows below).
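
a minimal sketch of case 5 (using Python's "cryptography" package; the identifier and message format are illustrative):

    from cryptography.hazmat.primitives.asymmetric import ed25519

    agent_key = ed25519.Ed25519PrivateKey.generate()

    # surrogate/agent registry entry, indexed by the identifier that the
    # signed message itself carries; it holds the public key plus any
    # permission fields a certificate might otherwise have carried
    registry = {"agent-42": {"public_key": agent_key.public_key(),
                             "permissions": {"open_trunk"}}}

    def authenticate(message: bytes, signature: bytes):
        ident, _, _ = message.partition(b"|")   # identifier from the message
        entry = registry[ident.decode()]
        entry["public_key"].verify(signature, message)  # no certificate used
        return entry["permissions"]

    msg = b"agent-42|open_trunk"
    assert "open_trunk" in authenticate(msg, agent_key.sign(msg))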

The simplest scenario for a stale, static certificate is in an operational environment that only accesses CA delegation registry entries, and permissions are solely established on the basis of the contents of the CA registry entry and the contents of the certificate. This is the employee certificate that is used for door entry. The door entry operational environment only contains the public key of one or more CAs. The door is opened purely based on being able to validate the CA's signature on the certificate and then validate the employee's signature using the public key in the certificate. For opening the door, there is no recourse to any employee-specific entry. This is the "offline" door operation typical of low value situations. Higher value door operations tend to be "online" in the sense that they directly access employee-specific registry information. Such an online permission entry can be based on timely information (like whether the employee has been fired within the past couple hours) or aggregation (what other things the employee has done in the recent past). It is the accessing of the employee registry entry (local, remote, online, etc) for timely and/or aggregated information that makes the use of the stale, static certificate redundant and superfluous.

Similarly a portable device can be "offline" in the sense that its operation is totally determined from just a CA public key registry and a stale, static certificate (and doesn't have direct access to the surrogate/agent registry entry). Let's say there are a large number of "public" portable devices and anybody can utilize any device so long as they have a valid certificate for device usage. However, a certificate can become redundant and superfluous if the device has a surrogate/agent specific registry entry that it references (since it is possible to register any information that might be found in a certificate in a surrogate/agent registry entry). A portable device that might contain a surrogate/agent specific registry entry might be an owner-specific paradigm (i.e. the surrogate/agent registry entry in the device corresponds to the owner). An owner-specific paradigm could be implemented with certificates if the certificate contained the device specific identifier. The device would only work when a certificate was presented that contained the device-specific identifier in one of the certificate fields.

Let's say we are looking at a hardware token implementation as keys for automobiles. For an owner specific paradigm there can be a certificate/PKI based implementation or a certificate-less implementation. In the PKI-based implementation the automobile's internal permission registry table contains one or more CA public keys. The car starts for any hardware token that has a digitally signed certificate specifying the specific automobile serial number and authenticated by any of the CA public keys (and of course the hardware token signs a valid message that is authenticated with the public key in the certificate). The certificate-less implementation replaces the CA public keys in the automobile's internal permission registry table with the public keys of the owner's hardware tokens. Then only the directly listed hardware tokens are able to start the automobile. In the certificate-less scheme, there would be an administration mode used in conjunction with a valid hardware token to add/delete public keys in the automobile's registry.
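
a minimal sketch of the certificate-less automobile registry (Python; the key values and permission names are illustrative, and authenticating a token's signed message is as in the earlier sketches):

    # the car's table maps registered token public keys directly to
    # permissions; administration is itself just a permission
    car_registry = {
        b"owner-token-pubkey": {"start", "open_doors", "open_trunk", "admin"},
    }

    def permitted(token_pubkey: bytes, operation: str) -> bool:
        return operation in car_registry.get(token_pubkey, set())

    def add_token(admin_pubkey: bytes, new_pubkey: bytes, perms: set) -> None:
        if not permitted(admin_pubkey, "admin"):  # only an "admin" token may
            raise PermissionError("not an administrative token")
        car_registry[new_pubkey] = perms

    add_token(b"owner-token-pubkey", b"valet-token-pubkey",
              {"start", "open_doors"})
    assert permitted(b"valet-token-pubkey", "start")
    assert not permitted(b"valet-token-pubkey", "open_trunk")

    # invalidating a token is just deleting its entry ... no CRL involved
    del car_registry[b"valet-token-pubkey"]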

In the PKI-based implementation for "owned" automobile operation, it becomes somewhat more problematical to invalidate hardware tokens (aka certificates) for the automobile. One method would be to create registry entries for invalidated certificates (aka public key/hardware token). However, I would assert that as soon as surrogate/agent specific entries are required for invalidated public keys .... then the infrastructure has been established making certificate-less operation the KISS choice (with stale, static certificates being redundant and superfluous). Just create a positive list/registry of public keys .... possibly totally eliminating the delegation permission associated with CA public keys. The automobile registry then has permissions like administrative mode, start automobile, open trunk, open doors, etc for each public key (aka hardware token).

It is problematical whether CA public keys would be retained in the automobile registry solely associated with manufacturer and service operation. Standard automobile operation would be straight certificate-less with the owner's hardware tokens being directly registered in the automobile's registry table. However, there might be non-ownership related permissions like various service operations. There could be a manufacturer CA delegation permission associated with service operation and independent of automobile serial/identification. The manufacturer's public key would be in the automobile's table with delegation permissions. The manufacturer would sign public key certificates for service operations, enabling service specific permissions for all cars from the manufacturer.

However, CRLs really become a nightmare in this situation. I would contend that a much more KISS solution is that the service operation goes online with the manufacturer (in much the same way that POS terminals do online transactions) and that the automobile authenticates the service operation by accessing the manufacturer's online database. The manufacturer's public key is still in the automobile registry .... not for purposes of validating certificates but for purposes of validating online communication with the manufacturer for authentication of real-time service organization delegation. As a result, the operation is online with direct access to registry entries for service organizations and therefore service organization certificates are redundant and superfluous.

past references regarding compression of certificates to zero bytes:
https://www.garlic.com/~lynn/aepay3.htm#aadsrel1 AADS related information
https://www.garlic.com/~lynn/aepay3.htm#aadsrel2 AADS related information ... summary
https://www.garlic.com/~lynn/aepay3.htm#x959discus X9.59 discussions at X9A & X9F
https://www.garlic.com/~lynn/aadsmore.htm#client4 Client-side revocation checking capability
https://www.garlic.com/~lynn/aadsm3.htm#cstech3 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aadsm3.htm#kiss1 KISS for PKIX. (Was: RE: ASN.1 vs XML (used to be RE: I-D ACTION :draft-ietf-pkix-scvp- 00.txt))
https://www.garlic.com/~lynn/aadsm3.htm#kiss6 KISS for PKIX. (Was: RE: ASN.1 vs XML (used to be RE: I-D ACTION :draft-ietf-pkix-scvp- 00.txt))
https://www.garlic.com/~lynn/aadsm4.htm#6 Public Key Infrastructure: An Artifact...
https://www.garlic.com/~lynn/aadsm4.htm#9 Thin PKI won - You lost
https://www.garlic.com/~lynn/aadsm5.htm#x959 X9.59 Electronic Payment Standard
https://www.garlic.com/~lynn/aadsm5.htm#shock revised Shocking Truth about Digital Signatures
https://www.garlic.com/~lynn/aadsm5.htm#spki2 Simple PKI
https://www.garlic.com/~lynn/aadsm12.htm#64 Invisible Ink, E-signatures slow to broadly catch on (addenda)
https://www.garlic.com/~lynn/aepay10.htm#76 Invisible Ink, E-signatures slow to broadly catch on (addenda)
https://www.garlic.com/~lynn/2000b.html#93 Question regarding authentication implementation
https://www.garlic.com/~lynn/2000e.html#41 Why trust root CAs ?
https://www.garlic.com/~lynn/2001c.html#58 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001e.html#35 Can I create my own SSL key?
https://www.garlic.com/~lynn/2001f.html#79 FREE X.509 Certificates

past references to methodology of registry operation caching certificate kind of information (making certificate transmission redundant and superfluous)
https://www.garlic.com/~lynn/aepay3.htm#x959discus X9.59 discussions at X9A & X9F
https://www.garlic.com/~lynn/aepay6.htm#dspki5 use of digital signatures and PKI (addenda)
https://www.garlic.com/~lynn/aepay7.htm#3dsecure 3D Secure Vulnerabilities? Photo ID's and Payment Infrastructure
https://www.garlic.com/~lynn/aadsm4.htm#8 Public Key Infrastructure: An Artifact...
https://www.garlic.com/~lynn/aadsm8.htm#softpki16 DNSSEC (RE: Software for PKI)
https://www.garlic.com/~lynn/aadsm8.htm#softpki19 DNSSEC (RE: Software for PKI)
https://www.garlic.com/~lynn/aadsm8.htm#softpki20 DNSSEC (RE: Software for PKI)
https://www.garlic.com/~lynn/aadsm9.htm#cfppki4 CFP: PKI research workshop
https://www.garlic.com/~lynn/aepay10.htm#33 pk-init draft (not yet a RFC)
https://www.garlic.com/~lynn/aadsm12.htm#28 Employee Certificates - Security Issues
https://www.garlic.com/~lynn/aadsm12.htm#52 First Data Unit Says It's Untangling Authentication
https://www.garlic.com/~lynn/aadsm13.htm#7 OCSP and LDAP
https://www.garlic.com/~lynn/aadsm13.htm#16 A challenge
https://www.garlic.com/~lynn/2000.html#37 "Trusted" CA - Oxymoron?
https://www.garlic.com/~lynn/2001d.html#69 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001e.html#43 Can I create my own SSL key?
https://www.garlic.com/~lynn/2001e.html#46 Can I create my own SSL key?
https://www.garlic.com/~lynn/2002d.html#7 IBM Mainframe at home
https://www.garlic.com/~lynn/2002e.html#56 PKI and Relying Parties
https://www.garlic.com/~lynn/2002f.html#57 IBM competes with Sun w/new Chips
https://www.garlic.com/~lynn/2002k.html#8 Avoiding JCL Space Abends
https://www.garlic.com/~lynn/2002m.html#17 A new e-commerce security proposal
https://www.garlic.com/~lynn/2003.html#52 SSL & Man In the Middle Attack

past discussions of stale, static certificate issues vis-a-vis aggregated/timely information:
https://www.garlic.com/~lynn/aadsmail.htm#vbank Statistical Attack Against Virtual Banks (fwd)
https://www.garlic.com/~lynn/aadsmore.htm#hcrl3 Huge CRLs
https://www.garlic.com/~lynn/aadsmore.htm#client2 Client-side revocation checking capability
https://www.garlic.com/~lynn/aadsmore.htm#client3 Client-side revocation checking capability
https://www.garlic.com/~lynn/aadsmore.htm#client4 Client-side revocation checking capability
https://www.garlic.com/~lynn/aadsm2.htm#arch4 A different architecture? (was Re: certificate path
https://www.garlic.com/~lynn/aadsm2.htm#availability A different architecture? (was Re: certificate path
https://www.garlic.com/~lynn/aadsm2.htm#mauthauth Human Nature
https://www.garlic.com/~lynn/aadsm2.htm#pkikrb PKI/KRB
https://www.garlic.com/~lynn/aadsm4.htm#01 redundant and superfluous (addenda)
https://www.garlic.com/~lynn/aadsm4.htm#0 Public Key Infrastructure: An Artifact...
https://www.garlic.com/~lynn/aadsm4.htm#5 Public Key Infrastructure: An Artifact...
https://www.garlic.com/~lynn/aadsm5.htm#pkimort PKI: Evolve or Die
https://www.garlic.com/~lynn/aadsm7.htm#rhose10 when a fraud is a sale, Re: Rubber hose attack
https://www.garlic.com/~lynn/aadsm8.htm#softpki2 Software for PKI
https://www.garlic.com/~lynn/aadsm8.htm#softpki11 Software for PKI
https://www.garlic.com/~lynn/aadsm8.htm#softpki14 DNSSEC (RE: Software for PKI)
https://www.garlic.com/~lynn/aadsm9.htm#softpki23 Software for PKI
https://www.garlic.com/~lynn/aadsm9.htm#softpki24 Software for PKI
https://www.garlic.com/~lynn/aadsm9.htm#cfppki8 CFP: PKI research workshop
https://www.garlic.com/~lynn/aadsmail.htm#variations variations on your account-authority model (small clarification)
https://www.garlic.com/~lynn/aepay2.htm#aadspriv Account Authority Digital Signatures ... in support of x9.59
https://www.garlic.com/~lynn/aepay3.htm#openclose open CADS and closed AADS
https://www.garlic.com/~lynn/aepay4.htm#comcert16 Merchant Comfort Certificates
https://www.garlic.com/~lynn/aepay4.htm#x9flb12 LB#12 Protection Profiles
https://www.garlic.com/~lynn/aepay6.htm#crlwork do CRL's actually work?
https://www.garlic.com/~lynn/aepay6.htm#dspki3 use of digital signatures and PKI (addenda)
https://www.garlic.com/~lynn/aadsm11.htm#39 ALARMED ... Only Mostly Dead ... RIP PKI .. addenda
https://www.garlic.com/~lynn/aadsm11.htm#40 ALARMED ... Only Mostly Dead ... RIP PKI ... part II
https://www.garlic.com/~lynn/aadsm11.htm#42 ALARMED ... Only Mostly Dead ... RIP PKI ... part III
https://www.garlic.com/~lynn/aadsm12.htm#20 draft-ietf-pkix-warranty-ext-01
https://www.garlic.com/~lynn/aadsm12.htm#26 I-D ACTION:draft-ietf-pkix-usergroup-01.txt
https://www.garlic.com/~lynn/aadsm12.htm#27 Employee Certificates - Security Issues
https://www.garlic.com/~lynn/aadsm12.htm#29 Employee Certificates - Security Issues
https://www.garlic.com/~lynn/aadsm12.htm#32 Employee Certificates - Security Issues
https://www.garlic.com/~lynn/aadsm12.htm#33 two questions about spki
https://www.garlic.com/~lynn/aadsm12.htm#52 First Data Unit Says It's Untangling Authentication
https://www.garlic.com/~lynn/aadsm12.htm#54 TTPs & AADS Was: First Data Unit Says It's Untangling Authentication
https://www.garlic.com/~lynn/aadsm12.htm#55 TTPs & AADS (part II)
https://www.garlic.com/~lynn/aadsm13.htm#0 OCSP and LDAP
https://www.garlic.com/~lynn/aadsm13.htm#1 OCSP and LDAP
https://www.garlic.com/~lynn/aadsm13.htm#2 OCSP value proposition
https://www.garlic.com/~lynn/aadsm13.htm#3 OCSP and LDAP
https://www.garlic.com/~lynn/aadsm13.htm#4 OCSP and LDAP
https://www.garlic.com/~lynn/aadsm13.htm#6 OCSP and LDAP
https://www.garlic.com/~lynn/aadsm13.htm#7 OCSP and LDAP
https://www.garlic.com/~lynn/aepay10.htm#31 some certification & authentication landscape summary from recent threads
https://www.garlic.com/~lynn/aepay10.htm#34 some certification & authentication landscape summary from recent threads
https://www.garlic.com/~lynn/aepay10.htm#37 landscape & p-cards
https://www.garlic.com/~lynn/aepay10.htm#75 Invisible Ink, E-signatures slow to broadly catch on (addenda)
https://www.garlic.com/~lynn/98.html#0 Account Authority Digital Signature model
https://www.garlic.com/~lynn/98.html#41 AADS, X9.59, & privacy
https://www.garlic.com/~lynn/2000.html#37 "Trusted" CA - Oxymoron?
https://www.garlic.com/~lynn/2000.html#42 "Trusted" CA - Oxymoron?
https://www.garlic.com/~lynn/2000b.html#92 Question regarding authentication implementation
https://www.garlic.com/~lynn/2001.html#67 future trends in asymmetric cryptography
https://www.garlic.com/~lynn/2001c.html#8 Server authentication
https://www.garlic.com/~lynn/2001d.html#7 Invalid certificate on 'security' site.
https://www.garlic.com/~lynn/2001d.html#8 Invalid certificate on 'security' site.
https://www.garlic.com/~lynn/2001e.html#46 Can I create my own SSL key?
https://www.garlic.com/~lynn/2001g.html#68 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001h.html#3 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001m.html#37 CA Certificate Built Into Browser Confuse Me
https://www.garlic.com/~lynn/2001m.html#41 Solutions to Man in the Middle attacks?
https://www.garlic.com/~lynn/2002e.html#56 PKI and Relying Parties
https://www.garlic.com/~lynn/2002k.html#8 Avoiding JCL Space Abends
https://www.garlic.com/~lynn/2002m.html#64 SSL certificate modification
https://www.garlic.com/~lynn/2002m.html#65 SSL certificate modification
https://www.garlic.com/~lynn/2002n.html#40 Help! Good protocol for national ID card?
https://www.garlic.com/~lynn/2002o.html#56 Certificate Authority: Industry vs. Government
https://www.garlic.com/~lynn/2002o.html#57 Certificate Authority: Industry vs. Government
https://www.garlic.com/~lynn/2002p.html#11 Cirtificate Authorities 'CAs', how curruptable are they to
https://www.garlic.com/~lynn/2002p.html#21 Cirtificate Authorities 'CAs', how curruptable are they to
https://www.garlic.com/~lynn/2002p.html#22 Cirtificate Authorities 'CAs', how curruptable are they to

--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

A challenge

Refed: **, - **, - **
From: Lynn Wheeler
Date: 02/16/2003 05:56 AM
To: Al Arsenault <awa1@xxxxxxxx>
cc: Anders Rundgren <anders.rundgren@xxxxxxxx>, epay@xxxxxxxx,
ietf-pkix@xxxxxxxx, owner-ietf-pkix@xxxxxxxx,
    Torsten Hentschel <the@xxxxxxxx>
Subject: Re: A challenge
somewhat addenda to the question about '83 OSI .... while iso/x3s3.3 couldn't have a work item for a standard that violated OSI (i.e. HSP, high-speed protocol .... direct from level4/transport to the LAN/MAC interface, which sits in the middle of level3/network) .... x3s3.3 could have a work item that studied HSP.

note that IP also violated OSI ... since it is a layer that sits between the level4/transport layer and the level3/network layer .... the inter-networking layer. however, IETF doesn't have any qualms about having IP interface directly to LAN/MAC.

as before ... internet archeological references from 82/83:
https://www.garlic.com/~lynn/rfcietf.htm#history 20th anniversary of the internet!
https://www.garlic.com/~lynn/2002p.html#38 20th anniversary of the internet
https://www.garlic.com/~lynn/2002p.html#39 20th anniversary of the internet
https://www.garlic.com/~lynn/2002q.html#22 20th anniversary of the internet

and a little more thread drift, a recent archeological reference to something else that i was doing about that time
https://www.garlic.com/~lynn/2003c.html#75 The relational model and relational algebra - why did SQL become the industry standard?

and the whole thing about the thread connecting loosely-coupled, sysplex, high availability, cluster, supercomputers, and electronic commerce:
https://www.garlic.com/~lynn/2001i.html#52

--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

Encryption of data in smart cards

From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Date: Wed, 12 Mar 2003 11:50:06 -0700
To: raghu@xxxxxxxx
Subject: Re: Encryption of data in smart cards
Cc: cryptography@xxxxxxxx
At 10:39 AM 3/11/2003 +0530, N. Raghavendra wrote:
Can anyone point me to sources about encryption of data in smart cards. What I am looking for is protocols for encrypting sensitive data (e.g., medical information about the card-holder), so that even if the card falls into malicious hands, it won't be easy to read that data.

a lot of cards use derived (symmetric) keys ... similar to the derived-key-per-transaction X9 standards. they are used to protect data from outside examination and, in multi-function cards, to provide protection domains between the different applications on a card.

typically there is a system wide key that you would find in a secure terminal (like transit systems) that reads data, decrypts it, updates it, re-encrypts it, and writes it back to the card. this handles situations involving attacks with fraudulent readers that load fraudulent value on the card. given the possibility of a brute force attack on the infrastructure (aka getting the data out of one card, and finding the master system key) ... many systems go to some form of derived keys. They typically amount to a one-way function that combines the system-wide key with something like an account number from the card, resulting in the derived key. A brute force attack on the card data .... will only result in obtaining the card-specific, derived key .... and not the system-wide master key. All secured readers, knowing the system wide key and some card identification, can always calculate the derived key for a card.
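
as a rough illustration of the derived key idea (a sketch only; HMAC-SHA256 is my stand-in choice of one-way function here ... not necessarily what any particular X9 standard or card system actually uses):

    import hmac, hashlib

    def derive_card_key(master_key: bytes, account_number: str) -> bytes:
        # one-way function combining the system-wide key with per-card data;
        # brute-forcing one card's data yields only that card's derived key,
        # never the system-wide master key
        return hmac.new(master_key, account_number.encode(), hashlib.sha256).digest()

    # any secure reader holding the master key can recompute a card's key
    master = b"system-wide key held only in secure terminals"
    card_key = derive_card_key(master, "4000123412341234")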

misc. derived key stuff ...
https://www.garlic.com/~lynn/aadsm3.htm#cstech8 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aepay10.htm#33 pk-init draft (not yet a RFC)
https://www.garlic.com/~lynn/2002e.html#18 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002f.html#22 Biometric Encryption: the solution for network intruders?

--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

Certificate Policies (was Re: Trivial PKI Question)

From: Lynn Wheeler
Date: 03/13/2003 10:30 AM
To: Al Arsenault <awa1@xxxxxxxx>
cc: ietf-pkix@xxxxxxxx, Margus Freudenthal <margus@xxxxxxxx>
Subject: Re: Certificate Policies (was Re: Trivial PKI Question)
a similar argument was used with regard to plastic cards turned smartcards .... however the most common failure was a lost/stolen wallet containing all such cards. there was absolutely no difference in the management issues whether there was a single card or multiple cards .... for the most common failure/exploit.

postulated was that the next most common failure (fraud, exploit, availability, etc) might be hardware failure. however, the hardware failure statistics didn't mandate a one-for-one mapping between token & function .... they would just indicate that some might want no-single-point-of-failure (two tokens, or at most three).

I would assert that if you need/require multiple certificates .... there is effectively nearly the same management process ... regardless of whether a single key or multiple keys are involved. You have to have a mapping from each key to its certificate(s) .... whether it is one-to-one or one-to-many .... and you have to have a notification process for each certificate. The issue then isn't the management of the information .... it is just how many might have to be notified for each key compromise ... not the management problem of keeping track of all that might have to be notified. It is slightly different if the information hasn't been managed and it is necessary to reconstruct it after a compromise ... then, if there is only a one-to-one mapping .... the scope of reconstruction may not be as bad ... since the search for the key-to-certificate mapping stops after the correct certificate has been identified ... and the search for the notification process stops after the process for the specific certificate is found.
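
a minimal sketch of that bookkeeping (all names hypothetical): whether the mapping is one-to-one or one-to-many, the management process is the same lookup followed by notification for each certificate found:

    from collections import defaultdict

    # mapping from key fingerprint to the certificate(s) bound to that key
    certs_for_key = defaultdict(list)
    certs_for_key["key-fp-1"].append("cert-A")   # one-to-one ...
    certs_for_key["key-fp-1"].append("cert-B")   # ... or one-to-many, same process

    def on_key_compromise(key_fp: str) -> None:
        # identical management step either way: walk the mapping and run the
        # notification/revocation process for every certificate found
        for cert in certs_for_key.get(key_fp, []):
            print(f"notify/revoke {cert}")

    on_key_compromise("key-fp-1")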

the issue with regard to multiple or single key compromise .... would be whether the compromise modes have common components. For instance, are all private keys carried in the same hardware token or the same encrypted file? If the most common compromise/failure mode for keys .... is a common multiple-key failure ... aka an attack on an encrypted file containing all private keys .... then all certificates have to be revoked .... regardless of whether there is a one-to-one policy or a one-to-many policy (aka similar to the most common failure mode for cards .... the loss/theft of the wallet/purse ... where all cards are taken .... and there is no differentiation in this failure mode whether there were single or multiple cards .... all cards fail). semantic note: common is used in the sense of most frequent ... as well as in the sense of affecting all keys.

I would also strongly assert that many policies are left over from shared-secret key policies .... each infrastructure requiring a unique key because of vulnerabilities specifically associated with shared-secret keys.

I would contend that many of the vulnerabilities changed significantly in the transition from a shared-secret key infrastructure to a public key infrastructure .... and the vulnerability thresholds needed for various organizations would still be met if the same public key were used in different infrastructures ... aka many infrastructures never bothered really redoing the failure/vulnerability analysis. Possibly, in the transition from shared-secret to public key, some bureaucrat just says that there is a policy regarding keys .... and of course all keys are the same. Bureaucratic policies have a life of their own.

--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

Al Arsenault on 3/13/2003 7:40 am wrote:
><snip>
> When using multiple CA-s, what prevents you from issuing multiple
> certificates to the same key?

From a technical standpoint, typically nothing prevents this. It's not commonly done because:

a. There's more of a management problem; e.g., if the key is ever compromised for whatever reason, you have to track down ALL of the certificates it was bound to and revoke them; and

b. Policies typically restrict it.

But it could easily be done (and has in some specialized cases).

Al Arsenault
Chief Security Architect
Diversinet Corp.


Encryption of data in smart cards

From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Date: Thu, 13 Mar 2003 14:08:04 -0700
To: John Kelsey <kelsey.j@xxxxxxxx>
Subject: Re: Encryption of data in smart cards
Cc: Krister Walfridsson <cato@xxxxxxxx>,
Werner Koch <wk@xxxxxxxx>, cryptography@xxxxxxxx
At 01:13 PM 3/13/2003 -0500, John Kelsey wrote:
At 11:08 PM 3/12/03 +0100, Krister Walfridsson wrote:
>...
>This is not completely true -- I have seen some high-end cards that use
>the PIN code entered by the user as the encryption key. And it is quite
>easy to do similar things on Java cards...

With any kind of reasonable PIN length, though, this isn't all that helpful, because of the small set of possible PINs. And smartcards don't generally have a lot of processing power, so making the PIN->key mapping expensive doesn't help much, either.

/Krister

--John Kelsey, kelsey.j@xxxxxxxx


note however, that a PIN could possibly be used in an infrastructure with a real secret key and encryption done with a derived key. the derived-key one-way function is attempting to protect the infrastructure-wide secret key from a brute force key search on a specific piece of data. The issue is how many bits of PIN are required to protect the secret key in a one-way function (involving the secret key and the PIN). A simple derived key is sufficient using the secret key and the public account number. Adding a (privately known, card specific) PIN to such a derived key function (see the sketch after the list):

1) doesn't increase the ease of attack on the secret key

2) doesn't affect brute force attack on the derived key

3) makes it harder to use a lost/stolen card
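
continuing the earlier derived key sketch (again only an illustration, with HMAC-SHA256 standing in for the one-way function), the PIN simply becomes additional input to the derivation:

    import hmac, hashlib

    def derive_card_key(master_key: bytes, account_number: str, pin: str) -> bytes:
        # the PIN is extra input to the same one-way function: it doesn't make
        # attacking the master key any easier (1), doesn't change the brute
        # force cost of the derived key itself (2), but a lost/stolen card
        # without the PIN can no longer reproduce the correct derived key (3)
        data = (account_number + ":" + pin).encode()
        return hmac.new(master_key, data, hashlib.sha256).digest()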

--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

Certificate Policies (addenda)

Refed: **, - **, - **, - **
From: Lynn Wheeler
Date: 03/13/2003 02:41 PM
To: ietf-pkix@xxxxxxxx
cc: Al Arsenault <awa1@xxxxxxxx>, Margus Freudenthal <margus@xxxxxxxx>
Subject: Re: Certificate Policies (addenda)
note something related was discussed in sci.crypt regarding certification of quality:
https://www.garlic.com/~lynn/2003d.html#71 SSL/TLS DHE suites and short exponents

aka the CA basically certifies the validity of some assertion in the certificate. there has been little or no activity in the area of quality. One is tempted to mention the joke in the risks forum this week about the person lost in a balloon
http://catless.ncl.ac.uk/Risks/22.63.html

we had been somewhat involved in the most prevalent certification in the world today ... aka SSL domain name certificates for e-commerce:
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

at the time, that included having to perform due diligence visits on the major certification players for SSL domain name certificates for e-commerce.

we strived to get some quality issues introduced into the certification process with no success.

a significant issue is/was that certificates are primarily a paradigm for offline, stale, static data. Risk and trust management has been moving the state-of-the-art to a timely, dynamic data paradigm .... and it is trivially shown that any environment that supports a timely, dynamic data paradigm ... also supports stale, static data as a subset. It wasn't so much that there weren't any players in the risk & trust management arena .... it was that they had just about all moved into a timely, dynamic data paradigm. While it is possible to prove that an infrastructure involving timely, dynamic data .... can support as a subset all the characteristics of stale, static data .... it is not possible to prove that an offline, stale, static paradigm subsumes timely, dynamic data .... aka in a paradigm with timely, dynamic data it is trivial to show that offline, stale, static certificates are redundant and superfluous.

By comparison, the certification authorities are just looking to certify some assertions regarding stale, static data (usually by checking with some totally different organization that is actually responsible for the accuracy of the assertions).

--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

How effective is open source crypto?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Date: Sat, 15 Mar 2003 16:15:22 -0700
To: Ian Grigg <iang@xxxxxxxx>
Subject: Re: How effective is open source crypto?
Cc: cryptography@xxxxxxxx
having worked on some of the early e-commerce/certificate stuff ... recent ref:
https://www.garlic.com/~lynn/aadsm13.htm#25 Certificate Policies (addenda)

the assertion is that the basic ssl domain name certificate exists so that the browser can check the domain name from the url typed in against the domain name in the presented (trusted) certificate ... and have some confidence that the browser is really talking to the server that it thinks it is talking to (based on some trust in the issuing certification authority). in that context ... self-certification is somewhat superfluous ... if you trust the site to be who they claim to be ... then you shouldn't even have to bother to check. that eliminates having to have a certificate at all ... just transmit a public key.

so a slight step up from MITM attacks with self-signed certificates would be to register your public key at the same time you register the domain. the browser gets the server's public key from dns at the same time it gets the ip-address (dns already supports binding of generalized information to a domain ... more than a simple ip-address). this is my long, repetitive argument about ssl domain name certification ....
https://www.garlic.com/~lynn/subpubkey.html#sslcerts
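
a sketch of what the client-side lookup might look like (purely illustrative: no standard DNS record type is assumed here ... a TXT record carrying a base64 key, queried via the dnspython library, is used just to make the idea concrete):

    import base64
    import dns.resolver   # dnspython; pip install dnspython

    def lookup_server(domain: str):
        # one DNS conversation returns both the ip-address(es) and the
        # public key registered for the domain name
        ips = [r.address for r in dns.resolver.resolve(domain, "A")]
        pubkey = None
        for record in dns.resolver.resolve(domain, "TXT"):
            text = b"".join(record.strings).decode()
            if text.startswith("pubkey="):
                pubkey = base64.b64decode(text[len("pubkey="):])
        return ips, pubkey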

i believe a lot of the non-commercial sites have forgone SSL certificates .... because of the cost and bother.

some number of the commercial sites that utilize SSL certificates .... only do it as part of a financial transaction (and lots of them .... when it is time to "check-out" .... actually transfer to a 3rd party service site that specializes in SSL encryption and payments). The claim by many for some time .... is that given the same exact hardware .... they can do 5-6 times as many non-SSL (non-encrypted) HTTP transactions as they can do SSL (encrypted) HTTPS transactions .... aka they claim an 80 to 90 percent hit to the number of transactions that can be done switching from HTTP to HTTPS.

a short version of the SSL server domain name certificate story is worry about attacks on the domain name infrastructure that can route somebody to a different server. so the SSL certificate is checked to see if the browser is likely talking to the server it thinks it is talking to. the problem is that if somebody applies for an SSL server domain name certificate .... the CA (certification authority) has to check with the authoritative agency for domain names .... to validate the applicant's domain name ownership. The authoritative agency for domain names is the domain name infrastructure, which has all the integrity concerns giving rise to the need for SSL domain name certificates. So there is a proposal for improving the integrity of the domain name infrastructure (in part backed by the CA industry ... since the CA industry is dependent on the integrity of the domain name infrastructure for the integrity of the certificates) which includes somebody registering a public key at the same time as a domain name. So we are in a catch-22 ....

1) improving the overall integrity of the domain name infrastructure mitigates a lot of the justification for having SSL domain name certificates (sort of a catch-22 for the CA industry).

2) registering a public key at the same time as domain name infrastructure ... implies that the public key can be served up from the domain name infrastructure (at the same time as the ip-address .... eliminating all need for certificates).

There is a description of doing an SSL transaction in a single round trip. The browser contacts the domain name system and gets back in a single transmission the 1) public key, 2) preferred server SSL parameters, 3) ip-address. The browser selects the SSL parameters, generates a random secret key, encrypts the HTTP request with the random secret key, encrypts the random secret key with the public key ... and sends off the whole thing in a single transmission .... eliminating all of the SSL protocol back&forth setup chatter. The browser had to contact the domain name system in any case to get the ip-address .... the change allows the browser to get back the rest of the information in the same transmission.
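
a sketch of that single transmission (illustrative assumptions flagged: RSA-OAEP plus AES-GCM are my algorithm choices, via the Python 'cryptography' package ... the description above specifies no algorithms):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    server_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    server_pub = server_priv.public_key()   # in the scheme, fetched via DNS

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # --- client: everything goes out in one flight ---
    session_key = AESGCM.generate_key(bit_length=128)   # fresh random secret key
    nonce = os.urandom(12)
    request = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
    ciphertext = AESGCM(session_key).encrypt(nonce, request, None)
    wrapped_key = server_pub.encrypt(session_key, oaep)
    message = (wrapped_key, nonce, ciphertext)          # the single transmission

    # --- server: unwrap the key, then the request ---
    key = server_priv.decrypt(message[0], oaep)
    plaintext = AESGCM(key).decrypt(message[1], message[2], None)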

--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

How effective is open source crypto?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Date: Sun, 16 Mar 2003 10:17:39 -0700
To: EKR <ekr@xxxxxxxx>
Subject: Re: How effective is open source crypto?
Cc: Anne & Lynn Wheeler <lynn@xxxxxxxx>,
    Ian Grigg <iang@xxxxxxxx>, cryptography@xxxxxxxx
At 08:40 AM 3/16/2003 -0800, Eric Rescorla wrote:
You still need a round trip in order to prevent replay attacks. The fastest that things can be while still preserving the security properties of TLS is:

ClientHello       ->
ClientKeyExchange ->
Finished          ->
<-  ServerHello
<-  Finished
Data              ->

See Boneh and Schacham's "Fast-Track SSL" paper in Proc.ISOC NDSS 2002 for a description of a scheme where the client caches the server's parameters for future use, which is essentially isomorphic to having the keys in the DNS as far as the SSL portion goes.

In any case, the optimization you describe provides almost no performance improvement for the server because the load on the server derives almost entirely from the cryptography, not from transmitting the ServerHello [0]. What it does is provide reduced latency, but this is only of interest to the client, not the server, and really only matters on very constrained links.

-Ekr

[0] With the exception of the ephemeral modes, but they're simply impossible in the scheme you describe.


Sorry, there were two pieces being discussed.

The part about SSL being a burden/load on servers ....

and the shortened SSL description taken from another discussion. The shortened SSL description was (in fact) from a discussion of round-trips and latency ... not particularly burden on the server. In the original discussion there was mention that HTTP requires TCP setup/teardown, which is a minimum seven packet exchange .... and any HTTPS chatter is in addition to that. VMTP, from rfc1045, is a minimum five packet exchange, and XTP is a minimum three packet exchange. A cached/dns SSL is still a minimum seven packet exchange done over TCP (although XTP would reduce that to a three packet exchange).

So what kind of replay attack is there? Looking purely at e-commerce ... there is no client authentication. Also, since the client always chooses a new, random key .... there is no replay attack on the client ... since the client always sends something new (a random key) every time. That just leaves replay attacks on the server (repeatedly sending the same encrypted data).

As a follow-up to doing the original e-commerce stuff ... we then went on to look at existing vulnerabilities and solutions .... and (at least) the payment system has other methods already in place with regard to getting duplicate transactions .... aka the standard for all payments (credit, debit, stored-value, etc) in all (electronic) environments (internet, point-of-sale, self-serve, face-to-face, etc), X9.59
https://www.garlic.com/~lynn/x959.html#x959 (standard)
https://www.garlic.com/~lynn/x959.html#aadsnacha (debit/atm network pilot)

Replay of a simple information retrieval isn't particularly serious except as DOS .... but serious DOS can be done whether the flooding is done with encrypted packets or non-encrypted packets. Another replay attack is transaction based ... where each transaction represents something like performing a real world transaction (send a shirt and debit an account). If it actually involves payment ... the payment infrastructure has provisions in place to handle repeats/replays and will reject them. So primarily what is left .... are simple transaction oriented infrastructures that don't have their own mechanism for detecting replays/repeats and are relying on SSL.

I would also contend that this is a significantly smaller exposure than self-signed certificates.

--
Anne & Lynn Wheeler https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

How effective is open source crypto? (addenda)

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Date: Sun, 16 Mar 2003 10:40:52 -0700
To: EKR <ekr@xxxxxxxx>
Subject: Re: How effective is open source crypto? (addenda)
Cc: Anne & Lynn Wheeler <lynn@xxxxxxxx>,
    Ian Grigg <iang@xxxxxxxx>, cryptography@xxxxxxxx
... small side-note .... part of the x9.59 work for all payments in all environments .... was that the transaction system needed to be resilient to repeats and be done in a single round-trip (as opposed to the transport).

there needed to be transaction resiliency with respect to a single round trip over something like email, which might not happen in strictly real-time (extremely long round-trip delays).

Real-world systems have been known to have glitches ... order/transaction generation that accidentally repeats (regardless of whether or not transport is catching replay attacks).

--
Anne & Lynn Wheeler https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

How effective is open source crypto? (bad form)

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Date: Sun, 16 Mar 2003 11:26:59 -0700
To: EKR <ekr@xxxxxxxx>
Subject: Re: How effective is open source crypto? (bad form)
Cc: Anne & Lynn Wheeler <lynn@xxxxxxxx>,
Ian Grigg <iang@xxxxxxxx>, cryptography@xxxxxxxx
At 09:30 AM 3/16/2003 -0800, Eric Rescorla wrote:
Correct.

It's considered bad form to design systems which have known replay attacks when it's just as easy to design systems which don't. If there were some overriding reason why it was impractical to mount a defense, then it might be worth living with a replay attack. However, since it would have only a very minimal effect on offered load to the network and--in most cases--only a marginal effect on latency, it's not worth doing.

-Ekr

--
[Eric Rescorla ekr@xxxxxxxx]
http://www.rtfm.com/


so, let's look at the alternatives for servers that are worried about server replay attacks:

client has public key & crypto-preferred info (dns or cached), generates random secret key, encrypts request, encrypts random secret key, single transmission

server gets the request ... the application has opened the connection with or w/o the server replay-attack check. if the application (higher level protocol) has its own repeat checking .... it has opened the connection w/o the server replay-attack check, and the server sends the request up the stack to the application. If the application has opened the connection with the server replay-attack check, the protocol sends back some random data (aka its own secret) ... that happens to be encrypted with the random key.

The client is expecting either the actual response or the replay-attack check. If the client gets the actual response, everything is done. If the client gets back the replay-attack check .... it combines it with something .... and returns it to the server.

The difference is a basic two packet exchange (within the setup/teardown packet exchange overhead) plus an additional replay-prevention two packet exchange (if the higher level protocol doesn't have its own repeat handling). The decision as to whether it is a two packet exchange or a four packet exchange is made not by the client ... nor by the server ... but by the server application.

A simple example for e-commerce is sending a P.O. along with payment authorization ... the transmitted P.O. form is guaranteed to have a unique identifier. The P.O. processing system has logic for handling repeat P.O.s ... for numerous reasons (not limited to replay attacks).
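
a toy sketch of that higher-level repeat handling (names hypothetical): the unique P.O. identifier lets the application absorb replays regardless of what the transport does:

    processed = {}   # P.O. number -> result already produced

    def handle_purchase_order(po_number: str, order: dict) -> str:
        if po_number in processed:
            # replayed (or accidentally resubmitted) P.O.: return the prior
            # result rather than shipping/debiting a second time
            return processed[po_number]
        result = f"shipped order {po_number}"   # stand-in for real fulfillment
        processed[po_number] = result
        return result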

Single round-trip transaction:
ClientHello/Trans->
<- ServerResponse/Finish
Transaction w/replay challenge:
ClientHello/Trans->
<-Server replay challenge
ClientResp->
<-ServerResponse/Finish
Now, ClientHello/Trans can indicate whether the client is expecting a single round-trip or additional data.

Also, the ServerResponse can indicate whether it is a piggy-backed finish or not.

So, the vulnerability analysis is: what is the object of the replay attack and what needs to be protected. I would contend that the object of the replay attack isn't directly the protocol, the server, or the system .... but the specific server application. The problem, of course, is that with a generic webserver (making the connection) there might be a couple levels of indirection between the webserver specifying the connection parameters and the actual server application (leading to webservers always specifying the replay challenge option).

--
Anne & Lynn Wheeler https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

How effective is open source crypto? (aads addenda)

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Date: Sun, 16 Mar 2003 13:48:02 -0700
To: EKR <ekr@xxxxxxxx>
Subject: Re: How effective is open source crypto? (aads addenda)
Cc: Anne & Lynn Wheeler <lynn@xxxxxxxx>,
    Ian Grigg <iang@xxxxxxxx>, cryptography@xxxxxxxx
we did something similar for AADS PPP Radius
https://www.garlic.com/~lynn/x959.html#aads

radius digital signature protocol has replay challenge.

so add a radius option to the webserver client authentication stub (the infrastructure can share common client authentication administration across all of its environments). then the client clicks on https client authentication, generates a secret random key, encrypts the request for client authentication with the random key, encrypts the random key with the server public key, and sends off a single transmission. The server responds with a radius connect request .... which includes a replay challenge value as part of the message (encrypted with the random key). The client responds with a digital signature on the server radius message (and some of its own data, encrypted with the random key).

Basically, use the same packet sequence as in a transaction w/o replay challenge ... since the higher level protocol contains the replay challenge. Then the same packet sequence can be used for webserver TLS and encrypted PPP (and works as a VPN; possibly it can also be defined as encrypted TCP) .... along with the same client authentication infrastructure

An infrastructure can use the same administration (RADIUS) infrastructure for all client authentication .... say an enterprise with both extranet connections and a webserver .... or an ISP that also supplies webhosting. The same administrative operation can be used to support client authentication at the PPP level as well as at the webserver level.

The same packet exchange sequence is used for both PPP level encryption with client authentication as well as TLS for webserver level encryption with client authentication.

The higher level application can decide whether it already has sufficient replay/repeat resistance or request replay/repeat resistance from lower level protocol.

So regardless of TLS, PPP, or TCP, client authentication goes as follows (using the same packet sequence as a transaction, w/o the lower level replay challenge; a sketch of steps 4-6 follows the list):

1) client picks up server public key and encryption options (from cache or DNS)

2) client sends off radius client authentication request, encrypted with random secret key, encrypted with server public key ...

3) server lower level protocol handles the decryption of the random secret key and the decryption of the client request (which happens to be radius client authentication .... but could be any other kind of transaction request) and passes up the decrypted client request

4) server higher level protocol (radius client authentication) responds with radius replay challenge

5) client gets the replay challenge, adds some stuff, digitally signs it and responds

6) server higher level radius client authentication protocol appropriately processes
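
a sketch of steps 4-6 (illustrative only: Ed25519 via the Python 'cryptography' package is my choice of signature scheme, and the actual RADIUS message formats are omitted):

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    client_priv = Ed25519PrivateKey.generate()
    registered_pub = client_priv.public_key()    # held in the RADIUS database

    challenge = os.urandom(16)                   # step 4: server replay challenge
    client_data = b"client-added-data"           # step 5: client adds some stuff
    signature = client_priv.sign(challenge + client_data)

    try:                                         # step 6: server verifies
        registered_pub.verify(signature, challenge + client_data)
        print("client authenticated")
    except InvalidSignature:
        print("reject")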


Same server public key initial connect code works at TLS, PPP, and possibly TCP protocol levels. The same server public key initial connect code supports both lower-level replay challenge and no replay challenge.

Same radius client authentication works at TLS, PPP, and possibly TCP protocol levels. Same client administrative processes works across the whole environment.

aka .... the radius client authentication protocol is just another example (like the purchase order example) of the higher level protocol having its own replay/repeat handling infrastructure (whether it is something like log checking or its own replay challenge).

--
Anne & Lynn Wheeler https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

How effective is open source crypto? (bad form)

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Date: Sun, 16 Mar 2003 15:21:56 -0700
To: EKR <ekr@xxxxxxxx>
Subject: Re: How effective is open source crypto? (bad form)
Cc: Anne & Lynn Wheeler <lynn@xxxxxxxx>,
        Ian Grigg <iang@xxxxxxxx>, cryptography@xxxxxxxx
Eric Rescorla wrote:
You've already missed the point. SSL/TLS is a generic security protocol. As such, the idea is to push all the security into the protocol layer where possible. Since, as I noted, the performance improvement achieved by not doing so is minimal, it's better to just have replay protection here.

-Ekr

--
[Eric Rescorla ekr@xxxxxxxx]
http://www.rtfm.com/


well, i was looking at it as a protocol specification analysis .... not just as an SSL/TLS solution. the other common reaction has been that it isn't necessary to consider the optimal transaction case (which may be a significant percentage of the cases) because the underlying TCP is already doing a minimum 7-packet exchange. And then the response is, well, we don't need no stinking, optimized, reliable transport ... because the upper layers have such horrible packet chatter.

so traditionally, from a protocol specification analysis standpoint .... it is bad to jumble/mix up objectives. objectives should be clearly delineated. nominally SSL is 1) server authentication, 2) encrypted channel, 3) replay-attack protection .... all mushed together.

so I would claim that I was clearly delineating and calling out the separate characteristics .... as well as doing a protocol analysis that could be a common, generic security solution (code & packet exchange) ... whether at the TLS level, the TCP level, or the PPP level.

https://www.garlic.com/~lynn/aadsm13.htm#30 How effective is open source crypto? (aads addenda)

We ran into some of this on the original SSL e-commerce activity (on the backend for the original payment gateway).... needing to deploy symmetrical authentication .... even tho the specification and code didn't exist:
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

--
Anne & Lynn Wheeler https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

How effective is open source crypto? (bad form)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Date: Sun, 16 Mar 2003 16:06:44 -0700
To: EKR <ekr@xxxxxxxx>
Subject: Re: How effective is open source crypto? (bad form)
Cc: Anne & Lynn Wheeler <lynn@xxxxxxxx>,
    Ian Grigg <iang@xxxxxxxx>, cryptography@xxxxxxxx
At 02:40 PM 3/16/2003 -0800, Eric Rescorla wrote:
analysis of SSL performance makes pretty clear that's not the relevant issue 99% of the time. And since you propose to impose a significant dependency on DNS in order to fix that non-problem, that analysis is particularly relevant.

Premature optimization is a bad thing.


nope .... actually i was pointing out that the server authentication part of SSL .... which is somewhat where the original comment about self-signed certificates came in (in the original post starting this thread) .... was in response to integrity concerns with the domain name infrastructure.

however, the authoritative agency for domain name ownership is the domain name infrastructure. Typically when somebody submits a request to a certification authority for an SSL domain name certificate, the certification authority validates the request by contacting the domain name infrastructure to validate that the person requesting the SSL domain name certificate owns the domain name (aka that is how certification is done .... a certification authority validates some assertion that goes into a certificate with the authoritative agency responsible for the validity of the information).

so the CA industry has also somewhat noted that there may be integrity issues with the domain name infrastructure .... and one of the suggestions is to have a public key registered as part of registering the domain name .... in order to improve the integrity of the domain name infrastructure .... so that the CA industry can rely on the integrity of the domain name infrastructure when certifying and issuing SSL domain name certificates.

So a simple observation is that

1) if the CA industry wants public keys registered as part of domain name registration in order to improve the integrity of the domain name infrastructure for use by the CA industry

2) improvements in the integrity of the domain name infrastructure for the CA industry .... actually improve the integrity of the domain name infrastructure for everybody.

3) if the integrity of the domain name infrastructure is improved for everybody .... the concerns about the integrity of the domain name infrastructure giving rise to the requirement for SSL domain name certificates are reduced.

4) if the concerns driving the need for SSL domain name certificates are significantly reduced, then it is hard to financially justify a trusted SSL domain name certificate infrastructure

5) if the financial justification for a trusted SSL domain name certificate infrastructure is eliminated ... then how do you address the other facet of SSL, which uses the certificate public key as part of random secret key exchange

6) so there now needs to be some trusted public key distribution that doesn't rely on a trusted SSL domain name certificate infrastructure

7) well, if there is now public key registration as part of domain name registration, and the integrity issues of the domain name infrastructure with regard to trusted information are supposedly addressed, then would it be possible to use trusted domain name infrastructure information distribution for public key distribution also

8) if you were using DNS for trusted information distribution (ip-address, public key, server characteristics), what other opportunities are there for enhancing the establishment of encrypted information exchange.


This didn't start from the standpoint of optimizing SSL.

This started from the standpoint that some facets of the CA industry have been advocating addressing some of the issues that give rise to needing SSL domain name certificates. There is this chicken&egg or catch-22 for the CA industry with respect to SSL domain name certificates.

1) There have to be enough integrity concerns regarding the domain name infrastructure to establish the financial justification for doing the whole SSL domain name certificate certification process.

2) However, the CA industry is dependent on the integrity of the domain name infrastructure in order to perform the certification process for trusted SSL domain name certificates.

3) If the CA industry sowed the seeds for eliminating the financial basis for trusted SSL domain name certificates (eliminating people worrying about which server am i really talking to; aka the server authentication) .... how is trusted public key distribution accomplished so that the encrypted channel part of SSL is still accomplished?

4) if the domain name infrastructure was now trusted for information distribution and the domain name infrastructure had public keys registered (as per the CA industry suggestion) .... then could the domain name infrastructure be used for trusted public key distribution?

5) if the domain name infrastructure was used for trusted public key (and other information) distribution, what other changes could be made in the protocol?


--
Anne & Lynn Wheeler https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

How effective is open source crypto? (bad form)

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Date: Sun, 16 Mar 2003 16:52:08 -0700
To: EKR <ekr@xxxxxxxx>
Subject: Re: How effective is open source crypto? (bad form)
Cc: Anne & Lynn Wheeler <lynn@xxxxxxxx>,
Ian Grigg <iang@xxxxxxxx>, cryptography@xxxxxxxx
At 03:21 PM 3/16/2003 -0800, Eric Rescorla wrote:
Yes, you've described this change before. And as I've said before, it's a very marginal improvement in protocol performance with the massive downside that it introduces a dependency on the (nonexistent) secure DNS infrastructure.

-Ekr

--
[Eric Rescorla ekr@xxxxxxxx]
http://www.rtfm.com/


however ... note that the whole current CA SSL domain name certificate business .... is based on people being worried about the integrity of the domain name infrastructure. The whole SSL domain name certificate house of cards is based on the very same domain name infrastructure .... that supposedly justifies having SSL domain name certificates .... it is just that those facts are obfuscated by a whole lot of protocol chatter and crypto mumbo-jumbo.

1) the original post in this thread was about self-signed certificates

2) with regard to server authentication, if you can't rely on the server to correctly tell the client who the server is, then the client surely can't rely on the server's self-signed certificate to tell the client who the server is.

3) a big part of the financial justification for paying money for trusted SSL domain name certificates is some confidence that the client is talking to the correct server.

4) a CA, in order to prove that the entity requesting an SSL domain name certificate actually owns the domain name in question, has to contact the authoritative agency for domain names, the domain name infrastructure.

5) if the domain name infrastructure has significant integrity issues, why would a CA bother to verify that the entity requesting a specific domain name actually owns that domain name.

6) the CA industry can either

a) stop bothering to contact the domain name infrastructure .... just immediately issue the SSL domain name certificate

b) try and improve the integrity of the domain name infrastructure (aka like suggestion for adding public keys)


so i somewhat interpret your comments as: we should only consider changing what is currently being done .... so long as it doesn't impact the current protocol chatter. any discussion about self-signed certificates is perfectly valid .... because the SSL protocol chatter wouldn't have to change .... ignoring the fact that it raises some question as to the validity of the SSL server authentication function.

I just claim that the existing SSL domain name certificates actually have similar issues with regard to the validity of the SSL server authentication function ... because the CA certification function for producing SSL domain name certificates is dependent on asking the domain name infrastructure. There may be exactly the same integrity issues with regard to data in a self-signed certificate and a CA-issued certificate .... it is just that the CA-issued certificate has a whole lot of business processes between the certificate creation and the actual point of the integrity problems (tending to obfuscate the issues).

The ancillary claim is that some in the CA industry actually understand this .... therefore the proposal that public keys be registered with the domain name infrastructure at the same time that a domain name is registered.

--
Anne & Lynn Wheeler https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

How effective is open source crypto? (bad form)

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Date: Mon, 17 Mar 2003 07:41:01 -0700
To: EKR <ekr@xxxxxxxx>
Subject: Re: How effective is open source crypto? (bad form)
Cc: Anne & Lynn Wheeler <lynn@xxxxxxxx>,
Ian Grigg <iang@xxxxxxxx>, cryptography@xxxxxxxx
At 10:30 PM 3/16/2003 -0800, Eric Rescorla wrote:
The problem is that you're acting as if the "domain name infrastructure" is a unitary thing. It's not. The problem is that the DNS is untrustworthy, not that the information on which it's based is untrustworthy. However, Verisign, at least, has access to the original data and doesn't have to worry about DNS tampering, so they are perfectly capable of issuing certificates based on valid data.

-Ekr

--
[Eric Rescorla ekr@xxxxxxxx]
http://www.rtfm.com/


the exploit/vulnerability that is of concern to the CA-industry is an exploit/vulnerability of the DNS database entry itself. Once the database entry has been compromised .... it makes little difference whether there is direct access to the database entry .... or some other kind of transaction against the database. I assert that once the exploit has happened, the results seen by Verisign and any other CA vendor .... referencing the same domain entry .... will be the same .... independent of whether Verisign has access to the original data or not.

the countermeasure (that i've repeatedly mentioned ... that is somewhat motivated by the CA-industry .... regardless of whether there is direct access to the original data or not) of registration of a public key in the database entry, at the same time the domain name database entry is created, is to address the exploit/vulnerability against the database entry.

the point is that certificates are basically issued against the dns database entry ... it turns out in database usage .... once the database entry has been compromised .... and you have an infrastructure that builds something based on the database entry .... then it matters little how the database entry is accessed.

So DNS database entries are built by somebody, someplace contacting some company that performs registration of domain names. There are lots of those companies and they tend to be responsible for the database entry contents for the domain names that they register. Again this particular exploit/vulnerability isn't against the accessing of the database entry .... it is against the updating of the database entry .... and then later uses that are dependent on values in the database entry.

From the nature of the countermeasure involving registration of a public key in the database entry, somebody familiar with public key infrastructures might conjecture that the purpose of the public key registration is to authenticate subsequent updates to the database entry.

slightly related
https://www.garlic.com/~lynn/aadsm13.htm#8 OCSP and LDAP
https://www.garlic.com/~lynn/aadsm13.htm#9 OCSP and LDAP
https://www.garlic.com/~lynn/aadsm13.htm#21 A challenge
https://www.garlic.com/~lynn/aadsm5.htm#asrn2 Assurance, e-commerce, and some x9.59
https://www.garlic.com/~lynn/aadsm5.htm#asrn3 Assurance, e-commerce, and some x9.59

--
Anne & Lynn Wheeler https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

How effective is open source crypto? (bad form)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Date: Mon, 17 Mar 2003 08:59:44 -0700
To: EKR <ekr@xxxxxxxx>
Subject: Re: How effective is open source crypto? (bad form)
Cc: Anne & Lynn Wheeler <lynn@xxxxxxxx>,
Ian Grigg <iang@xxxxxxxx>, cryptography@xxxxxxxx
At 10:30 PM 3/16/2003 -0800, Eric Rescorla wrote:
The problem is that you're acting as if the "domain name infrastructure" is a unitary thing. It's not. The problem is that the DNS is untrustworthy, not that the information on which it's based is untrustworthy. However, Verisign, at least, has access to the original data and doesn't have to worry about DNS tampering, so they are perfectly capable of issuing certificates based on valid data.

-Ekr

--
[Eric Rescorla ekr@xxxxxxxx]
http://www.rtfm.com/


actually ... as i mentioned previously ... most of this is pretty much beside the point. even tho when we were doing the original e-commerce thing .... we had to go perform these due diligence visits with the major CAs .... with some in-depth review of the step-by-step process ... that isn't the major thrust of the original reply. We also knew a little about database operation ... as did the people that had come from a database vendor and were at this small client/server start-up building the commerce server.

in any case, from the original:
How effective is the SSL cert regime?

Last page showed 9,032,963 servers. This
page shows 112,153 servers using certs.


....
One of the points was the possibility of self-signed certs. As an aside, in the original post I raised the issue that more pervasive use of SSL .... could be based on a number of different things .... including things possibly not directly related to SSL-certs .... like the performance impact that performing the basic SSL operation has on a server.

So I made the claim that SSL provides
1) server authentication (& countermeasure for MITM exploits)
2) encrypted channel
3) replay challenge


I also asserted that a major motivation for financial transactions related to SSL domain name certificates was server authentication. And that if there were any significant change with regard to the justification for SSL domain name certificates ... it could have a significant downside effect on the SSL-cert regime.

So a purpose of the SSL-cert is basically to provide trusted public key distribution (bound to a domain name) which is used in both #1 and #2. Now, one of the original suggestions was the possibility of using self-signed certificates. Now if there is some issue of trusting that you are really talking to the server you think you are talking to (requiring server authentication), then self-signed certificates don't really help a lot.

An assertion is also that any improvements in various internet infrastructures (including the domain name infrastructure) ... could lower the concern regarding "am I talking to a counterfeit/wrong server", which results in lowering the number of business operations that feel they need an SSL-cert for standard business operation (or leads them to feel that the value of an SSL-cert is less than the going price for such things). For instance, the primary reason that the SSL-cert regime is as pervasive as it is .... is associated with credit card transactions (aka this thing we worked on, commonly referred to as e-commerce). Now SSL protects credit card numbers while in flight. However, we never actually saw a reported exploit against credit card numbers in flight. All the reported instances of major credit card exploits have to do with harvesting of credit card merchant files ... at rest at the merchant. So SSL has no effect on the major exploit.
https://www.garlic.com/~lynn/subintegrity.html#fraud

So there are integrity issues in the existing domain name infrastructure. However, as previously pointed out, there are integrity issues in the existing SSL-cert process because it is also dependent on the domain name infrastructure. A major road block to using the domain name infrastructure as trusted distribution of a public key bound to the domain name .... is the availability of the public key in the domain name infrastructure database. However, that barrier is addressed by a CA-industry proposal to have public keys registered in domain name entries.

So there are additional concerns about various other kinds of vulnerabilities and exploits of the domain name infrastructure, specifically in the process of propagation and distribution of the information (like DNS cache poisoning). So, as per the previous thread focusing specifically on the transition to a domain name infrastructure with higher integrity operations ... the public key that gets distributed by DNS can initially reside in encapsulated form in the database entry ... effectively as a mini-cert .... public key, domain name, digital signature. In the interim, until a more trusted domain name infrastructure exists .... rather than distributing bare public keys .... distribute encapsulated public keys.
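
a sketch of such a mini-cert (hypothetical encoding: Ed25519 via the Python 'cryptography' package, with the domain name infrastructure's own signing key standing in for whatever trust anchor would really be used):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    infra_priv = Ed25519PrivateKey.generate()    # infrastructure signing key
    server_priv = Ed25519PrivateKey.generate()   # key registered for the domain

    def make_mini_cert(domain: str, server_pub) -> tuple:
        # mini-cert = (domain name, public key, signature over the pair)
        pub = server_pub.public_bytes(Encoding.Raw, PublicFormat.Raw)
        return (domain, pub, infra_priv.sign(domain.encode() + pub))

    def verify_mini_cert(cert: tuple, infra_pub) -> bool:
        domain, pub, sig = cert
        try:
            infra_pub.verify(sig, domain.encode() + pub)
            return True
        except InvalidSignature:
            return False

    cert = make_mini_cert("example.com", server_priv.public_key())
    print(verify_mini_cert(cert, infra_priv.public_key()))   # True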

some past discussion of baby steps for the domain name infrastructure where, rather than distributing bare public keys as part of the dns protocol, the public keys are encapsulated in a mini-cert and digitally signed:
https://www.garlic.com/~lynn/aepay10.htm#81 SSL certs & baby steps
https://www.garlic.com/~lynn/aepay10.htm#82 SSL certs & baby steps (addenda)
https://www.garlic.com/~lynn/aepay10.htm#83 SSL certs & baby steps
https://www.garlic.com/~lynn/aepay10.htm#84 Invisible Ink, E-signatures slow to broadly catch on (addenda)
https://www.garlic.com/~lynn/aadsm12.htm#58 Time to ID Identity-Theft Solutions

also, as mentioned in the above ... the existing SSL-cert environment isn't even a real PKI ... because it doesn't do anything about the management and administration of the certificates .... like revocation. when we were originally performing the due diligence on the CAs as part of the original e-commerce effort, we coined the term "certificate manufacturing" to distinguish the SSL-cert environment from a real PKI. The domain name infrastructure does provide for timely management of its distributed information, something that the existing SSL-cert environment doesn't address.

as a side issue, X9.59 was specifically designed to address exploits of payment account numbers .... whether at rest (as in database entry) or in flight (as in flowing over the internet) for all kinds of electronic payments (credit, debit, e-check, stored-value) and all kinds of environment (face-to-face, non-face-to-face, point-of-sale, internet, unattended terminals, non-internet, etc).
https://www.garlic.com/~lynn/x959.html#x959

there is the possibility, with something like x9.59 that addresses all the exploits of payment account numbers, that there would be an erosion in the demand for existing SSL-certs, possibly putting at risk the financial stability of the existing independent SSL-cert operations. I assert that integrating trusted public key distribution into an existing information distribution infrastructure places it on a better financial basis.

--
Anne & Lynn Wheeler https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

How effective is open source crypto? (bad form)

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Date: Mon, 17 Mar 2003 09:48:31 -0700
To: EKR <ekr@xxxxxxxx>
Subject: Re: How effective is open source crypto? (bad form)
Cc: Anne & Lynn Wheeler <lynn@xxxxxxxx>,
        Ian Grigg <iang@xxxxxxxx>, cryptography@xxxxxxxx
At 07:42 AM 3/17/2003 -0800, Eric Rescorla wrote:
But it's not he one that's of concern on the normal Internet. On the normal Internet, there are a wide array of easy to mount DNS spoofing attacks that don't involve compromising the true server, just forging traffic.

-Ekr


also as per
https://www.garlic.com/~lynn/2003.html#49 InfiniBand Group Sharply, Evenly Divided

and related things:
https://www.garlic.com/~lynn/subintegrity.html#assurance
https://www.garlic.com/~lynn/subintegrity.html#fraud

that the whole internet is at risk because of various domain name infrastructure integrity issues ... and there is a need to improve the integrity of the domain name infrastructure .... regardless of the CA-industry issues and/or using it for trusted distribution of things other than the ip-address bound to a domain name.

as before, baby steps
https://www.garlic.com/~lynn/aepay10.htm#81 SSL certs & baby steps
https://www.garlic.com/~lynn/aepay10.htm#82 SSL certs & baby steps (addenda)
https://www.garlic.com/~lynn/aepay10.htm#83 SSL certs & baby steps
https://www.garlic.com/~lynn/aepay10.htm#84 Invisible Ink, E-signatures slow to broadly catch on (addenda)
https://www.garlic.com/~lynn/aadsm12.htm#58 Time to ID Identity-Theft Solutions

suggests that in the interim, encapsulated public key distribution can be used instead of plain, bare public key distribution. With sufficient improvement in the integrity of the domain name infrastructure with respect to trusted information distribution it might be possible to transition from mini-cert distribution to bare public key distribution.

--
Anne & Lynn Wheeler https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

How effective is open source crypto?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Date: Mon, 17 Mar 2003 17:06:49 -0700
To: Bill Stewart <bill.stewart@xxxxxxxx>
Subject: Re: How effective is open source crypto?
Cc: Anne & Lynn Wheeler <lynn@xxxxxxxx>,
   Ian Grigg <iang@xxxxxxxx>, cryptography@xxxxxxxx
At 09:58 AM 3/17/2003 -0800, Bill Stewart wrote:
The second half isn't true, because it assumes a 1:1 mapping between domain names and IP addresses, and it's really N:N, with N:1 and 1:N both common. Larger domains typically have multiple servers (and therefore multiple addresses), though you may be able to represent this in the reverse name space. Smaller domains are usually served by virtual hosts on big shared servers, so multiple domains have the same IP address, or even multiple pooled addresses. That not only means there's a security concern about domains on the same host, but it also makes the reverse namespace not very useful. You could have the reverse namespace belong to the hosting server, and have them issue certs to their customers, but it's still certs. Also, small domains often want to change servers, but don't want their certification hosed if the reverse lookups don't get changed at the same time as the forward.


... the second half only assumes that there are bindings to a domain name .... which is the same thing that happens in a SSL domain name certificate .... a binding between a domain name and a public key.

now dns allows the binding of one or more ip-addresses to the same domain name. SSL domain name certs sort of have the reverse .... they allow "wildcards" in domain name certs (aka browsers will check a SSL domain name cert with a domain name containing a wildcard specifier as being a fuzzy match against the URL that the browser used). In these scenarios .... multiple different servers have access to the same private key.

effectively what dns would be doing is transporting the information analog of a real-time cert (representing a real-time binding between the resolved host name and a public key) .... i.e. the binding of some domain name to a public key. in the stale, static cert business .... they need the wildcard because at the time in the distant past when the cert was created, the infrastructure might not be totally sure about all possible names that might be used. So the browser asks DNS: tell me what you know about a specific host/domain name. DNS sends back one or more ip-addresses (a-records), but if you play with dig or nslookup .... there can be other information bound with the domain name. So multiple different domain names may have similar/identical fields in the record .... like the person to contact. There is nothing preventing different domain name entries from having the same public key values (as appropriate).

So the browser check is: given the domain name (in the URL) and some public key bound to that domain name .... if the browser contacts any of the ip-addresses provided, using public key encryption .... and the server successfully responds, then obviously it must be the correct server, since otherwise it wouldn't have the correct private key. You just have to make sure that whenever anybody asks about a domain name to contact ... the public key that will be used by that specific server matches what is in the database entry.

Because the existing operation only has a single public key per domain name certificate (even if there are wildcards in the domain name and a large number of servers with the same public key), or a single server has a large number of different certificates .... possibly with the same public key ... it does simplify the mapping to DNS. The SSL-cert mapping to DNS doesn't need multiple public keys per domain name .... because the SSL-cert doesn't use a multiple-public-key mapping to a domain name.

It would get trickier if the SSL-cert mapping had multiple identical certificates that differed only in the public key field .... and there were multiple different servers for the same domain name ... each with its own unique certificate and unique key-pair (and each server reliably presents the correct SSL-cert that corresponds to the public key that it "owns"). It also gets tricky for any sort of SSL-cert caching .... since there are things that attempt to do server load balancing via A-record rotation ... but there is also router-based load-balancing that effectively does NAT ... with dynamic steering of new connections to the server with the fewest connections (which means that you can't guarantee you are talking to the same server given the same identical ip-address). In any case, all of the multi-machine server cases that I know of use a common key-pair across the server farm (and the same public key may or may not appear in different certificates .... with virtual domain hosting). I would be interested to know if there are server farms with multiple different SSL-certs with identical domain names and different public keys (with each server having its own unique key-pair) ... as opposed to multiple servers all using the same key-pair.

Now back to the SSL-cert scenario .... the browser must first contact the server, get back the server's cert, check to see if the server's cert contains anything that (closely) matches the domain name in the original URL, and then eventually attempt to communicate with the server using the public key from the SSL-cert (which is then somewhat similar to the above abbreviated scenario). Now if this were real PKI ... instead of just simple certificate manufacturing .... the browser would also have to check the trust chain and any CRLs related to the server's certificate. All that gorpy stuff goes away .... if you are using a real-time distribution mechanism from a trusted source .... rather than a stale, static distribution mechanism from an unknown source (aka the webserver).
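
for comparison, the existing browser-side flow in miniature, using python's standard ssl module .... note that a default context verifies the chain and host name but does not fetch CRLs by default, consistent with the certificate manufacturing point above:

    # the existing browser-side flow in miniature, using python's
    # standard ssl module; create_default_context() verifies the cert
    # chain against trusted roots and checks the host name, but does
    # not fetch CRLs by default
    import socket, ssl

    host = "www.example.com"
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print(tls.getpeercert()["subject"])  # the stale, static cert fields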

now when we were doing this thing with the small client/server startup for their commerce server and what would come to be called e-commerce .... we did have this small problem with getting them to support multiple a-record stuff. Since we effectively had sign-off on the server-to-payment-gateway piece .... we could mandate multiple a-record support (along with mutual SSL authentication, and some number of other integrity things) ... however, even given classes on multiple a-record support along with example code .... it was probably another year before the client code supported it.
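
in today's terms, client-side "multiple a-record support" amounts to something like the following minimal python sketch (python's socket.create_connection now does this iteration internally; the explicit loop is just for illustration):

    # try each address returned for the name until one accepts the
    # connection -- the essence of "multiple a-record support"
    import socket

    def connect_any(host: str, port: int, timeout: float = 5.0) -> socket.socket:
        last_err = None
        for *_, sockaddr in socket.getaddrinfo(host, port,
                                               type=socket.SOCK_STREAM):
            try:
                return socket.create_connection(sockaddr[:2], timeout=timeout)
            except OSError as err:
                last_err = err          # this address failed; try the next one
        raise last_err or OSError("no addresses found for %s" % host)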

work with small client/server startup on this thing called SSL and e-commerce:
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

some discussion about the multiple a-record trials and tribulations:
https://www.garlic.com/~lynn/96.html#34 Mainframes & Unix
https://www.garlic.com/~lynn/99.html#16 Old Computers
https://www.garlic.com/~lynn/99.html#158 Uptime (was Re: Q: S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#159 Uptime (was Re: Q: S/390 on PowerPC?)
https://www.garlic.com/~lynn/aepay4.htm#comcert17 Merchant Comfort Certificates
https://www.garlic.com/~lynn/aepay4.htm#miscdns misc. other DNS
https://www.garlic.com/~lynn/2002.html#23 Buffer overflow
https://www.garlic.com/~lynn/2002.html#32 Buffer overflow
https://www.garlic.com/~lynn/2002.html#34 Buffer overflow
https://www.garlic.com/~lynn/2003.html#30 Round robin IS NOT load balancing (?)
https://www.garlic.com/~lynn/2003c.html#8 Network separation using host w/multiple network interfaces
https://www.garlic.com/~lynn/2003c.html#12 Network separation using host w/multiple network interfaces
https://www.garlic.com/~lynn/2003c.html#24 Network separation using host w/multiple network interfaces
https://www.garlic.com/~lynn/2003c.html#25 Network separation using host w/multiple network interfaces
https://www.garlic.com/~lynn/2003c.html#57 Easiest possible PASV experiment

--
Anne & Lynn Wheeler https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

The case against directories

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler
Date: 03/21/2003 08:02 AM
To: "Anders Rundgren" <anders.rundgren@xxxxxxxx>
cc: ietf-pkix@xxxxxxxx, epay@xxxxxxxx
Subject: Re: The case against directories
>2. Internal information (including employment) is generally not public

in fact, for earlier public PKI certificate-based information, the eventual result (because of difficult privacy issues, taking some liberty to treat privacy as both a personal and an institutional concept) was the lowest common denominator: a certificate that only contained an institutional number (ex: various financial certificates) and nothing else, which then was only used in the context of relying-party-only certificates.

the cross domain issues have not been technical (they effectively have all been a form of privacy, either individual or institutional) .... they were not technical some 15 years ago with cross-domain kerberos, they were not technical some 6-8 years ago with cross-domain PKI certificates, and they probably won't be technical this time around with directories. another simple example of the reduction of information has been the dig/nslookup responses from the biggest internet "public" directories, the DNS root-servers. Over the last five years, nearly all "privacy" information has disappeared from public responses (like name, address, telephone number, and email of administrative and technical contacts).

DNS directories have been an example of an online internet information source for a couple of decades; the issue isn't that the paradigm doesn't work, but that there are significant privacy issues (personal & institutional).

And as one of my common/frequent references .... certificates are just a stale, static copy/representation of a database/directory entry that exists somewhere ... and the personal & institutional privacy issues are common, regardless of the representational format.

--
Internet trivia, 20th anv: https://www.garlic.com/~lynn/rfcietff.htm

on 3/21/2003 1:21 am wrote:
I would like to add a few things to what Phillip Hallam-Baker of VeriSign wrote about directories as an obstacle to PKI deployment.

Many PKI experts are involved in huge public-sector-driven projects that are based on establishing directory interoperability between organizations. At first sight this looks like a great idea, but digging a bit further, you soon note that this is not a universal solution but rather a dead end.

Directory problem issues
1. Technical. Unifying schemas + firewall issues
2. Internal information (including employment) is generally not public
3. The level of openness depends on who is asking
4. Directories represent just one way to organize data

But there is no reason to despair, as there are work-arounds that properly address all these issues:

Using authentication systems like OASIS' SAML, organizations can (through their employees) authenticate to each other's intranets and through this get access to exactly the information they should have, in a format that makes sense. The latter may be a directory tree, a PDF file, a database listing, an HTML form, etc.

Unlike directory systems, SAML allows secure access to any kind of active or passive information source, including purchasing and work-flow systems.

All using the truly universal Internet browser interface.

For machine-to-machine (i.e. automated) access to external information, specialized Web Services seem to be a much more extensible route than directories, as the former introduce no restrictions on data.

Anders Rundgren
Independent consultant PKI and secure e-business


