Misc AADS & X9.59 Discussions





ID theft ring proves difficult to stop
Is there any future for smartcards?
Another entry in the internet security hall of shame
Is there any future for smartcards?
Another entry in the internet security hall of shame
Is there any future for smartcards?
Clearing sensitive in-memory data in perl
simple (&secure??) PW-based web login (was Re: Another entry in the internet security hall of shame....)
simple (&secure??) PW-based web login (was Re: Another entry in the internet security hall of shame....)
Clearing sensitive in-memory data in perl
Clearing sensitive in-memory data in perl
Payment Tokens
Payment Tokens
[Clips] Contactless payments and the security challenges
Online fraud 'ahead' of credit-card companies-experts
[Clips] Contactless payments and the security challenges
PKI too confusing to prevent phishing, part 28
continuity of identity
'Virtual Card' Offers Online Security Blanket
mixing authentication and identification?
Some thoughts on high-assurance certificates
Some thoughts on high-assurance certificates
Broken SSL domain name trust model
Broken SSL domain name trust model
Broken SSL domain name trust model
Broken SSL domain name trust model
X.509 / PKI, PGP, and IBE Secure Email Technologies
X.509 / PKI, PGP, and IBE Secure Email Technologies
X.509 / PKI, PGP, and IBE Secure Email Technologies
X.509 / PKI, PGP, and IBE Secure Email Technologies
X.509 / PKI, PGP, and IBE Secure Email Technologies
X.509 / PKI, PGP, and IBE Secure Email Technologies
NSA posts notice about faster, lighter crypto
X.509 / PKI, PGP, and IBE Secure Email Technologies
X.509 / PKI, PGP, and IBE Secure Email Technologies
[Clips] Banks Seek Better Online-Security Tools
browser vendors and CAs agreeing on high-assurance certificates
browser vendors and CAs agreeing on high-assurance certificates
browser vendors and CAs agreeing on high-assurance certificates
X.509 / PKI, PGP, and IBE Secure Email Technologies
X.509 / PKI, PGP, and IBE Secure Email Technologies
X.509 / PKI, PGP, and IBE Secure Email Technologies
Phishers now targetting SSL


ID theft ring proves difficult to stop

From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: ID theft ring proves difficult to stop
Date: Fri, 09 Sep 2005 00:21:48 -0600
To: cryptography@xxxxxxxx
some older refs:

ID theft ring escapes shutdown
http://news.bbc.co.uk/1/hi/technology/4186972.stm
Windows users fall victim to huge ID theft ring, 50 banks in danger;
Apple Mac users unaffected
http://macdailynews.com/index.php/weblog/comments/6680/
Massive ID theft ring still operating
http://software.silicon.com/security/0,39024655,39151703,00.htm
Grand Theft Identity
http://msnbc.msn.com/id/9108639/site/newsweek/
Servers keep churning in ID theft case
http://www.zdnet.com.au/news/security/soa/Servers_keep_churning_in_ID_theft_case/0,2000061744,39208686,00.htm
Servers keep churning in ID theft case
http://news.com.com/Servers+keep+churning+in+ID+theft+case/2100-7349_3-5842723.html
Servers keep churning in ID theft case
http://news.zdnet.com/2100-1009_22-5842723.html

most recent:

ID theft ring proves difficult to stop
http://www.sptimes.com/2005/09/09/Business/ID_theft_ring_proves_.shtml
Almost a month after Sunbelt Software of Clearwater discovered what it called an international identity theft ring, one Web site collecting the data has been shut down. But the operation is active.

... snip ...

Is there any future for smartcards?

From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: Is there any future for smartcards?
Date: Sat, 10 Sep 2005 16:11:26 -0600
To: pfarrell@xxxxxxxx
CC: cryptography@xxxxxxxx
Pat Farrell wrote:
Nearly ten years ago, when I was at Cybercash, we worked with Mondex and other smartcard vendors who also said "as soon as we have infrastructure"

Something tells me that soon is not gonna happen in what I would call soon. Smartcards (the smart part) were moderately interesting when there was no networking. We've been at ubiquitous networking for many years. While he was at Cybercash, Ellison was awarded US Patent 6,073,237 "Tamper resistant method and apparatus" which is precisely a network based, software only smartcard.


my characterization of smartcards from the 80s ... was that they were targeted at the portable computing market segment. however, the technology was only sufficient for the chip ... and there wasn't corresponding portable technology for input and output. as a result you saw things like the work in ISO on standardizing the interface to the chip ... so the chipcard could be carried around and interop with fixed input/output stations.

in the early 90s, you saw the advent of PDAs and cellphones with portable input/output technology that sort of took over that market. which would you prefer: a portable computing device with lots of applications and data where you had to go find a fixed input/output station to utilize the device .... or a similar portable computing device where the input/output was integrated?

later on in the 90s, anne & I were asked to spec, design, & cost the infrastructure for a mondex roll-out in the US ... aka it wasn't the mondex card per-se ... it was all the rest of the infrastructure and dataprocessing required to support a mondex infrastructure (from the mondex international superbrick on down to loading/unloading value on the chip). one of the financial issues with mondex was that most of the float & value was at mondex international with the superbrick; in fact later on you saw mondex international making inducements to various countries where they offered to split the float. this was about the time several of the EU central banks made the statement that the current genre of stored-value smartcards would be given a couple year grace period allowing them to establish an infrastructure ... but after that they would be required to pay interest on unspent value in the card (would have pretty much eliminated the float value at higher levels in the operational stream). that was coupled with the fact that it had a fundamental offline design point ... i.e. the value was held in the chip ... and could be moved between chips w/o having to go online ... becomes something of an anachronism if you have ubiquitous online access (as you've observed).

mondex also sponsored an ietf working group looking at possible application of mondex transactions in the internet environment. that really represented a difficult undertaking, mondex being a shared-secret based infrastructure. the working group somewhat morphed and eventually turned out ECML and some other stuff ... some recent RFCs ..

XML Voucher: Generic Voucher Language
https://www.garlic.com/~lynn/rfcidx13.htm#4153
Voucher Trading System Application Programming Interface (VTS-API)
https://www.garlic.com/~lynn/rfcidx13.htm#4154
which evolved out of the work on ECML (electronic commerce markup language), which in turn started out with the working group somewhat looking at adapting Mondex to Internet transactions.

Electronic Commerce Modeling Language (ECML) Version 2 Specification
https://www.garlic.com/~lynn/rfcidx13.htm#4112

some of that chipcard technology can be applied to an electronic something you have authentication technology ... where it is difficult to compromise and/or counterfeit a valid chip.

this raises something of a perception issue ... if you stick with the portable computing device model ... then the chipcard has a bunch of capability that is redundant and/or superfluous for somebody with a cellphone/pda.

If you go with purely the (hard to compromise and counterfeit) something you have authentication model in an online world ... then KISS (or Occam's Razor) would imply that most of the features associated with the earlier smartcard model are redundant and superfluous (and might actually pose unnecessary complexity and points of attack/compromise for something that is purely targeted as something you have authentication).

a couple recent postings somewhat related to threat models and authentication vulnerabilities.
https://www.garlic.com/~lynn/2005p.html#25 Hi-tech no panacea for ID theft woes
https://www.garlic.com/~lynn/2005p.html#26 Hi-tech no panacea for ID theft woes

more general discussion of other vulnerabilities and justification for finread terminal standard:
https://www.garlic.com/~lynn/subintegrity.html#finread

Another entry in the internet security hall of shame

From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: Another entry in the internet security hall of shame....
Date: Sun, 11 Sep 2005 16:49:16 -0600
To: James A. Donald <jamesd@xxxxxxxx>
CC: cryptography@xxxxxxxx
James A. Donald wrote:
For PKI to have all these wonderful benefits, everyone needs his own certificate. But the masses have not come to the party, in part because of the rather Orwellian requirements. Obviously I cannot get a certificate testifying that I am the one true James Donald, because I probably am not. So I have to get a certificate saying I am the one true James Donald SS xxx-xx-xxxx - the number of the beast.

the real issue in the early 90s ... was that the real authoritative agencies weren't certifying one true identity ... and issuing certificates representing such one true identity ... in part because there were some liability issues if somebody depended on the information ... and it turned out to be wrong.

there was talk in the early 90s of independent 3rd party trust organizations coming on the scene which would claim that they checked with the official bodies as to the validity of the information ... and then certify that they had done that checking ... and issue a public key certificate indicating that they had done such checking (they weren't actually certifying the validity of the information ... they were certifying that they had checked with somebody else regarding the validity of the information).

the issue of these independent 3rd party trust organizations was that they wanted to make money off of certifying that they had checked with the real organizations as to the validity of the information ... and the way they were going to make this money was by selling public key digital certificates indicating that they had done such checking. the issue that then came up was what sort of information would be of value to relying parties ... that should be checked on and included in a digital certificate as having been checked. It started to appear that the more personal information that was included ... the more value it would be to relying parties ... not just your name ... but name, ancestry, address, and loads of other characteristics (the type of stuff that relying parties might get if they did a real-time check with a credit agency).

one of the characteristics of the public key side of these digital certificates ... was that they could be freely distributed and published all over the world.

by the mid-90s, institutions were starting to realize that such public key digital certificates ... freely published and distributed all over the world with enormous amounts of personal information ... represented significant privacy and liability issues. you can also consider that if there were such enormous amounts of personal information ... the certificate was no longer being used for just authenticating the person ... but was, in fact, identifying the person (another way of viewing the significant privacy and liability issues). recent posts on confusing authentication and identification
https://www.garlic.com/~lynn/aadsm20.htm#0 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm20.htm#11 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm20.htm#42 Another entry in the internet security hall of shame

as a result, you started seeing institutions issuing relying-party-only certificates in this time frame
https://www.garlic.com/~lynn/subpubkey.html#rpo

which contained just a public key and some sort of database or account lookup value ... where all the real information of interest to the institution was kept.

the public key technology ... in the form of digital signature verification, would be used to authenticate the entity ... and the account lookup would establish association with all the necessary real-time information of interest to the institution.

this had the beneficial side-effect of reverting public key operations to purely authentication operations ... as opposed to straying into the horrible privacy and liability issues related to constantly identifying the entity.

however, it became trivial to prove that relying-party-only certificates are redundant and superfluous ... with all the real-time information of interest for the institution on file (including the public key) ... and the entity digitally signing some sort of transaction which already included the database/account lookup value ... there was no useful additional information represented by the relying-party-only certificate ... that the relying party didn't already have (by definition, the public key was registered with the relying party as a prelude to issuing any digital certificate ... but if the public key had to already be registered, then the issuing of the digital certificate became redundant and superfluous).
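
a minimal sketch of that certificate-less flow, assuming go's crypto/ed25519 purely for brevity (x9.59 doesn't mandate this algorithm, and the account record layout here is hypothetical) ... the relying party verifies against the on-file public key, and no certificate appears anywhere:

    package main

    import (
        "crypto/ed25519"
        "fmt"
    )

    // the institution's account record already holds everything a
    // relying-party-only certificate would carry: the lookup value
    // and the public key registered as a prelude to doing business.
    type account struct {
        pubkey ed25519.PublicKey
        // ... credit limit, address, and other real-time info of interest
    }

    var onFile = map[string]*account{} // indexed by the account lookup value

    // verify authenticates a transaction that already names its own
    // account: look up the on-file key, check the digital signature.
    func verify(acct string, txn, sig []byte) bool {
        rec, ok := onFile[acct]
        return ok && ed25519.Verify(rec.pubkey, txn, sig)
    }

    func main() {
        pub, priv, _ := ed25519.GenerateKey(nil) // nil reader = crypto/rand
        onFile["42"] = &account{pubkey: pub}

        txn := []byte("acct=42 amount=10.00") // transaction includes the lookup value
        fmt.Println(verify("42", txn, ed25519.Sign(priv, txn))) // true
    }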

this was also in the era where the EU data privacy directive was pushing that names be removed from various payment card instruments doing online electronic transactions. If the payment card is purely a something you have piece of authentication ... then it should be possible to perform a transaction w/o also requiring identification.

as to the 2nd part ... passwords are a shared-secret based, entrenched, institution-centric technology. it requires a lot less technology infrastructure to support a shared-secret password based operation. this was ok back in march 1970 ... when i got my first permanent home terminal with userid/password login to the office computer ... and i only had a single pin/password. however, as the decades passed ... the number of shared-secret password/pin based environments proliferated to the point where i now have to deal with scores of different values ... all of which i'm supposed to theoretically have memorized, each one of them being unique from the others ... and potentially having to be changed monthly.

Is there any future for smartcards?

From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: Is there any future for smartcards?
Date: Mon, 12 Sep 2005 08:07:33 -0600
To: Jaap-Henk Hoepman <jhh@xxxxxxxx>
CC: cryptography@xxxxxxxx
Jaap-Henk Hoepman wrote:
I believe smartcards (and trusted computing platforms too, btw) aim to solve the following problem:

"How to enforce your own security policy in a hostile environment, not under your own physical control?"

Examples:
- Smartcard: electronic purse: you cannot increase the amount on your e-purse (unless reloading at the bank).
- Trusted computing: DRM: my content cannot be illegally copied on your machine.

As soon as the environment is under your own physical control, software only solutions suffice.


a couple years ago ... i was on an assurance panel in the tcpa/tpm track at idf. during my 5 minutes ...
https://www.garlic.com/~lynn/aadsm5.htm#asrn1

i happened to comment that over the previous couple years tpm had gotten simpler and started to look more and more like aads
https://www.garlic.com/~lynn/x959.html#aads

one of the tpm people was in the front row ... and replied that i didn't have a couple hundred people on a committee helping me design a chip.

I even claimed that the original aads chip design could meet the then tpm requirements with no changes.

some side drift into finread
https://www.garlic.com/~lynn/subintegrity.html#finread

a minor anecdote
https://www.garlic.com/~lynn/2001g.html#57 Q: Internet banking

one of the things considered in the x9.59 financial standard
https://www.garlic.com/~lynn/x959.html#x959

was the provision to have two digital signatures on a transaction ... one authenticating the originator and one from the signing environment.

two issues with respect to the finread standard have been 1) secure pin-pad and secure entry of the pin and 2) is what you are signing what you see. finread provides for a hardened external device that attempts to address both of these issues. the issue for a financial institution authenticating and authorizing the transaction for risk management ... is how does the financial institution (or other relying party) really know that a finread terminal was used for a particular transaction (as opposed to any other kind of terminal).

the finread standard specifies the operational characteristics/objectives of the terminal/reader ... but doesn't actually provide assurance to the financial institution (or other relying party) that a certified finread terminal was used for the actual signing environment.

this is sort of out of risk adjusted capital from basel
http://www.bis.org/publ/bcbsca.htm

.... all the possible risks are evaluated for an institution ... and capital assets are put aside proportional to the evaluated risks. approved transactions that have been signed by both the account owner and a certified finread terminal should have lower possible risk than transactions simply signed by the account holder (where there are more unknowns and possible vulnerabilities).
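
a minimal sketch of that dual-signature idea (the risk tiers, field layout, and use of ed25519 are all hypothetical) ... the same transaction carries one signature from the account owner and one from the certified signing environment, and the institution's risk treatment depends on which signatures verify:

    package main

    import (
        "crypto/ed25519"
        "fmt"
    )

    // riskTier is a stand-in for the basel-style treatment: transactions
    // co-signed by a certified finread-class terminal warrant a lower
    // capital set-aside than those signed by the account owner alone.
    func riskTier(ownerOK, terminalOK bool) string {
        switch {
        case ownerOK && terminalOK:
            return "lower risk: known signing environment"
        case ownerOK:
            return "higher risk: unknown signing environment"
        default:
            return "reject: owner signature failed"
        }
    }

    func main() {
        ownerPub, ownerPriv, _ := ed25519.GenerateKey(nil)
        termPub, termPriv, _ := ed25519.GenerateKey(nil) // certified terminal's key

        txn := []byte("acct=42 amount=10.00")
        ownerSig := ed25519.Sign(ownerPriv, txn) // authenticates the originator
        termSig := ed25519.Sign(termPriv, txn)   // attests to the signing environment

        fmt.Println(riskTier(
            ed25519.Verify(ownerPub, txn, ownerSig),
            ed25519.Verify(termPub, txn, termSig)))
    }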

in financial transactions there typically are (at least) two interested parties ... the individual as the account owner ... and the financial institution as the relying party & potentially having significant liability with respect to fraudulent transactions.

software may suffice when things are under your own physical control AND nobody else has exposed risk related to operations performed in that environment. however, when there are other parties at risk, they may ask for a higher level of assurance than simply a statement from the individual that there have been no compromises. some collected postings on assurance
https://www.garlic.com/~lynn/subintegrity.html#assurance

and fraud
https://www.garlic.com/~lynn/subintegrity.html#fraud

Another entry in the internet security hall of shame

From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: Another entry in the internet security hall of shame....
Date: Tue, 13 Sep 2005 10:55:52 -0600
To: Paul Hoffman <paul.hoffman@xxxxxxxx>
CC: pgut001@xxxxxxxx, neuhaus@xxxxxxxx,
cryptography@xxxxxxxx
Paul Hoffman wrote:
In many deployments of "SSL first, then authenticate the user with a password", the "site" consists of two or more machines. Many or most high-traffic secure sites use SSL front-end systems to terminate the SSL connection, then pass the raw HTTP back to one or more web servers inside the network.

The reason I bring this up is that the SSL server generally does not have access to the users' credentials. It could, of course, but in today's environments, it doesn't. Changing to TLS-PSK involves not only changing all the client SSL software and server SSL software, but also what the SSL server's role in the transaction is.
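
a minimal sketch of the front-end arrangement the quote describes, using go's standard reverse proxy (the backend address and certificate paths are placeholders) ... TLS terminates at the front end and only raw HTTP reaches the inside web server:

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // internal web server; it never sees the SSL session,
        // and so never sees SSL-layer client credentials.
        backend, err := url.Parse("http://10.0.0.2:8080")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(backend)

        // terminate SSL/TLS here and pass raw HTTP to the backend.
        log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", proxy))
    }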


this is relatively straight-forward on the server side ... most webservers have stub-code for client authentication. frequently you see places writing roll-your-own code for accessing a password flat file (and comparing passwords for authentication).

another approach is to have the webserver client authentication stub-code call a kerberos or radius interface
https://www.garlic.com/~lynn/subpubkey.html#kerberos
https://www.garlic.com/~lynn/subpubkey.html#radius

where the client's credentials are managed and administered ... including authentication, authorization, and also potentially accounting information.

the original pk-init draft for kerberos had public keys registered in lieu of passwords ... and kerberos doing digital signature verification with the on-file public key. similar implementations have existed for radius.
https://www.garlic.com/~lynn/subpubkey.html#certless

basically use the extensive real-time administrative and operational support for integrated authentication, authorization and even accounting across the whole infrastructure. ISPs and/or corporations that currently use something like radius for their boundary authentication in places like dial-in routers ... could use the same exact administrative and operational facilities for providing client authentication, authorization and accounting for any webhosted services they might provide (aka ... the integrated administrative and operational support could include very dynamic and fine-grain authorization information ... like which set of servers during what portions of the day).

the other advantage of the integrated real-time business, administrative, and operational characteristics of a radius type approach is that it can select the authentication technology used on a client-by-client basis ... it doesn't have to be a total system-wide conversion. the radius/kerberos solution could be rolled out on all the servers ... and then technology selectively rolled out on a client-by-client basis ... continuing to use the same exact integrated business, administrative, and management real-time characteristics with large co-existence of different client technologies (for instance ... when clients set up their dial-in PPP connection to their server ... they may be offered a number of different authentication options ... a server-side radius operation can concurrently support all possible authentication technologies ... with the appropriately specified technology on a client by client basis).
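
a minimal sketch of such a technology-agnostic, per-client layer (the Authenticator interface and client table are hypothetical; ed25519 and constant-time comparison come from go's standard library) ... the server-side stub consults one registry, and each client's entry names its own technology:

    package main

    import (
        "crypto/ed25519"
        "crypto/subtle"
        "fmt"
    )

    // Authenticator abstracts the per-client technology, the way a
    // radius deployment selects an authentication method client by client.
    type Authenticator interface {
        Authenticate(challenge, response []byte) bool
    }

    type passwordAuth struct{ secret []byte }

    func (p passwordAuth) Authenticate(_, response []byte) bool {
        return subtle.ConstantTimeCompare(p.secret, response) == 1
    }

    type pubkeyAuth struct{ key ed25519.PublicKey }

    func (p pubkeyAuth) Authenticate(challenge, response []byte) bool {
        return ed25519.Verify(p.key, challenge, response) // response is a digital signature
    }

    // upgrading one client from passwords to a registered public key
    // touches only that client's entry; the servers don't change.
    var clients = map[string]Authenticator{}

    func main() {
        pub, priv, _ := ed25519.GenerateKey(nil)
        clients["alice"] = passwordAuth{secret: []byte("hunter2")}
        clients["bob"] = pubkeyAuth{key: pub}

        nonce := []byte("server challenge 1234")
        fmt.Println(clients["alice"].Authenticate(nil, []byte("hunter2")))         // true
        fmt.Println(clients["bob"].Authenticate(nonce, ed25519.Sign(priv, nonce))) // true
    }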

kerberos and radius are extensively used for doing real-time integrated administration and management of authentication, authorization and even accounting information. registering public keys in lieu of passwords is a straight-forward technology upgrade ... preserving the existing business, management, and administrative real-time integrated characteristics.

it wouldn't introduce new business, management, and/or administrative operations ... as frequently occurs with pki-based operations.

with the use of the appropriate business, management, and administrative real-time infrastructure ... straight-forward new technology roll-outs addressing various authentication, authorization, and/or accounting requirements doesn't have to be a synchronized, serialized, system-wide change-out ... all the servers could be upgraded to a real-time business, management, and administrative infrastructure that is relatively technology agnostic as to the specific technology used by any specific client.

the specific technology used by any client then becomes an individual client decision, coupled with possible infrastructure overall risk management requirements for that specific client when doing specific operations.

one could imagine a wide variety of clients ... all accessing the same identical infrastructure ... possibly concurrently using password, challenge/response, and digital signature with and w/o hardware token protected private keys.

specific authorization features might only be made available when the digital signature is believed to have originated from a private key that has been certified to exist in a hardware token with certified integrity characteristics (keys generated on the token, private key never leaves the token, token evaluated at EAL5, etc). Certain fine-grain entitlement permissions might conceivably even require options like the authentication operation being digitally co-signed by a known terminal ... somewhat the finread side-subject ... which also has known, certified integrity characteristics ... possibly even including fixed physical location. recent finread related posting:
https://www.garlic.com/~lynn/aadsm21.htm#1 Is there any future for smartcards

note such an integrated real-time operation can leverage the same business, administrative and management infrastructure for not only deploying fine-grain session-based operations ... but also fine-grain transaction operations (supporting real-time distribution of client credential authentication, authorization, and accounting information across the operational environment).

in the past, i've periodically commented that when you have been freed from having to worry about all the extraneous and distracting vagaries of PKI-related issues ... you can start concentrating on the fundamental requirements that a large, complex institution might have for authentication, authorization, and accounting .... including being able to support multiple different concurrent technologies meeting a broad range of risk management and integrity requirements.

Given a sufficiently robust real-time administrative and management infrastructure ... specific technology roll-outs should be much less traumatic since they should be doable piecemeal on a server by server and client by client basis ... w/o resorting to a wholesale, synchronized infrastructure conversion (while still retaining integrated, real-time overall administration of the operation during any such conversion).

Is there any future for smartcards?

From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: Is there any future for smartcards?
Date: Tue, 13 Sep 2005 12:07:59 -0600
To: Dave Howe <DaveHowe@xxxxxxxx>
CC: <cryptography@xxxxxxxx>
Dave Howe wrote:
TBH I don't think the smartcard approach will work - really, everything needed to verify what you are signing or encrypting needs to be within your secure boundary, so the only sensible approach is for a mobile-sized cryptographic device to be autonomous, but accept *dumb* storage cards for reading and writing; that dumb card can then be used to transfer a unsigned document to the cryptographic device, which when inserted uses a relay or switch to assume control of the keyboard and screen; person wishing a digital signature stores the document to be signed onto the card; signer inserts into his device, uses the device's display to assure himself this is really what he wants to sign and then keys his access code. The device then produces a digital signature certificate (possibly deliberately adding some harmless salt value to the end before signing, which is noted in the detached certificate's details) and copies that to the dumb card, retaining a copy for the user's own records. by using a switch controlled by the cryptographic module, the display can be then used by an alternate system when not in use - for example, a mobile phone - while providing an airgap between the secure module and the insecure (and yes, this would mean if you received a contract via email, you would have to write it to a card, remove that card from a slot, insert it into a different slot, then check it. I can't see how the system can be expected to work otherwise....)

part of the issue may involve semantic confusion between digital signature and human signature (possibly because they both contain the word signature)

from 3-factor authentication paradigm
https://www.garlic.com/~lynn/subintegrity.html#3factor
... fundamentally a digital signature verification by public key is basically a form of something you have authentication (aka the private key contained uniquely in a hardware token).

so, from a parameterised risk management and threat model standpoint ... the issue is how many ways ... and how probable is the compromise of the physical object ... such that the digital signature doesn't originate from a specific hardware token in the possession of a specific person.

the other stuff ... say related to issues attempting to be addressed by some of the finread characteristics
https://www.garlic.com/~lynn/subintegrity.html#finread

where a digital signature may be used in conjunction with other efforts and technology to imply a human signature ... which in turn implies that the person has read, understood, agreed to, approved, and/or authorized what is being signed. this goes far beyond the straight-forward something you have authentication that is implied by the verification of a digital signature with a public key.

it also potentially opens up the dual-use attack ... where the same technology is used for the original straight-forward authentication purpose ... and as part of some sort of infrastructure implying that something has been read, understood, agreed to, approved, and/or authorized.

the pki digital certificate work somewhat originally strayed into this confusion of the terms digital signature and human signature (possibly because they both contain the word signature) ... with the original definition of the non-repudiation bit in a digital certificate. The scenario went that if the relying party could find and produce a digital certificate w/o the non-repudiation bit set, then the relying party could claim that a digital signature applied to some bits was purely for authentication purposes.

However, if the relying party could find and produce a digital certificate (for the public key) with the non-repudiation bit set, then the relying party claimed that was sufficient proof that the person had read, understood, agreed to, approved, and/or authorized the bits that had the digital signature (in part, because there is nothing in normal PKI standards that provides proof as to what, if any, digital certificate happened to be attached to any particular digital signature).

Then came the realization that it was quite absurd that, because a certification authority had included the non-repudiation bit in some digital certificate at some point way in the past ... the setting of that bit had absolute and total control over whether a person had read, understood, agreed to, approved, and/or authorized some pattern of bits for every digital signature that might be created in the future. The absurdity of such an assertion has since led to the non-repudiation bit being quite deprecated.

in any case, the morphing of any digital signature for something you have authentication into anything that could imply human signature goes well beyond the secure boundary issues.

misc. collected posts on human & "e" signatures
https://www.garlic.com/~lynn/subpubkey.html#signature

some past posts on dual-use attack
https://www.garlic.com/~lynn/aadsm17.htm#57 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm17.htm#59 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#1 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#2 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#3 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#56 two-factor authentication problems
https://www.garlic.com/~lynn/aadsm19.htm#41 massive data theft at MasterCard processor
https://www.garlic.com/~lynn/aadsm19.htm#43 massive data theft at MasterCard processor
https://www.garlic.com/~lynn/aadsm20.htm#0 the limits of crypto and authentication
https://www.garlic.com/~lynn/2004i.html#17 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004i.html#21 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2005.html#14 Using smart cards for signing and authorization in applets
https://www.garlic.com/~lynn/2005b.html#56 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005e.html#31 Public/Private key pair protection on Windows
https://www.garlic.com/~lynn/2005g.html#46 Maximum RAM and ROM for smartcards
https://www.garlic.com/~lynn/2005m.html#1 Creating certs for others (without their private keys)
https://www.garlic.com/~lynn/2005m.html#11 Question about authentication protocols
https://www.garlic.com/~lynn/2005o.html#3 The Chinese MD5 attack

Clearing sensitive in-memory data in perl

From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: Clearing sensitive in-memory data in perl
Date: Tue, 13 Sep 2005 14:18:00 -0600
To: demonfighter@xxxxxxxx
CC: cryptography@xxxxxxxx
Steve Furlong wrote:
Other important questions for programmers are, how good are you? How good does the process allow you to be?

My answers are, I'm quite a good programmer. (Pardon the ego.) I'm careful and methodical and very seldom have buffer overruns or unfreed memory even in my first drafts. For me, my expected code quality in C and C++ is balanced against the black box behaviour of Java's runtime engine. (To be clear: I don't suspect Sun of putting back doors in their engine.) And I'm experienced enough and old enough that I can hold my own in pissing contests with project managers who want to cut corners in order to ship a day earlier.

Implementation quality could be considered in the threat model. I've generally taken the programmers into account when designing a system, but I hadn't explicitly thought of well-meaning-but-incompetent programmers as part of the threat. Guess I should.


note that compared to some other languages .... a lot of C-language buffer overflows can be attributed to C requiring the programmer to keep track of and manage various lengths (where in some number of other languages, buffer length characteristics are built into basic object characteristics and operations). i know of at least one production tcp/ip stack implemented in pascal ... which had no known buffer related problems, compared to the possibly hundreds of thousands that have appeared in c-language based implementations.

large collection of past posts on buffer overflow related vulnerabilities
https://www.garlic.com/~lynn/subintegrity.html#overflow

part of the issue is that there are hundreds of thousands of applications being written ... with possibly only a couple hundred mistake-free, careful programmers available world-wide. a possible solution is to create a time-machine that allows the limited number of mistake-free, careful programmers to have several-thousand-hour work days (along with the medical support to allow them to never sleep).

there is a separate class of vulnerabilities related to dangling buffer pointers and synchronization problems ... which is common to languages that place the burden of allocation/deallocation on the programmer (however, that is a distinct class of vulnerabilities from c-language placing the burden of length management on the programmer ... and the resulting mistakes).

some languages like apl ... large collection of past apl posts
https://www.garlic.com/~lynn/subtopic.html#hone

have abstracted both object length characteristics as well as storage allocation/deallocation operations.

simple (&secure??) PW-based web login (was Re: Another entry in the internet security hall of shame....)

From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: simple (&secure??) PW-based web login (was Re: Another entry in the internet security hall of shame....)
Date: Wed, 14 Sep 2005 11:00:29 -0600
To: herzbea@xxxxxxxx
CC: Paul Hoffman <paul.hoffman@xxxxxxxx>,  pgut001@xxxxxxxx
neuhaus@xxxxxxxx,  cryptography@xxxxxxxx,
   Research on current Internet anti-fraud techniques <anti-fraud@xxxxxxxx>
Amir Herzberg wrote:
Excellent point. From which follows the question: can we improve the security of password-based web login, with less drastic changes - at least in the servers? Or is TLS-PSK the best/only way for us to improve pw-based web login?

I think we can. For simplicity and concreteness, let's say we have the following architecture: regular SSL-supporting client and server, and behind the SSL server we have a web server with the improved pw-login mechanism. We want to support some 'improved pw-login clients', let's say these will be regular browsers with an extension such as (a new version of) TrustBar.

Client and server begin with SSL 'as is'. Now, the web server sends a login page over this SSL-protected connection. Browsers without the extension will handle it as a 'classical' login form. But if the extension exists, it will detect that the server supports improved pw-based login, signalled e.g. with a META tag. The tag may also include a number, denoted _Iterations_, which is the requested number of hash-function iterations.
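
a minimal sketch of the hash-iteration step the quoted proposal describes (the salt choice and function shape are assumptions; the extension's wire format isn't specified here) ... the client sends only the iterated digest, never the password itself:

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    // iterate applies the server-requested number of hash iterations
    // to the password; only the final digest goes over the wire.
    func iterate(password, salt []byte, iterations int) []byte {
        h := sha256.New()
        h.Write(salt)
        h.Write(password)
        d := h.Sum(nil)
        for i := 1; i < iterations; i++ {
            sum := sha256.Sum256(d)
            d = sum[:]
        }
        return d
    }

    func main() {
        // _Iterations_ would arrive via the META tag per the quote
        fmt.Printf("%x\n", iterate([]byte("hunter2"), []byte("example.com"), 1000))
    }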


by the time all the authentication options are being created ... you might as well move to radius or kerberos as core authentication technology ... if nothing else to manage and administer the myriad of co-existing authentication possibilities.
https://www.garlic.com/~lynn/subpubkey.html#radius
https://www.garlic.com/~lynn/subpubkey.html#kerberos

once you have an effective administrative infrastructure for managing different and co-existing authentication technologies ... there is an opportunity to examine the level of technology required for some of the password-based technologies vis-a-vis a straight-forward certificate-less public key (where the public key is registered in lieu of a password) and digital signature verification
https://www.garlic.com/~lynn/subpubkey.html#certless

so the requirements are somewhat ... resistance to eavesdropping on the interchange and related replay-based attacks; the possibility of using the same password with multiple servers ... w/o cross security domain compromises; and availability of the software to perform the operations.

so if the assumption is that SSL is part of the basic infrastructure ... that should satisfy the question of whether public key software is available at both the client and the server (even tho it may tend to be entangled with digital certificate processing).

from the client perspective, i would contend that having the client keep track of server-specific details as to things like login iterations or other details ... creates client operational management complexity ... and at least requires some form of container. such a container technology could possibly be as easily applied to keeping track of a private key.

the meta tag scenario applies equally well to returning a digital signature as a one-time-password ... possibly as well as any of the other schemes (especially if you have something like radius on the server side for the administrative management of the variety of co-existing authentication technologies). Note that some flavor of this is used for ISP PPP dial-up authentication (and radius infrastructures) ... where the client may have the option of multiple different authentication technologies.

In many ways the administrative management of multiple concurrent, co-existing authentication technologies ... is at least as complex an issue as the implementation details of any specific technology (especially if you are considering a large complex infrastructure that is facing an operational environment spanning a large number of years and dealing with a wide variety of different client and server requirements). One of the advantages of dumping the PKI concept (which is really targeted at first time interaction with strangers) ... and going with a single management and administrative infrastructure (using a radius-like model) is that a wide variety of co-existing authentication technologies can be managed in a single administrative environment (w/o requiring the cost and effort duplication of having two or more deployed authentication operations ... aka the real-time production environment and any redundant and superfluous duplicate PKI environment).

simple (&secure??) PW-based web login (was Re: Another entry in the internet security hall of shame....)

From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: simple (&secure??) PW-based web login (was Re: Another entry in the internet security hall of shame....)
Date: Wed, 14 Sep 2005 14:14:13 -0600
To: herzbea@xxxxxxxx
CC: Paul Hoffman <paul.hoffman@xxxxxxxx>,  pgut001@xxxxxxxx,
neuhaus@xxxxxxxx,  cryptography@xxxxxxxx,
Research on current Internet anti-fraud techniques <anti-fraud@xxxxxxxx>
ref:
https://www.garlic.com/~lynn/aadsm21.htm#2 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/aadsm21.htm#4 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/aadsm21.htm#7 simple (& secure??) PW-based web login (was Re: Another entry in the internet security hall of shame....)

there is somewhat an ancillary philosophical issue. most of the current password-based systems have been oriented towards a static environment ... contributing to a mindset that addresses authentication technology as a static issue.

The PKI paradigm goes even further in contributing to a somewhat rigid, stale, static view of authentication technology ... spending an enormous amount of effort focusing on the rigid, stale, static nature of the issued digital certificates.

this can be contrasted with the real-time authentication environment provided by RADIUS-like technologies ... not only providing for integrated overall management and administration ... but also real-time integrated operation of authentication, authorization, and accounting.

minor confession ... in past life i actually assisted with radius configuration on real, live livingston boxes for a small startup ... when radius was still a purely livingston technology.
https://www.garlic.com/~lynn/subpubkey.html#radius

radius-like technologies provide an extremely agile, real-time environment integrating the management, administration, and operation of multiple, co-existing authentication technologies ... along with integrated real-time authorization and accounting.

given that you are freed from the static oriented authentication technologies (like PKI) and related stale, static mindset ... one could even imagine radius-like implementations extended to parameterised risk management; where the infrastructures apply integrity classifications to different authentication technologies and processes ... and authorization infrastructures specifying minimal acceptable authentication integrity levels.
https://www.garlic.com/~lynn/subpubkey.html#certless

some of this is born out of the credit-card industry where real-time authorization can be associated with unique credit limit values on an account-by-account basis ... as well as an account-specific "open-to-buy" ... aka the difference between the account's credit limit and its outstanding charges (which allows dynamic co-existence of a wide range of different credit limits and dynamic risk management authorization operations).
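
(purely illustrative numbers: a $5,000 credit limit with $1,200 in outstanding charges leaves $3,800 open-to-buy ... the authorization decision is made against that real-time value rather than any static credential.)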

for instance, a parameterised risk management operation in an agile, real-time, integrated environment might allow an integrity level with a simple something you have digital signature for some permissions ... but other permissions may require that the public key have been registered with a certified hardware token of minimal specified integrity characteristics and, furthermore, that the authentication operation be co-signed by a finread-like technology certified station.
https://www.garlic.com/~lynn/subintegrity.html#finread
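
a minimal sketch of such parameterised integrity levels (the level names, permissions, and floors are all hypothetical):

    package main

    import "fmt"

    // integrity levels an infrastructure might assign to
    // authentication technologies, lowest to highest.
    const (
        levelPassword    = iota + 1 // shared-secret
        levelSoftKey                // digital signature, software-held private key
        levelHWToken                // key certified to reside in a hardware token
        levelTokenCosign            // hardware token plus certified-terminal co-signature
    )

    // each permission specifies its minimal acceptable
    // authentication integrity level.
    var floor = map[string]int{
        "read-balance":  levelPassword,
        "small-payment": levelSoftKey,
        "large-payment": levelTokenCosign,
    }

    // authorize applies the risk rule: the integrity level of this
    // particular authentication event must meet the permission's floor.
    func authorize(permission string, eventLevel int) bool {
        return eventLevel >= floor[permission]
    }

    func main() {
        fmt.Println(authorize("small-payment", levelHWToken)) // true
        fmt.Println(authorize("large-payment", levelSoftKey)) // false
    }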

there is a very loose analogy between using the structuring of role-based access control for fine grain permissions ... and the structuring of authentication integrity levels .... for dynamic application for permission purposes.

Part of the problem that stale, static PKI oriented infrastructures have foisted is the focus on the characteristics of the stale, static digital certificate .... as opposed to being free to concentrate on the real-time, dynamic operational characteristics of each individual authentication event (and not having to be bound by stale, static infrastructure characteristics).

of course, anytime i mention agile, dynamic operation ... i frequently digress to also throwing in boyd and OODA-loop references:
https://www.garlic.com/~lynn/subboyd.html#boyd2
https://www.garlic.com/~lynn/subboyd.html#boyd

and for even further topic drift ... numerous references to having created dynamic, adaptive resource management as an undergraduate in the '60s
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock

Clearing sensitive in-memory data in perl

From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: Clearing sensitive in-memory data in perl
Date: Fri, 16 Sep 2005 16:05:26 -0600
To: Victor Duchovni <Victor.Duchovni@xxxxxxxx>
CC: cryptography@xxxxxxxx
Victor Duchovni wrote:
While some of the fault is perhaps in the core language, my contention is that the real problem is the anemic standard C-library. When working on C projects that have (and uniformly use) their own mature string handling libraries (I was a contributor to Tcl in the 90's and am now working in Postfix) I found that buffer overflows (and with Tcl for reasons I won't go into here also memory leaks) were a non-issue in those systems.

With either Tcl_DString or VSTRING (Postfix), one simply loses the habit of needing to keep track of buffer lengths. When combined with a compatible I/O interface (say VSTREAM), argument vector library (ARGV), hash table library, ... one no longer in practice manipulates bare null-terminated string buffers except when passing (usually read-only) content to system calls via the C-library.

I continue to write code in C, free of buffer overflows and memory leaks, not because I am more manly than the next programmer, but because I am attracted to working on well-designed systems, whose designers took the time to develop a richer set of idioms in which to construct their work.

My view is that C is fine, but it needs a real library and programmers who learn C need to learn to use the real library, with the bare-metal C-library used only by library developers to bootstrap new safe primitives.


I've frequently observed in the past that some assembler language environments have also had very pervasive use of explicit lengths associated with most operations, system functions, and library routines resulting in very low frequency of buffer overflows ... some amount of collected past posts on the subject ... including references to the 30 years later article (when it first came out)
https://www.garlic.com/~lynn/subintegrity.html#overflow

minor connection .... the 30 years later article is about multics, which was done on the 5th floor of 545 tech sq ... and some of the assembler stuff that i'm familiar with was done on the 4th floor (slight disclaimer: i was on the 4th floor for some amount of the period)
https://www.garlic.com/~lynn/subtopic.html#545tech

some of the early stuff done on the 4th floor ... also was adapted to some number of commercial time-sharing services which had some fairly stringent integrity requirements
https://www.garlic.com/~lynn/submain.html#timeshare

Clearing sensitive in-memory data in perl

From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: Clearing sensitive in-memory data in perl
Date: Sat, 17 Sep 2005 14:24:09 -0600
To: Ben Laurie <ben@xxxxxxxx>
CC: Adam Shostack <adam@xxxxxxxx>, cryptography@xxxxxxxx
Ben Laurie wrote:
gets is so not the problem. Using strings that _can_ overflow is the problem. That means wrapping the entire standard library.

And, of course, the issue is that every other library in the universe uses C-style strings (etc.), so unless we can all agree on a better paradigm, we're screwed.


note that various infrastructures ... have made the length field an integral characteristic of nearly every piece of storage, as well as making automatic use of the length values an integral piece of every operation; extending from the lowest level interrupt handlers and device drivers ... up thru the most complex application code. it isn't just a characteristic of the high level application libraries ... but of nearly every aspect of the operational environment.

these length-based paradigms aren't limited only to programming languages (other than C) ... but can also be found in infrastructures that are pure assembler code.

the other way of viewing the common c-based string issue is that the length isn't explicit ... the default c-based length paradigm is implicit based on the pattern of data in a buffer ... which tends to create length related vulnerabilities from simple data manipulation operations (you might generalize this as a dual-use vulnerability). this is further compounded when there is a field/buffer that is empty and doesn't yet have any data to establish an implicit data-pattern defined length.

explicitly separating the length attribute from any data pattern characteristic would make the infrastructure less vulnerable to length mistakes resulting from simple data manipulation operations. it would also enable a single, consistent length paradigm for both field/buffers that contain data patterns and field/buffers that are empty and don't contain data patterns.
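
for illustration, go's slices behave like the length-based paradigm described here ... every buffer, empty ones included, carries an explicit length attribute, and copies are bounded by it rather than by any data pattern:

    package main

    import "fmt"

    func main() {
        dst := make([]byte, 8) // explicit length, independent of contents
        src := []byte("a string longer than the destination")

        n := copy(dst, src)             // bounded by len(dst): truncates, never overruns
        fmt.Println(n, string(dst[:n])) // 8 "a string"

        var empty []byte        // an empty buffer still has a well-defined length ...
        fmt.Println(len(empty)) // 0 ... no data pattern needed to establish it
    }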

Payment Tokens

From: Lynn Wheeler <lynn@xxxxxxxx>
Date: September 18, 2005 02:25 PM
Subject: Payment Tokens
MailingList: Financial Cryptography
ref:
https://www.financialcryptography.com/mt/archives/000552.html Payment Tokens

Note that low cost hardware tokens don't necessarily have to be low value.

hardware token costs are basically packaging, processing, and the chip.

A chip may have big upfront costs associated with design ... which have to be amortized across the production run. given large enough production runs, this can approach cents (what the epc effort is looking for with rfid chips).

the actual per chip costs are primarily related to the number of good chips that you get out of a wafer ... and are pretty much unrelated to the kind of chip. complexity of the chip affects yields and therefore the number of good chips per wafer. complexity can also affect the size of the chip ... which also affects the number of chips per wafer. the move from 8in to 12in wafers increased the number of chips per wafer. simple, small chips not only cut the upfront design costs ... but can also significantly increase the chips per wafer .... with the manufacturing costs being essentially a per wafer issue.
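
(an illustrative calculation, numbers purely hypothetical: a $10m design amortized over a 500m chip production run adds 2 cents per chip; if a processed wafer costs on the order of $3,000 and a small, simple die yields 10,000 good chips per wafer, manufacturing adds roughly 30 cents per chip ... halving the die area roughly doubles the good chips per wafer and halves that per-chip cost.)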

i once joked that i was going to take a $500 milspec part, decrease the cost by better than two orders of magnitude and make it more secure .... aka complexity itself can represent a security issue ... less complex can be more secure.

if the hardware token purely represents something you have authentication ... from 3-factor authentication paradigm
https://www.garlic.com/~lynn/subintegrity.html#3factor
... it is actually possible to do aggressive cost optimization with aggressive simplicity and aggressive reduction in processing ... and still have an extremely high integrity something you have authentication token.

aka low cost ... doesn't necessarily mean low integrity ... it may simply mean low function. however, if an aggressive attitude is taken regarding what is actually required to prove that you have a uniquely specific token for something you have authentication .... and everything else is extraneous, contributes to increased cost and complexity ... and may actually lower integrity and security ... then it is possible to have your cake and eat it too ... very low cost and very high integrity.

various aads chip strawman posts ... some involving aggressive KISS and simplicity
https://www.garlic.com/~lynn/x959.html#aads

slightly related ... posts covering some threat model issues for a magstripe token used as something you have authentication
https://www.garlic.com/~lynn/aadsm20.htm#1 Keeping an eye on ATM fraud
https://www.garlic.com/~lynn/aadsm20.htm#22 ID "theft" -- so what
https://www.garlic.com/~lynn/aadsm20.htm#23 Online ID Thieves Exploit Lax ATM Security

Payment Tokens

From: Lynn Wheeler <lynn@xxxxxxxx>
Date: September 18, 2005 02:41 PM
Subject: Payment Tokens
MailingList: Financial Cryptography
ref:
https://www.financialcryptography.com/mt/archives/000552.html Payment Tokens

we've frequently pointed out that a high integrity token and the x9.59 protocol are privacy agnostic.
https://www.garlic.com/~lynn/subpubkey.html#privacy
https://www.garlic.com/~lynn/x959.html#x959

the issue of a bank mapping an account for identification is typically a government requirement as part of a know your customer mandate ... as opposed to an intrinsic characteristic of the token/protocol.

theoretically you could register for some offshore, anonymous bank account ... and there would be no difference between the x9.59 transaction when used with a bank account that was under know your customer mandates and a bank account that was strictly anonymous.

note also that the magstripe token implementations for stored-value, debit, and credit transactions all ride the same rails ... and it is possible to morph all three kinds of magstripe tokens into the same chip token and the same x9.59 transaction ... where the stored-value transactions can be truly anonymous ... while the debit/credit transactions might have accounts under gov. know your customer mandate.

It isn't strictly a characteristic of the x9.59 protocol or the token ... it is a characteristic of govs. having mandates for know your customer with respect to bank accounts.

one of the characteristics of x9.59 protocol is that it can carry a simple digital signature for authentication; where the verification of the digital signature implies something you have authentication ... aka the person originating the transaction has access to and use of a unique private key, uniquely contained in a specific hardware token/chip.

of course, all bets are off if digital signatures also require the appending of x.509 identity certificates ... which actually does morph even the simplest of authentication protocols into a heavy-weight identification protocol.

recent posts regarding confusing authentication and identification
https://www.garlic.com/~lynn/aadsm20.htm#14 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm21.htm#2 Another entry in the internet security hall of shame

Contactless payments and the security challenges

From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: [Clips] Contactless payments and the security challenges
Date: Sun, 18 Sep 2005 19:14:57 -0600
To: R.A. Hettinga <rah@xxxxxxxx>
CC: cryptography@xxxxxxxx
related ref:
https://www.garlic.com/~lynn/aadsm21.htm#11 Payment Tokens
https://www.garlic.com/~lynn/aadsm21.htm#12 Payment Tokens

there is an interesting sidelight involving x.509 identity certificates and the non-repudiation bit ... in the context of point of sale terminals for financial transactions.

fundamentally, PKIs, CAs, and digital certificates have a design point of addressing the opportunity for first time communication between strangers ... when the relying party has no prior information about the communicating stranger and/or has no ability to obtain such information by either online or other mechanisms (the "letters of credit" model from the sailing ship days).

the fundamental characteristic of digital signatures is something you have authentication ... i.e. the validation of the digital signature (with a public key) implies that the originator has access and use of the corresponding private key (the effect can be further enhanced by binding the private key to a unique hardware token).

the appending of an x.509 identity digital certificate to digitally signed transactions tends to turn a possibly straight-forward, simple authentication operation unnecessarily into a heavy-weight identification operation.

the other characteristic was the confusion of digital signatures with human signatures (possibly semantic confusion because both terms contain the word signature). in addition to x.509 identity certificates turning simple authentication operations into identification operations, supposedly if a certification authority had included the non-repudiation bit in the issued x.509 identity certificate ... not only did the operation unnecessarily become an identification operation ... the digital signature then became proof that the person had read, understood, agreed to, approved, and/or authorized what had been digitally signed. Eventually there was some realization that just because some certification authority had turned on the non-repudiation bit, it could hardly provide proof that at some possibly much later time (after the certification authority had issued the digital certificate), the person was reading, understanding, agreeing, approving and/or authorizing what had been digitally signed.

now an interesting situation comes about with point-of-sale terminals. the current equivalent of a human signature at POS terminals is when the terminal displays the amount of the transaction and asks the person to select the yes button ... aka the swiping of the card is an "authentication" operation ... the equivalent of the human signature or approval operation is the pressing of the "yes" button in response to the message (i.e. a specific human operation indicating agreement).

so applying an aads chip card doing x9.59 digital signature at point-of-sale,
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/x959.html#aads

the digital signature becomes something you have authentication ... not the agreement indication. the aads chip strawman had postulated being form-factor agnostic as well as interface agnostic
https://www.garlic.com/~lynn/aadsm2.htm#straw

from 3-factor authentication
https://www.garlic.com/~lynn/subintegrity.html#3factor
the additional requirement for pin/password (at POS) would make the operation two-factor authentication ... where the pin/password entry (something you know) is nominally a countermeasure to a lost/stolen token.

so it is possible to imagine a POS terminal that delays requesting entry of the pin/password until the amount of the transaction is displayed ... with the terminal requesting entry of the pin/password as confirmation of the transaction.

in this scenario, the result has the interesting aspect that the digital signature represents something you have authentication, while the entry of the pin/password not only represents part of two-factor authentication but also represents a human operation implying agreement (aka the implication of a human signature is understanding a message plus some human response to that message)
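
a hypothetical sketch of that flow (python; the ChipCard class, the shared issuer key, and the pin handling are all illustrative stand-ins rather than any real POS or payment-network API ... a real aads-style card would produce an asymmetric digital signature, where the keyed mac here just keeps the sketch self-contained):

import hmac, hashlib

ISSUER_KEY = b"shared-card-secret"       # issuer's copy of the card key (stand-in)

class ChipCard:
    # stand-in for a chip card: holds the key (the "something you have")
    def __init__(self, key):
        self._key = key
    def sign(self, message):
        return hmac.new(self._key, message, hashlib.sha256).digest()

def issuer_authorize(message, signature, entered_pin, pin_onfile):
    signature_ok = hmac.compare_digest(
        hmac.new(ISSUER_KEY, message, hashlib.sha256).digest(), signature)
    pin_ok = hmac.compare_digest(entered_pin, pin_onfile)
    return signature_ok and pin_ok       # two-factor: have + know

def pos_transaction(card, amount_cents, read_pin):
    message = ("pay %d cents to EXAMPLE MERCHANT" % amount_cents).encode()
    signature = card.sign(message)       # authentication only, not agreement

    # agreement step: display the amount FIRST, then request the pin ...
    # the pin entry (something you know) is simultaneously the second
    # authentication factor and the specific human action indicating
    # agreement with the displayed amount
    print("approve payment of $%.2f? enter PIN to confirm" % (amount_cents / 100.0))
    entered_pin = read_pin()
    return issuer_authorize(message, signature, entered_pin, "1234")

# the lambda stands in for the terminal pin pad
print(pos_transaction(ChipCard(b"shared-card-secret"), 250, lambda: "1234"))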

it is rather difficult to turn digital signatures into human signatures ... because human signatures require the implication of human interaction. a digital signature is something generated by a computer process that is frequently totally missing any human effort (also the subject of some of the dual-use attacks). however, the entry of a pin/password ... a human hitting a specific sequence of keys in response to a message requesting agreement ... can meet the standard for implying agreement/response ... especially when terminals with certified operational characteristics are involved.

misc. collected posts on human & "e" signatures
https://www.garlic.com/~lynn/subpubkey.html#signature

confusing authentication and identification
https://www.garlic.com/~lynn/aadsm20.htm#14 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm21.htm#2 Another entry in the internet security hall of shame

recent post referencing dual-use attack
https://www.garlic.com/~lynn/aadsm21.htm#5 Is there any future for smartcards

Online fraud 'ahead' of credit-card companies-experts

From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Online fraud 'ahead' of credit-card companies-experts
Date: Tue, 20 Sep 2005 00:11:56 -0600
To: cryptography@xxxxxxxx

http://news.yahoo.com/s/nm/20050919/wr_nm/financial_creditcard_fraud_dc;_ylt=AlItQtA0cAs1.5FbhmH_orX6VbIF;_ylu=X3oDMTBiMW04NW9mBHNlYwMlJVRPUCUl

Online fraud 'ahead' of credit-card companies-experts

Speaking at a conference here, John Shaughnessy, senior vice president for fraud prevention at Visa USA, and Suzanne Lynch, vice president for security and risk services at MasterCard International, said that organized crime rings ....


... snip ...

The picture they presented of an escalating struggle between commerce and criminality offered little hope of quick relief for consumers worried about identity theft or for investors in card-issuing banks concerned about security's escalating costs.

... snip ...

[Clips] Contactless payments and the security challenges

From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: [Clips] Contactless payments and the security challenges
Date: Tue, 20 Sep 2005 15:54:41 -0600
To: Alexander Klimov <alserkli@xxxxxxxx>
CC: cryptography@xxxxxxxx
Alexander Klimov wrote:
Since the phone has an LCD and a keyboard it is possible to display ''Do you want to pay $2 to ABC, Inc. ?'' and authorize the transaction only if the user presses OK (larger transactions may require the PIN). An additional benefit is that it is your own card accepting device and thus the risk that the PIN is keyloggered is lower (of course, this is only as far as mobiles are more secure than usual windows pc).

a couple of articles about putting a switch on RFID/contactless payment cards that has to be depressed for the card to be active (somewhat cutting down on some of the covert skimming attacks)

Switching Off Credit Card Fraud
http://www.rfidjournal.com/article/articleview/1843/1/128/

Switching Off May Reduce Contactless Card Fraud
http://www.epaynews.com/index.cgi?survey=&ref=browse&f=view&id=1126873239622215212&block=

PKI too confusing to prevent phishing, part 28

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: PKI too confusing to prevent phishing, part 28
Date: Mon, 26 Sep 2005 09:55:43 -0600
To: Jerrold Leichter <leichter@xxxxxxxx>
CC: Paul Hoffman <paul.hoffman@xxxxxxxx>,  cryptography@xxxxxxxx
Jerrold Leichter wrote:
Talking about users as being able only to hold one bit continues an unfortunate attitude that, if only users weren't so dumb/careless/whatever, we wouldn't have all these security problems. Between the hundreds of CA's that browsers are shipped with - all allegedly trustworthy; the sites whose certs don't match their host names; the random links that appear to be within one site but go off to others with no relationship that anyone can discern to the original; the allegedly-secure sites that don't use https until you log in; all the messages telling you to ignore security warnings; and now the growing number of sites that use self-signed certificates ... as far as I'm concerned, SSL for browsers has gotten to the point where one could legitimately argue that it's *bad* for security, because it leads people to believe they have a secure connection when very often they don't. Perhaps if they realized just how insecure the whole structure really is these days, there would be some pressure - in the form of even more people voting with their feet and refusing to participate - to actually get this right.

there used to be these discussions (arguments?) during the 70s about barriers to entry for getting people to use computers ... that it would be asking too much of them to learn to type. the counter argument was that (at the time) computer use was about as costly as a car ... and learning to drive a car was a lot more difficult than learning to type ... and serious computer users tended to spend a lot more time at the keyboard than driving their automobile.

the current scenario was that some number of people wanted to take some fundamental networking technology and push it into the consumer market and possibly make a lot of money off it. The core networking technology was never designed for an adversary/hostile use environment and much of the direct user interface technology had been designed for a non-network environment. You open up the networking environment to adversary/hostile users, you attach boxes that weren't originally designed for a networked environment ... and you flog it in the consumer market place.

It is a little like the early days of automobiles ... where if they wanted to sell more they had to move from technically skilled drivers to everybody (but instead of taking several decades to make the transition, it has been approx. ten years). This somewhat gets into my theme that the current state is analogous to automobiles not built with safety glass, bumpers, crush zones, seat belts, etc. (no safety features at all) .... they currently are all available as after-market items for consumers that want to apply them.

The additional analogy is that the roads don't have any driving conventions, guard rails, generally accepted street signs/light convention, etc.

The other issue is that many operations/transactions can be done with relatively easily obtainable information .... the infrastructures in use weren't designed with serious countermeasures to really serious, widely occurring threats. Partially, much of the infrastructure is wide open to what are effectively replay attacks ... a lot could be done if there were countermeasures to the use of information that is easily harvested/phished.
https://www.garlic.com/~lynn/subintegrity.html#harvest

Telling people not to give out such information is somewhat analogous to telling people that they shouldn't drive off the side of the cliff at night ... it eventually becomes easier to put up guard rails with highly reflective coatings, as well as requiring standardized bumpers and headlights (give up on the futile attempts to keep the information confidential ... change the paradigm so that the information is no longer useful for fraudulent transactions).

On a more micro-level ... the flogging of PKIs for straight-forward authentication purposes has contributed to the problem. I've claimed that the PKI design point is the "letters of credit" (or introduction) from the sailing ship days ... initial introduction between complete strangers (aka heavy-weight identification). So it possibly can also be considered the carpenter with a hammer ... to whom everything starts looking like a nail.

continuity of identity

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: continuity of identity
Date: Wed, 28 Sep 2005 09:01:37 -0600
To: Trevor Perrin <trevp@xxxxxxxx>
CC: John Denker <jsd@xxxxxxxx>, cryptography@xxxxxxxx
Trevor Perrin wrote:
One pragmatic issue is that it would be nice if you could form "continuity of identity" bindings to existing 3rd-party-managed identities as well as self-managed identities. If the client records an identity as something like (CA cert, domain name), then this identity would remain stable across end-entity key and cert changes regardless of whether the CA cert is self-managed or belongs to Verisign. Tyler Close's Petname Toolbar [4] is an excellent implementation of this concept.

note this verges on my theme of confusing authentication and identification. one of my examples is the opening of an off-shore anonymous bank account and providing some material for use in authentication ... say tearing a dollar bill in half and leaving one half on file ... to be matched with the other half in the future.

registration of public key can provide continuity of authentication ... that the current entity is the same as the original entity ... and any issue of identity is orthogonal ... aka the registration of public key for authentication of continuity is orthogonal to the issue of whether there is any associated identification information.
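
a minimal sketch of continuity of authentication via a registered public key (python, assuming the third-party "cryptography" package; the account numbering and names are illustrative) ... note that no identification information appears anywhere:

import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

accounts = {}                         # account number -> registered public key

def open_anonymous_account(public_key):
    acct = os.urandom(8).hex()        # no identity information collected
    accounts[acct] = public_key
    return acct

def same_entity(acct, challenge, signature):
    # continuity of authentication: success only says "same entity that
    # opened the account" ... identification is completely orthogonal
    try:
        accounts[acct].verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

owner = ec.generate_private_key(ec.SECP256R1())
acct = open_anonymous_account(owner.public_key())
challenge = os.urandom(16)            # bank's fresh challenge
print(same_entity(acct, challenge,
                  owner.sign(challenge, ec.ECDSA(hashes.SHA256()))))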

this is somewhat one of the holes that x.509 identity digital certificates dug for themselves ... effectively starting out with the premise that the most trivial of authentication operations was mandated to be turned into a heavy weight identification operation.

of course it is possibly one of those established nomenclature convention things ... that the popular convention now has references to identity and identification so terribly confused with even trivial authentication operations ... that it may be impossible to unwind the damage done.

'Virtual Card' Offers Online Security Blanket

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: 'Virtual Card' Offers Online Security Blanket
Date: Sat, 01 Oct 2005 09:29:06 -0600
To: cryptography@xxxxxxxx

http://www.washingtonpost.com/wp-dyn/content/article/2005/09/30/AR2005093001679.html
Offered to holders of Citi, Discover and MBNA cards, these "virtual credit cards," or single-use card numbers, are designed to give some peace of mind to consumers concerned about credit card fraud.

... snip ...

when we were doing x9.59 ... some observed that during any transition period, groups of people would require two account numbers, and claimed there weren't enough available numbers in the account number space to support that. the issue is that x9.59 has a business requirement that account numbers used in x9.59 transactions can only be used in strongly authenticated transactions ... and can't be used in other kinds of transactions. x9.59 account numbers obtained through skimming, phishing, data breaches, etc ... then can't be turned around and used in ordinary transactions that aren't strongly authenticated
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#x959

in any case, single-use account numbers also address the issue of re-use of account numbers from skimming and data breaches (i.e. places where they might normally be obtained because of the wide-spread requirement for access to account numbers by normal business practices). they are less effective against phishing attacks, which may involve as-yet unused account numbers. in any case, single-use account numbers could be considered a much more profligate use of the account number space ... than x9.59.

recent post somewhat related to security proportional to risk
https://www.garlic.com/~lynn/2005r.html#7 DDJ Article on "Secure" Dongle
and long standing example
https://www.garlic.com/~lynn/2001h.html#61

aka there are various scenarios that effectively only need knowledge of the account number to perform fraudulent transactions ... and the account number has to be widely and readily available because of its use in a broad range of standard business processes. because of the broad range of business processes requiring availability of the account number ... it is difficult to secure it using "privacy" or "confidentiality" ... aka from security PAIN acronym
P ... privacy
A ... authentication
I ... integrity
N ... non-repudiation


application of cryptography technology for privacy/confidentiality security isn't a very effective solution because of the wide-spread requirement for account number availability in numerous business processes. the x9.59 approach was to apply authentication security (in lieu of privacy security) as a solution to fraudulent mis-use of account numbers obtained from skimming, phishing, data breaches, etc.
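
a minimal sketch of that authentication-in-lieu-of-privacy approach (python, assuming the third-party "cryptography" package; the account number, message layout and names are illustrative, not the actual x9.59 formats):

from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

owner = ec.generate_private_key(ec.SECP256R1())
# account flagged for x9.59-style use: onfile public key, plus the
# business rule that only strongly authenticated transactions execute
registered = {"4111222233334444": owner.public_key()}

def authorize(account_number, transaction, signature):
    public_key = registered.get(account_number)
    if public_key is None or signature is None:
        return False                  # business rule: no unsigned use, ever
    try:
        public_key.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

txn = b"debit 4111222233334444 $29.95 merchant=EXAMPLE"
good = owner.sign(txn, ec.ECDSA(hashes.SHA256()))
print(authorize("4111222233334444", txn, good))   # True: signed by owner
print(authorize("4111222233334444", txn, None))   # False: harvested number alone is useless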

mixing authentication and identification?

From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: mixing authentication and identification?
Date: Thu, 13 Oct 2005 23:32:01 -0600
To: cryptography@xxxxxxxx
FFIEC Guidance; Authentication in an Internet Banking Environment
http://www.fdic.gov/news/news/financial/2005/fil10305.html

The Federal Financial Institutions Examination Council (FFIEC) has issued the attached guidance, "Authentication in an Internet Banking Environment". For banks offering Internet-based financial services, the guidance describes enhanced authentication methods that regulators expect banks to use when authenticating the identity of customers using the on-line products and services. Examiners will review this area to determine a financial institution's progress in complying with this guidance during upcoming examinations. Financial Institutions will be expected to achieve compliance with the guidance no later than year-end 2006.

... snip ...

and some comments from some other thread:
https://www.garlic.com/~lynn/2005r.html#54 NEW USA FFIES Guidance

and ...

Uncle Sam Comes Knocking
http://www.banktechnews.com/article.html?id=20051003PV1B6000

The U.S. government's interest in having banks help it form networks of federated identity, thus allowing vetted, shared user-credentials between government and private industry, is obvious. So far, only four banks-Wells Fargo, Wachovia, National City and an unidentified bank-have signed on, though more are expected to join Uncle Sam's E-Authentication Federation. The looming question: What's in it for banks?

... snip ...

Some thoughts on high-assurance certificates

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: Some thoughts on high-assurance certificates
Date: Mon, 31 Oct 2005 10:13:26 -0700
To: Peter Gutmann <pgut001@xxxxxxxx>
CC: cryptography@xxxxxxxx
Peter Gutmann wrote:
And therein lies the problem. The companies providing the certificates are in the business of customer service, not of running FBI-style special background investigations that provide a high degree of assurance but cost $50K each and take six months to complete. The same race to the bottom that's given us unencrypted banking site logons and $9.95 certificates is also going to hit "high-assurance" certificates, with companies improving customer service and cutting customer costs by eliminating the (to them and to the customer) pointless steps that only result in extra overhead and costs. How long before users can get $9.95 pre-approved high-assurance certificates, and the race starts all over again?

when we were doing this stuff for the original payment gateway ...
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

we had to also go around and audit some number of these relatively (at the time) brand new organizations called certification authorities ... issuing these things called digital certificates.

we listed a large number of things that a high assurance business service needed to achieve (aka explaining that the certification authority business was mostly a business service operation). at the time, several commented that they were starting to realize that ... it wasn't a technically oriented local garage type operation ... but almost totally administrative, bookkeeping, filing, service calls ... etc (and from an operational standpoint nearly zero technical content). most of them even raised the subject of being able to outsource their actual operations.

the other point ... was that the actual design point for digital certificates ... was the providing of certified information to offline relying parties ... i.e. relying parties that had no means of directly accessing their own copy of the certified information ... and/or were in an offline environment and could not perform timely access to the authoritative agency responsible for the certified information.

as the online infrastructure became more and more pervasive ... the stale, static digital certificates were becoming more & more redundant, superfluous and useless. in that transition, there was some refocus by certification authorities from the offline market segment of relying parties (which was rapidly disappearing as the online internet became more and more pervasive) to the no-value relying party market segment ... aka those operations that couldn't cost justify having their own copy of the certified information AND couldn't cost justify performing timely, online operations (directly contacting the authoritative agency responsible for the certified information). even this no-value market segment began to rapidly shrink as the IT cost of maintaining their own information and the telecom cost of doing online transactions both rapidly declined.

while the attribute of "high-assurance" can be viewed as a good thing ... the issue of applying it to a paradigm that was designed to supply a solution for an offline environment becomes questionable in a world that is rapidly becoming online, all-the-time.

it makes even less sense for those that have migrated to the no-value market segment ... where the parties involved aren't likely to cost justify online solutions ... and aren't likely to find that they can justify the costs associated with supporting a high-assurance business operation.

part of the issue here is the possible confusion of the business process of certifying information and the digital certificate business operation targeted at representing that certified information for relying parties operating in an offline environment .... and unable to perform timely operations to directly access the information.

this can possibly be seen in some of the mid-90s operations that attempted to draw a correlation between x.509 identity digital certificates and drivers licenses ... where both were targeted as needing sufficient information for relying parties to perform operations ... solely relying on information totally obtained from the document (physical driver's license or x.509 identity digital certificate). there was some migration away from using the driver's license as an analogy for x.509 identity digital certificates ... as the majority of the important driver's license relying operations migrated to real-time, online transactions. a public official might use the number on the driver's license purely as part of a real-time online transaction ... retrieving all the actual information ... and not needing to rely on the information contained in the driver's license at all. it was only for the relatively no-value operations that the information in the physical driver's license continued to have meaning. any events involving real value were all quickly migrating to online, real-time transactions.
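
a minimal sketch of that real-time, online pattern (python; the registry contents and field names are invented for illustration):

# the authoritative registry record for a license number (contents invented)
authoritative_registry = {
    "D1234567": {"status": "valid", "class": "C", "expires": "2007-06-30"},
}

def check(license_number, printed_fields):
    # printed_fields is deliberately ignored: for any operation of real
    # value the relying party keys a real-time transaction off the
    # number and relies on the returned record, which may be fresher and
    # richer (e.g. real-time aggregated information) than the document
    return authoritative_registry.get(license_number)

print(check("D1234567", {"name": "whatever the card happens to say"}))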

misc. collected postings on assurance
https://www.garlic.com/~lynn/subintegrity.html#assurance

Some thoughts on high-assurance certificates

From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: Some thoughts on high-assurance certificates
Date: Tue, 01 Nov 2005 15:07:54 -0700
To: Ed Reed <ereed@xxxxxxxx>
CC: Peter Gutmann <pgut001@xxxxxxxx>,  cryptography@xxxxxxxx
Ed Reed wrote:
Getting PKI baked into the every day representations people routinely manage seems desirable and necessary to me. The pricing model that has precluded that in the past (you need a separate PKi certificate for each INSURANCE policy?) is finally melting away. We may be ready to watch the maturation of the industry.

as part of some work on cal. & fed. e-signature legislation ...
https://www.garlic.com/~lynn/subpubkey.html#signature
one of the industry groups involved was the insurance industry. rather than PKI certificates, there was some look at real-time, online transactions ... where the liability was calculated on the basis of each individual transaction.

The PKI certification model is somewhat a paradigm of the offline letters of credit scenario from the sailing ship days. in the modern world ... that somewhat translates to the certificate being issued for a period of time ... possibly one year ... and theoretically covering all operations that might occur during that year ... regardless of the number of operations involved during that period ... where each operation carries liability. in the online scenario ... rather than having a stale, static certificate carrying implied liability for the period of time, independent of the number of operations ... each individual operation is a separate liability operation.

one could imagine insurance on a large tanker for a period of a year with regard to sinking. the translation to an electronic world ... would be that the tanker would have an arbitrary number of sailings ... and could sink on each sailing ... and having sunk on a previous sailing ... wouldn't prevent it from its next assignment and sinking again.

the "insurance" in the credit card industry is that there is an online operation for each transaction ... and each transaction involves the merchant being charged a value proportional the transaction value. the liability is taken on each online transaction ... rather than for a period of time ... regardless of the number or magnitude of the transactions.

this is somewhat with respect to my previous reply
https://www.garlic.com/~lynn/aadsm21.htm#20 Some thoughts on high-assurance certificates
that the certification and the assurance of the certification can be independent of the way that certification is represented. in the past, for the offline world ... having a stale, static certificate representing that certification was useful ... because there was no way of obtaining real-time, online certification information. with ubiquitous online availability, there has been more and more transition to real-time, online certification representation, especially as the values involved increase (frequently the real-time, online certification representation can involve higher quality and/or more complex information ... like real-time aggregated information ... which is rather difficult with a stale, static representation created at some point in the past)

Broken SSL domain name trust model

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Broken SSL domain name trust model
Date: Mon, 28 Nov 2005 04:39:10 -0700
To: cryptography@xxxxxxxx
so this is (another in a long series of) posts about the SSL domain name trust model
https://www.garlic.com/~lynn/2005t.html#34

basically, there was supposed to be a binding between the URL the user typed in, the domain name in the URL, the domain name in the digital certificate, the public key in the digital certificate and something that certification authorities do. this has gotten terribly obfuscated and loses much of its security value because users rarely deal directly in actual URLs anymore (so the whole rest of the trust chain becomes significantly depreciated).

the contrast is the PGP model where there is still a direct relationship between the certification the user does to load a public key into their trusted public key repository, the displayed FROM email address and the looking up of a public key using the displayed FROM email address.

the issue isn't so much the PGP trust model vis-a-vis the PKI trust model ... it is the obfuscation of the PKI trust model for URL domain names because of the obfuscation of URLs.

so one way to restore some meaning in a digital signature trust model is to marry some form of browser bookmarks and a PGP-style trusted public key repository. these trusted bookmarks contain some identifier, a url and a public key. the user has had to do something specific regarding the initial binding between the identifier, the url and the public key. so such trusted bookmarks might work as follows (a small sketch follows the list):

1) user clicks on the bookmark, and a pseudo SSL/TLS is initiated immediately by transmitting the random session key encrypted with the registered public key. this process might possibly be able to take advantage of any registered public keys available from security enhancements to the domain name infrastructure

2) user clicks on something in the web page (icon, thumbnail, text, etc). this is used to select a bookmark entry ... and then proceeds as in #1 above (rather than used directly in conjunction with a URL and certificate that may be supplied by an attacker).
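
a minimal sketch of such a trusted bookmark entry and click flow (python; the urls, keys and session-key sealing are schematic stand-ins ... a real implementation would run a complete key-agreement protocol against the registered public key):

import os

# each entry binds an identifier, a url and a public key that the user
# specifically verified once, at registration time
trusted_bookmarks = {
    "my bank": {"url": "https://bank.example.com",
                "public_key": b"...public key verified at registration..."},
}

def seal_session_key(public_key, session_key):
    # stand-in: a real implementation would encrypt the session key
    # under the registered public key (as in #1 above)
    return b"<" + session_key.hex().encode() + b" sealed to " + public_key[:10] + b">"

def click(identifier):
    entry = trusted_bookmarks[identifier]
    session_key = os.urandom(32)
    # the connection starts from the stored entry ... nothing supplied
    # by a (possibly spoofed) remote page participates in the trust
    # decision, and any server-supplied certificate is never consulted
    return entry["url"], seal_session_key(entry["public_key"], session_key)

print(click("my bank"))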

there are other proposals that try and collapse the obfuscation between what users see on webpages and the actual security processes (trying to provide a more meaningful direct binding between what the user sees/does and any authentication mechanism) ... but most of them try and invent brand new authentication technologies for the process.

digital signatures and public keys are perfectly valid authentication technologies .... but unfortunately have gotten terribly bound up in the certification authority business processes. the issue here is to take perfectly valid digital signature authentication process ... and create a much more meaningful trust binding for the end-user (not limited to solely the existing certification authority and digital certificate business models).

the issue in #2 is that the original electronic commerce trust process was that the URL initially provided by the user (typed or other means) started the trust process and avoided spoofed e-commerce websites. one of the problems has been that SSL security has so much overhead that e-commerce sites started reserving it just for the payment operation. as a result, users didn't actually encounter SSL until they hit the checkout/pay button. unfortunately, if you were already at a spoofed site, the checkout/pay button would have the attacker providing the actual URL, the domain name in that URL, and the SSL domain name certificate.

so the challenge is to drastically reduce the obfuscation in the existing process ... either by providing a direct mechanism under user control for getting to secure websites or by doing something that revalidates things once a user is at a supposedly secure website.

the issue is that if users start doing any pre-validation step and storing the results ... possibly something like secure bookmarks ... then it becomes fairly straight-forward to store any related digital certificates along with the bookmark entry. if that happens, then it becomes obvious that the only thing really needed is the binding the user has done between the public key in the digital certificate and the bookmark entry. at that point, it starts to become clear that such digital certificates aren't providing a lot of value (being made redundant and superfluous by the trust verification that the user has done regarding the various pieces of data in the entry).

in effect, the PKI model is based on premise that it is a substitute where the relying party isn't able to perform any trust validation/operations themselves (i.e. the letters of credit/introduction model from the sailing ship days). when the relying parties have to go to any of their own trust operations, then there is less reliance and less value in the trust operations performed by certification authorities.

Broken SSL domain name trust model

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: Broken SSL domain name trust model
Date: Thu, 01 Dec 2005 08:40:36 -0700
To: leichter_jerrold@xxxxxxxx
CC: cryptography@xxxxxxxx
ref:
https://www.garlic.com/~lynn/aadsm21.htm#22 Broken SSL domain name trust model

leichter_jerrold@xxxxxxxx wrote:
One can look at this in more general terms. For validation to mean anything, what's validated has to be the semantically meaningful data - not some incidental aspect of the transaction. The SSL model was based on the assumption that the URL was semantically meaningful, and further that any other semantically meaningful data was irreversibly bound to it, so that if the URL were valid, anything you read using that URL could also be assumed to be equally valid.

This fails today in (at least) two different ways. First, as you point out, URL's are simply not semantically meaningful any more. They are way too complex, and they're used in ways nothing like what was envisioned when SSL was designed. In another dimension, things like cache poisoning attacks lead to a situation in which, even if the URL is valid, the information you actually get when you try to use it may not be the information that was thought to be irreversibly bound to it.

Perhaps the right thing to do is to go back to basics. First off, there's your observation that for payment systems, certificates have become a solution in search of a problem: If you can assume you have on-line access - and today you can - then a certificate adds nothing but overhead.

The SSL certificate model is, I contend, getting to pretty much the same state. Who cares if you can validate a signature using entirely off-line data? You have to be on-line to have any need to do such a validation, and you form so many connections to so many sites that another one to do a validation would be lost in the noise anyway.

Imagine an entirely different model. First off, we separate encryption from authentication. Many pages have absolutely no need for encryption anyway. Deliver them in the clear. To validate them, do a secure hash, and look up the secure hash in an on-line registry which returns to you the "registered owner" of that page. Consider the page valid if the registered owner is who it ought to be. What's a registered owner? It could be the URL (which you never have to see - the software will take care of that). It could be a company name, which you *do* see: Use a Trustbar-like mechanism in which the company name appears as metadata which can be (a) checked against the registry; (b) displayed in some non-alterable form.

The registry can also provide the public key of the registered owner, for use if you need to establish an encrypted session. Also, for dynamically created pages - which can't be checked in the registry - you can use the public key to send a signed hash value along with a page.

Notice that a phisher can exactly duplicate a page on his own site, and it may well end up being considered valid - but he can't change the links, and he can't change the public key. So all he's done is provide another way to get to the legitimate site.

The hash registries now obviously play a central role. However, there are a relatively small number of them and this is all they do. So the SSL model should work well for them: They can be *designed* to match the original model.


this can basically be considered a form of extended DNS providing additional authentication ... secure DNS is one such proposal for a repository of public keys ... but the DNS model of an online information repository can be used for a variety of information.

this is also my oft repeated scenario of the ssl domain name certification authorities needing secure DNS ... because when processing an SSL domain name certificate request ... they have to check with the domain name infrastructure as to the true owner of the domain name. this currently is an expensive, time-consuming and error-prone process of matching identification supplied with the request against identification information on file with the domain name infrastructure. on the other hand, if public keys of domain name owners were on file with the domain name infrastructure ... the domain name infrastructure could use digitally signed communication (validated with the onfile public keys) to eliminate some of its existing integrity problems (which, in turn, improves the integrity of any ssl domain name certificate based on information at the domain name infrastructure registry). the registered public keys also allow the certification authorities to turn the expensive, time-consuming and error-prone identification process into a much less expensive, simpler, and more reliable authentication process ... by requiring ssl domain name certificate requests to be digitally signed by the domain name owner (and validated with the onfile public keys).
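
a minimal sketch of that authentication process (python, assuming the third-party "cryptography" package; the onfile-key table and request format are illustrative assumptions about such an infrastructure):

from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

domain_owner = ec.generate_private_key(ec.SECP256R1())
# public keys onfile with the domain name infrastructure
dns_onfile = {"example.com": domain_owner.public_key()}

def validate_cert_request(domain, request, signature):
    # authentication instead of identification: the certification
    # authority just checks the digitally signed request against the
    # key onfile for the domain ... no matching of identification
    # paperwork required
    public_key = dns_onfile.get(domain)
    if public_key is None:
        return False
    try:
        public_key.verify(signature, request, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

request = b"please certify: CN=example.com"
sig = domain_owner.sign(request, ec.ECDSA(hashes.SHA256()))
print(validate_cert_request("example.com", request, sig))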

misc. collected posts on ssl certificates:
https://www.garlic.com/~lynn/subpubkey.html#sslcert

Broken SSL domain name trust model

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Broken SSL domain name trust model
Date: Thu, 01 Dec 2005 23:01:31 -0700
To: leichter_jerrold@xxxxxxxx
CC: cryptography@xxxxxxxx
leichter_jerrold@xxxxxxxx wrote:
One can look at this in more general terms. For validation to mean anything, what's validated has to be the semantically meaningful data - not some incidental aspect of the transaction. The SSL model was based on the assumption that the URL was semantically meaningful, and further that any other semantically meaningful data was irreversibly bound to it, so that if the URL were valid, anything you read using that URL could also be assumed to be equally valid.


note that the other possible semantic confusion is referring to them as certificate authorities ... rather than certification authorities.

they happen to distribute certificates, which are a representation of the certification. however, there are some number of certification authorities that

1) aren't the actual authoritative agency for the information being certified i.e. the certification authority is just checking with the real authoritative agency as to the validity of the information

2) many appear to actually prefer to just do certificate manufacturing ... a term we coined when we were doing audits of these new organizations called certification authorities ... back when we were consulting with the new client/server startup on something that has come to be called electronic commerce.

of course the issue has always been that if you can do real-time, online certification, it is of no lower value than a stale, static, offline-oriented certificate. the business model tends to be further aggravated by the fact that most of the certification authorities aren't actually the authoritative agency for the information being certified. it is highly likely that as online connectivity becomes more and more pervasive ... people will start to realize the much higher value of having real-time, online certification. since the majority of the certification authorities aren't actually the authoritative agency for the actual information, any transition to high-value, real-time, online certification will tend to be done directly with the authoritative agency responsible for the actual information. at that point, most of the certification authorities become obsolete.

an obfuscation is to concentrate on the certificates as having magical properties, distinct from their representation of an information-certifying business process. referring to them as certificate authorities helps create semantic confusion as to where the business process value actually exists. similarly, there have been articles in the popular press referring to attached digital certificates as what provides the value to a digitally signed message/document ... further obfuscating that the value of authentication can be achieved with digital signatures and online registered public keys (where any digital certificates become totally redundant and superfluous).

the other problem/issue with requiring x.509 identity certificates on every digitally signed message/document .... is that it turns what should be a straight-forward, simple authentication operation into a heavy duty identification operation.

this has also tended to cause semantic confusion as well as something of a schizo personality in some societies; especially those professing extremely stringent privacy principles and at the same time trying to mandate x.509 identity certificates attached to every electronic communication (making every electronic message an identification operation).

misc. past posts referring to semantic confusion:
https://www.garlic.com/~lynn/aadsm3.htm#kiss5 Common misconceptions, was Re: KISS for PKIX. (Was: RE: ASN.1 vs XML (used to be RE: I-D ACTION :draft-ietf-pkix-scvp- 00.txt))
https://www.garlic.com/~lynn/aepay11.htm#53 Authentication white paper
https://www.garlic.com/~lynn/aadsm12.htm#30 Employee Certificates - Security Issues
https://www.garlic.com/~lynn/aadsm13.htm#16 A challenge
https://www.garlic.com/~lynn/aadsm15.htm#36 VS: On-line signature standards
https://www.garlic.com/~lynn/aadsm19.htm#7 JIE - Contracts in Cyberspace
https://www.garlic.com/~lynn/aadsm19.htm#24 Citibank discloses private information to improve security
https://www.garlic.com/~lynn/aadsm19.htm#25 Digital signatures have a big problem with meaning
https://www.garlic.com/~lynn/aadsm20.htm#8 UK EU presidency aims for Europe-wide biometric ID card
https://www.garlic.com/~lynn/aadsm20.htm#44 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/aadsm21.htm#13 Contactless payments and the security challenges
https://www.garlic.com/~lynn/2003k.html#6 Security models
https://www.garlic.com/~lynn/2004i.html#27 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2005f.html#20 Some questions on smart cards (Software licensing using smart cards)
https://www.garlic.com/~lynn/2005m.html#11 Question about authentication protocols
https://www.garlic.com/~lynn/2005n.html#51 IPSEC and user vs machine authentication
https://www.garlic.com/~lynn/2005o.html#42 Catch22. If you cannot legally be forced to sign a document etc - Tax Declaration etc etc etc
https://www.garlic.com/~lynn/2005q.html#4 winscape?
https://www.garlic.com/~lynn/2005r.html#54 NEW USA FFIES Guidance

Broken SSL domain name trust model

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: Broken SSL domain name trust model
Date: Sun, 04 Dec 2005 12:12:33 -0700
To: Ian G <iang@xxxxxxxx>
CC: cryptography@xxxxxxxx
Ian G wrote:
This proposal is more or less similar to that of the 'secure bookmarks' of Marc Stiegler which is essentially a nice metaphor for the YURLs of Tyler Close which is essentially a packaging of capabilities in URLs from the entire school of POLA/caps.

ref;
https://www.garlic.com/~lynn/aadsm21.htm#22 Broken SSL domain name trust model
https://www.garlic.com/~lynn/aadsm21.htm#23 Broken SSL domain name trust model
https://www.garlic.com/~lynn/aadsm21.htm#24 Broken SSL domain name trust model

for a minor historical note & topic drift ... eros is something of a follow-on to keykos ... which is a rename/spin-off of gnosis when m/d bought tymshare. i was brought in to do the gnosis evaluation as part of the spin-off process. (i even have some old gnosis paper documents someplace)

eros
http://www.eros-os.org/

has now moved on to:
http://www.capros.org/
http://www.coyotos.org/

and some keykos reference
http://www.agorics.com/Library/keykosindex.html
http://www.agorics.com/Library/KeyKos/keysafe/Security.html

X.509 / PKI, PGP, and IBE Secure Email Technologies

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: X.509 / PKI, PGP, and IBE Secure Email Technologies
Date: Wed, 07 Dec 2005 13:34:58 -0700
To: <aramperez@xxxxxxxx>
CC: cryptography <cryptography@xxxxxxxx>
Aram wrote:
I'm sorry James, but you can't expect a (several hundred dollar) rowboat to resist the same probable storm as a (million dollar) yacht. There is no such thing as "one-size encryption system fits all cases".

unfortunately there are more than a few counter-examples that are made enormously complex (and extremely expensive), where it turns out that the complexity itself introduces additional failure and threat vulnerabilities which aren't found in KISS solutions.

nearly ten years ago, i joked that i was going to take a $500 milspec part, cost reduce it by two orders of magnitude and at the same time improve its security (in part by eliminating unnecessary features that contributed to security vulnerabilities).

X.509 / PKI, PGP, and IBE Secure Email Technologies

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X.509 / PKI, PGP, and IBE Secure Email Technologies
Date: Wed, 07 Dec 2005 14:45:14 -0700
To: Ed Gerck <edgerck@nma.com>
CC: cryptography@xxxxxxxx
Ed Gerck wrote:
Depends on your use. An X.509 identity cert or a PGP cert can be made as secure as you wish to pay for. The real question, however, that is addressed by the paper is how useful are they in terms of email security? How do you compare them and which one or which product to choose from? What are the trade-offs?

i've periodically written on security proportional to risk ... small sample
https://www.garlic.com/~lynn/2001h.html#61

then the issue is what security are you interested in and what are the threat models and corresponding countermeasures.

in the security PAIN model
P .. privacy
A .. authentication
I .. integrity
N .. non-repudiation


you may need authentication and integrity (say from digital signature) but not necessarily privacy/confidentiality.

in normal ongoing email, there is a lot of repeated stuff and/or out-of-band stuff ... that makes certificates redundant and superfluous ... they are targeted at the letters of credit/introduction paradigm from the sailing ship days. certificates basically are representations of some certifying process performed by a certification authority. the integrity and security of the certificate itself may have absolutely nothing to do with the integrity and security of the certification business process ... minor drift in sci.crypt
https://www.garlic.com/~lynn/2005u.html#9 PGP Lame question

furthermore, the whole complexity and series of processes involved in a PKI-based infrastructure may leave the certificates themselves totally redundant and superfluous, because the recipient has numerous other indicators that they know who it is they are dealing with. the introduction of PKI and certificates in such an environment may actually create greater vulnerabilities ... since it may convince the recipient to trust the PKI operation more than they trust their own direct knowledge ... and the PKI operation opens up more avenues of compromise for the attackers.

... there is even a slightly related article that i ran across yesterday:
An Invitation to Steal; The more you automate your critical business processes, the more vigilant you need to be about protecting against fraud
http://www.cio.com.au/index.php/id;1031341633;fp;4;fpid;18



the other issue in a digital signature based operation is that it is part of 3-factor authentication
https://www.garlic.com/~lynn/subintegrity.html#3factor
where the fundamental linchpin for the whole operation is the protection and confidentiality of the private key. unfortunately almost all digital signature operations tend to talk about the integrity and security of the PKI operation and its certificates ... when they should be talking about the integrity and security of the private keys and the integrity and security of the digital signing environment.

i've sporadically gone so far as to assert that the focus on the integrity and security of PKI and certificates results in obfuscating the fundamental integrity and security issues with private keys and digital signing environments (aka long before anybody is talking about the integrity of the certificates ... they should have resolved that the private keys are only available in hardware tokens of known and specific integrity characteristics).

The whole PKI and certificate operation has a design point of resolving first time interaction between complete strangers (as in the letters of credit/introduction paradigm from sailing ship days) and should come after the basic underlying infrastructure issues involving trusted communication between two entities have first been resolved (whether it is first time communication between complete strangers or not ... the stranger scenario might then be layered on top of a sound infrastructure that has its fundamental operations already resolved).

X.509 / PKI, PGP, and IBE Secure Email Technologies

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X.509 / PKI, PGP, and IBE Secure Email Technologies
Date: Thu, 08 Dec 2005 14:41:26 -0700
To: Ed Gerck <edgerck@nma.com>
CC: cryptography@metzdowd.com
Ed Gerck wrote:
Regarding PKI, the X.509 idea is not just to automate the process of reliance but to do so without introducing vulnerabilities in the threat model considered in the CPS.

ref:
https://www.garlic.com/~lynn/aadsm21.htm#27 X.509 / PKI, PGP, and IBE Secure Email Technologies

but that is one of the points of the referenced article (in the previous posting about "An Invitation to Steal"): as you automate more things, you have to be extra careful about introducing new vulnerabilities (of course a business operation will make claims that while they may have introduced enormous additional complexity and number of business processes ... they are all perfect and have no vulnerabilities).

the issue of public key email w/o PKI ... is that you have all the identical, same basic components that PKI also needs.

there is a local trusted public key repository and a method of getting keys into/out of that trusted public key repository. in the non-PKI case, the trusted public key repository contains public keys that are used to directly authenticate messages from other entities. in the PKI case, the trusted public key repository also contains public keys that are used to authenticate messages from a certification authority; these messages are called digital certificates. the digital certificates, in turn contain other public keys that can be used in authenticating messages from directly communicating entities.
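
a minimal sketch contrasting the two cases (python, assuming the third-party "cryptography" package; the "certificate" here is a deliberately simplified signed name+key blob, not real asn.1/x.509):

from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.exceptions import InvalidSignature

def pem(public_key):
    return public_key.public_bytes(serialization.Encoding.PEM,
                                   serialization.PublicFormat.SubjectPublicKeyInfo)

# non-PKI case: the repository directly holds keys of the entities
# communicated with
alice = ec.generate_private_key(ec.SECP256R1())
repository = {"alice@example.com": alice.public_key()}

# PKI case: the repository instead holds a certification authority key,
# and bob's key arrives indirectly, inside a "digital certificate"
ca = ec.generate_private_key(ec.SECP256R1())
repository["some-ca"] = ca.public_key()
bob = ec.generate_private_key(ec.SECP256R1())
cert_body = b"subject=bob@example.com;" + pem(bob.public_key())
certificate = (cert_body, ca.sign(cert_body, ec.ECDSA(hashes.SHA256())))

def key_via_certificate(cert):
    # the extra indirection PKI adds: first authenticate the CA's
    # message (the certificate), then extract the entity's key from it
    body, sig = cert
    repository["some-ca"].verify(sig, body, ec.ECDSA(hashes.SHA256()))
    return serialization.load_pem_public_key(body.split(b";", 1)[1])

msg = b"hello"
bob_pub = key_via_certificate(certificate)    # indirect (PKI) path
bob_pub.verify(bob.sign(msg, ec.ECDSA(hashes.SHA256())), msg,
               ec.ECDSA(hashes.SHA256()))
print("both paths end at the same kind of verify operation")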

the original PKI and digital certificate design point is the letters of credit/introduction (from the sailing ship days) ... addressing first time communication between two strangers.

a large volume of email doesn't involve first time communication between two strangers that have no prior relationship ... and so one possible question is whether the little or no added value a PKI operation provides for such communication can offset the drastically increased amount of complexity and increased number of business processes (which also contribute to a possibly enormous increase in potential for vulnerabilities).

PKI is trying to offer some added value in first time communication between two strangers (say the bulk mailing advertising industry) ... and it is possibly acceptable that the significant increase in business processes and complexity is justified by improving reliance in the bulk mailing advertising market segment. the question is whether the vast increase in business processes and complexity (with the possibility that the increased business processes and complexity also introduce significant new types of vulnerabilities) justifies its use in scenarios where first time communication between two strangers is not involved.

This is a business process analysis of what goes on in a basic public key email operation ... aka all the public key operations and the entity's trusted public key repository ... and then a showing of where PKI incrementally adds business processes and complexity to that basic infrastructure .... certification authority public keys added to the trusted public key repository, the new kind of messages called digital certificates, and the indirection between the certification authority's public key (in the entity's trusted public key repository) and the public keys of the other entities communicated with.

The additional digital certificate verification steps that a PKI operation adds to a core, fundamental public key email process (one that directly has access to the public keys of entities directly communicated with) ... also drag in the enormous amount of complexity and additional business processes that the certification authorities have to perform.

It is some of this other complexity and these business processes that may be attacked ... as in my oft repeated description of a crook attacking the authoritative agency that a certification authority uses as the basis of its certification, and then getting a perfectly valid certificate. The user (relying-party) may have a perfectly valid public key for an entity that they've communicated with for years .... but this perfectly valid certificate (from a crook) now claims that the user must automatically accept the crook's public key as also representing the same entity.

so a traditional risk/threat analysis ... would frequently analyze the basic components ... establish a baseline threat/vulnerability profile ... and then consider what happens when additional complexity is added to the baseline. I assert that a simple public key email operation can establish a baseline w/o any digital certificates ... and then you consider what happens when digital certificates are added to the baseline (which then also drags in all the business process vulnerabilities that may exist at the certification authority ... and all the dependencies that the certification authority has). we had to sort of look at this sort of stuff when we were asked to work with this small client/server startup that wanted to do payment transactions on their server
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

and we had to go around and audit some number of these relatively new business operations called certification authorities.

X.509 / PKI, PGP, and IBE Secure Email Technologies

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: X.509 / PKI, PGP, and IBE Secure Email Technologies
Date: Fri, 09 Dec 2005 14:53:02 -0700
To: Ed Gerck <edgerck@xxxxxxxx>
CC: cryptography@xxxxxxxx
Ed Gerck wrote:
I believe that's what I wrote above. This rather old point (known to the X.509 authors, as one can read in their documents) is why X.509 simplifies what it provides to the least possible _to_automate_ and puts all the local and human-based security decisions in the CPS. (The fact that the CPS is declared to be out of scope of X.509 is both a solution and a BIG problem as I mentioned previously.)

i like the explanation that some attempted to give at the acm sigmod conference in san jose (circa 1992) .... of what was going on in the x.5xx standards activities; ... a bunch of network engineers trying to re-invent 1960s database technology ...

misc. database related postings, mostly related to work on the original relational/sql implementation
https://www.garlic.com/~lynn/submain.html#systemr

and/or working on a scalable distributed lock manager
https://www.garlic.com/~lynn/95.html#13
https://www.garlic.com/~lynn/subtopic.html#hacmp

the x.509 digital certificate being a stale, static, cacheable entry of something in an x.500 ldap database ... armored for survival in a potentially hostile environment and for relying parties that didn't have the ability to access the real database entry.

a cps was something that was needed for trusted third party certification authority operations ... not for the x.509 identity certificate itself. the issue arises when you have these stale, static, cacheable, armored database entries that aren't part of an organization and business processes that the relying parties belong to. in traditional access to database entries (whether directly accessing the entry or a stale, static cached copy of the entry) ... the business processes accessing the data and the businesses responsible for the data are part of the same operation and/or belong to organizations that have binding contractual relationships.

it is only when the parties responsible for the information (trusted third party certification authorities) 1) are totally different from the parties relying on the information and/or 2) have no contractual relationship with the relying parties ... that something like a CPS comes into play.

one could hypothesize that the creation of the CPS was to provide some sort of substitute for a contractual relationship between different organizations/parties, where the relying party has no means of directly accessing the information and must rely on a stale, static digital certificate representation (of that information) provided by an organization with which the relying party has no contractual relationship (just claiming to be a trusted third party certification authority possibly wasn't enough of a sense of security for some relying parties, and so CPSs were invented to provide relying parties a higher sense of comfort in lieu of something like an actual contractual relationship).

that makes CPSs a substitute for contractual relationships when x.509 digital certificates are used by trusted third party certification authorities, where the relying parties and the TTP/CAs are different organizations (and have no contractual relationship).

X.509 / PKI, PGP, and IBE Secure Email Technologies

From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: X.509 / PKI, PGP, and IBE Secure Email Technologies
Date: Fri, 09 Dec 2005 14:59:56 -0700
To: Ed Gerck <edgerck@xxxxxxxx>
CC: cryptography@xxxxxxxx
Ed Gerck wrote:
PGP is public-key email without PKI. So is IBE. And yet neither of them has all the identical, same basic components that PKI also needs. Now, when you look at the paper on email security at
http://email-security.net/papers/pki-pgp-ibe.htm you see that the issue of what components PKI needs (or not) is not relevant to the analysis.


usually when you are doing a baseline ... you start with the simplest, evaluate that, and then incrementally add complexity. in that sense PGP is much closer to the simplest baseline ... and PKI becomes added complexity ... inverting your classification; email PKI is PGP with digital certificates added.

you then could add various layers of public key operation where the relying parties have direct access to the information in one way or another and therefore don't require stale, static, armored cached copies (digital certificates) of the real information.

then you can go thru numerous layers of PKI ... where the relying parties and the digital certificate creators are part of the same business organization ... and therefore require neither a contractual relationship nor a CPS as a substitute for a contractual relationship.

then add trusted third party certification authority PKI ... where the relying parties and the certification authorities have a direct contractual relationship and therefore don't require a CPS as a substitute for a contractual relationship.

it is when you get to trusted third party certification authority PKI ... where the relying parties and the ttp/ca are part of totally different business operations and have no contractual relationship ... that you then get into the issue of how a relying party actually knows that it should be trusting a ttp/ca.

X.509 / PKI, PGP, and IBE Secure Email Technologies

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: X.509 / PKI, PGP, and IBE Secure Email Technologies
Date: Fri, 09 Dec 2005 15:06:34 -0700
To: James A. Donald <jamesd@xxxxxxxx>
CC: cryptography@xxxxxxxx
James A. Donald wrote:
However, the main point of attack is phishing, when an outsider attempts to interpose himself, the man in the middle, into an existing relationship between two people that know and trust each other.

in the public key model ... whether it involves pgp, pki, digital certificates, whatever; the local user (relying party) has to have a local trusted repository for public keys. in the pki model, this tends to be restricted to public keys of certification authorities ... so that the relying party can verify the digital signature on these message/document constructs called digital certificates.

in the traditional, ongoing relationship scenario, relying parties directly record authentication information of the parties they are dealing with. if a relying party were to directly record the public key of the people they are communicating with ... it is the trusting of that public key and the validating of associated public key operations that provides the countermeasure for man-in-the-middle attacks and phishing attacks.
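
a minimal python sketch of that direct-recording model (purely illustrative ... the "repository" is just a dict, the names are made up, and it uses the pyca/cryptography package for the key operations):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

repository = {}   # party name -> public key recorded by the relying party

def record_first_contact(party, public_key):
    # done once, with whatever out-of-band checking the relying party wants
    repository[party] = public_key

def verify_message(party, message, signature):
    key = repository.get(party)
    if key is None:
        raise KeyError("no recorded key ... treat as first-time stranger")
    try:
        key.verify(signature, message)   # raises if signature doesn't match
        return True
    except InvalidSignature:
        return False                     # possible impersonation/mitm

# usage: trusting the recorded key is what blocks a man-in-the-middle
alice = ed25519.Ed25519PrivateKey.generate()
record_first_contact("alice", alice.public_key())
msg = b"wire $100 to account 42"
assert verify_message("alice", msg, alice.sign(msg))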

the issue that has been repeatedly discussed is that the existing SSL domain name digital certificates were supposedly to prevent impersonation and mitm-attacks. however, because of various infrastructure shortcomings ... an attacker can still operate with perfectly valid SSL domain name digital certificates ... and that doesn't stop the MITM-attack and/or phishing.

misc. collected postings mentioning mitm-attacks
https://www.garlic.com/~lynn/subintegrity.html#mitm

NSA posts notice about faster, lighter crypto

From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: NSA posts notice about faster, lighter crypto
Date: Sat, 10 Dec 2005 14:34:36 -0700
To: cryptography@xxxxxxxx
NSA posts notice about faster, lighter crypto
http://www.fcw.com/article91669-12-09-05-Web

The National Security Agency wants federal agencies to consider using a group of algorithms it refers to as Suite B to satisfy future cryptographic requirements. Suite B contains NSA-approved cryptographic algorithms of various key sizes to protect classified and unclassified but sensitive information. NSA has posted a notice about Suite B on its Web site.

... snip ...

X.509 / PKI, PGP, and IBE Secure Email Technologies

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X.509 / PKI, PGP, and IBE Secure Email Technologies
Date: Sat, 10 Dec 2005 15:04:43 -0700
To: Ed Gerck <edgerck@xxxxxxxx>
CC: cryptography@xxxxxxxx
Ed Gerck wrote:
I think that's where PKI got it wrong in several parts and not just the CPS. It started with the simplest (because it was meant to work for a global RA -- remember X.500?) and then complexity was added. Today, in the most recent PKIX dialogues, even RFC authors often disagree on what is meant in the RFCs. Not to mention the readers.

the baseline analysis, threat/vulnerability models, etc ... start with the simplest and then build the incremental pieces .... frequently looking at justification for the additional complexity.

when doing the original design and architecture you frequently start with the overall objective and do a comprehensive design (to try and avoid having things fall thru the cracks).

i would contend that the issue with PKI isn't so much that they started with simple and then did incremental piece-meal design (rather than complete, comprehensive design) ... but that they actually designed something for a specific purpose ... and it was subsequently, and frequently, force-fit to purposes for which it wasn't originally designed.

for example the traditional business model tends to have relying parties contracting directly with the parties they rely on (and there is contractual obligation between the two parties). the evolution of the trusted third party certification authority model violates most standard business practices that have grown up over hundreds of years.

the trusted third party certification authority is selling digital certificates to key owners for the benefit of relying parties. there is a large disconnect where the parties that are supposedly going to rely on and benefit from the digital certificates aren't the ones contracting for the digital certificates. this disconnect from standard business practices can be considered the motivating factor for the invention of the CPS ... even tho there may not be a direct business and contractual relationship between the relying parties and the certification authorities ... a CPS tries to fabricate a sense of comfort for the relying parties. A contractual relationship would otherwise provide this sense of trust: the relying party pays the certification authority for something, and the certification authority then has some obligation to provide something in return to the relying party. In most trusted third party certification authority operations there is no legal, business or otherwise binding relationship between the relying party and the TTP/CA ... the relationship is between the key owner and the TTP/CA.

This could be further aggravated by RFC authors who possibly have no familiarity with standard business practices and attempt to write something just because they want it to be that way.

Another example is OCSP. Basically digital certificates are stale, static, r/o, armored copies of some information located someplace. The business process model has relying parties relying on stale, static, r/o copies of the information because they have no means of directly accessing the real, original information (basically the letters of credit/introduction from sailing ship days ... aka offline with no local resources). OCSP provides an online transaction which asks whether the stale, static information is still usable, attempting to preserve the facade that digital certificates serve some useful purpose when there is online, direct access capability. The alternative is to eliminate the digital certificates altogether and, rather than doing an OCSP transaction, do a direct, online transaction.
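
a toy python sketch of the comparison (the database, certificate and responder below are all stand-ins, not a real PKI implementation): the OCSP round trip costs an online transaction but returns strictly less information than simply fetching the real record.

authoritative_db = {"acct-123": {"key": "pk-A", "status": "good"}}

stale_certificate = {"subject": "acct-123", "key": "pk-A"}   # r/o armored copy

def ocsp_check(cert):
    # online transaction that only answers "is the stale copy still usable?"
    entry = authoritative_db.get(cert["subject"])
    return entry is not None and entry["status"] == "good"

def direct_query(subject):
    # the alternative: skip the certificate and fetch the current information
    return authoritative_db.get(subject)

assert ocsp_check(stale_certificate) is True        # same round trip ...
assert direct_query("acct-123")["key"] == "pk-A"    # ... more information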

X.509 / PKI, PGP, and IBE Secure Email Technologies

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X.509 / PKI, PGP, and IBE Secure Email Technologies
Date: Sun, 11 Dec 2005 10:28:31 -0700
To: James A. Donald <jamesd@echeque.com>
CC: cryptography@metzdowd.com
James A. Donald wrote:
This was the scenario envisaged when PKI was created, but I don't see it happening, and in fact attempting to do so using existing user interfaces is painful. They don't seem designed to do this.

My product, Crypto Kong,
http://echeque.com/Kong was designed to directly support this scenario in a more convenient fashion - it keeps a database of past communications and their associated keys, but there did not seem to be a lot of interest. I could have made it more useful, given it more capabilities, but I felt I was missing the point


i've seen some discussions that were framed as either/or regarding pki & pgp; aka pki advocates attempting to position pki as an alternative to pgp. the issue is that both pki and pgp require a local trusted public key repository as the basis for establishing trust.

pki then layers on these certification authority business processes, specialized digitally signed messages called digital certificates, etc ... to address first time communication between strangers where the relying parties have no other resources for determining information about the sender in an offline environment. pki advocates then claim that all (personally) digitally signed operations are required to have attached x.509 identity digital certificates that have been digitally signed by a certification authority.

we saw some of that when we did work on the cal. state & fed. electronic signature legislation
https://www.garlic.com/~lynn/subpubkey.html#signature

one possible interpretation might be that it would have increased the revenue stream for the certification authority industry.

drastically improving the usability of the interface to the trusted public key repositories could be viewed as having two downsides: 1) certification authorities that haven't paid to have their public keys preloaded can more easily join the club, 2) the pgp-like scenario becomes much easier, potentially drastically reducing existing reliance on the digital-certificate-only (and certification-authority-only business process) digitally-signed-operation model.

part of the problem with the trusted third party certification authority model is that its primary benefit is in the case of first time communication between two strangers ... where the relying party has no other recourse to information about the other party. this is actually an extremely small percentage of communications. we saw some of this working on the original e-commerce activity
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

where we laid out hypothetical certification issues for merchants ... including things like having FBI background checks for all merchant employees. the problem is that e-commerce transactions have been quite bi-modal, with the largest percentage of transactions occurring as repeat business with well-known merchants. in these cases, consumers have established trust via a large number of other mechanisms ... so there is little added value that a certification authority can provide ... especially if they aren't willing to stand behind and guarantee all merchant transactions (ssl then is primarily a countermeasure to mitm-attacks and eavesdropping on the transaction, as opposed to a certification/trust issue).

the rest of the transactions are spread around a large number of different merchants having a few transactions each. the issue here is that the incremental revenue flow from a few transactions a month couldn't possibly cover the cost of a certification process that involved things like fbi background checks on all merchant employees.

the large majority of transactions are either repeat business and/or with extremely well-known merchants ... this doesn't satisfy the PKI target profile of first time communication between complete strangers. a simple countermeasure to mitm-attacks is achieved by having stored the merchant's public key (from the consumer's standpoint).

from the merchant standpoint, they already have transaction guarantees on credit card processing from their contract with their financial institution. the threat/vulnerability model here is client-originated fraudulent transactions that aren't strongly authenticated. here, the x9.59 standard
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#x959

allows for a digitally signed transaction where the public key is registered with the consumer's financial institution and the digital signature is verified by the consumer's financial institution as part of verifying the transaction.

the other part of the x9.59 standard is that it specifies that account numbers used in x9.59 transactions can't be used in non-authenticated transactions. this eliminates both data breaches and eavesdropping as threats/vulnerabilities for fraudulent financial transactions ... aka the requirement given the x9a10 working group for the x9.59 standard was to preserve the integrity of the financial infrastructure for all retail payments. if data breaches and eavesdropping can no longer result in fraudulent transactions, then there is much less reason for sophisticated countermeasures to those threats/vulnerabilities (ssl is no longer needed to prevent eavesdropping on the account number, since eavesdropping on the account number no longer provides any practical fraudulent benefit).
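
purely as illustration, a toy python sketch of that registration/sign/verify flow (x9.59 itself is a detailed financial industry standard; none of the names or message formats below come from it, and key operations use the pyca/cryptography package):

import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# the consumer's financial institution holds the registered public key
# as part of the normal account record ... no certificate involved
accounts = {}

def register(account_number, public_key):
    accounts[account_number] = {"key": public_key}

def authorize(transaction, signature):
    acct = accounts[transaction["account"]]
    payload = json.dumps(transaction, sort_keys=True).encode()
    try:
        acct["key"].verify(signature, payload)
    except InvalidSignature:
        return "declined: bad/missing signature"
    return "approved"

# a harvested account number alone can never produce an approved
# transaction ... which is what removes the fraud value of
# eavesdropping, data breaches and phishing
consumer = ed25519.Ed25519PrivateKey.generate()
register("4000-0000", consumer.public_key())
txn = {"account": "4000-0000", "amount": "49.95", "merchant": "m-77"}
sig = consumer.sign(json.dumps(txn, sort_keys=True).encode())
print(authorize(txn, sig))   # approved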

simple public key registration as part of financial account operation (in an ongoing relationship that a consumer has with their financial institution) not only is a certificate-less digital signature model
https://www.garlic.com/~lynn/subpubkey.html#certless

but also eliminates much of the requirement for existing major use of digital certificates; that of providing ssl encrypted communication
https://www.garlic.com/~lynn/subpubkey.html#sslcert

as a countermeasure for eavesdropping for the purpose of account number harvesting
https://www.garlic.com/~lynn/subintegrity.html#harvest

furthermore, not only does a simple x9.59 digital-signature-authenticated transaction eliminate the threat/vulnerability of eavesdropping for account number harvesting, it also eliminates the threat/vulnerability of data breaches for account number harvesting ... aka the harvesting could still go on, but the threat/vulnerability of fraudulent transactions as a consequence of harvesting is eliminated ... note that phishing attacks for the purpose of account number harvesting are also eliminated as a threat/vulnerability ... phishing can still go on, account numbers can still be harvested, but the account numbers aren't usable for fraudulent transactions w/o the digital signature.

misc. past posts mentioning bi-modal transaction distribution and/or suggesting employee fbi background checks as part of a merchant certification process.
https://www.garlic.com/~lynn/aadsm6.htm#terror3 [FYI] Did Encryption Empower These Terrorists?
https://www.garlic.com/~lynn/aepay10.htm#83 SSL certs & baby steps
https://www.garlic.com/~lynn/2001j.html#5 e-commerce security????
https://www.garlic.com/~lynn/2001j.html#54 Does "Strong Security" Mean Anything?

[Clips] Banks Seek Better Online-Security Tools

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Clips] Banks Seek Better Online-Security Tools
Date: Tue, 13 Dec 2005 10:54:57 -0700
To: Peter Clay <pete@flatline.org.uk>
CC: Florian Weimer <fw@deneb.enyo.de>, cryptography@metzdowd.com
Peter Clay wrote:
Hmm. What's the evidence that national ID schemes reduce credit fraud (what people normally mean when they say "ID theft")? How does it vary with the different types of scheme?

I've been opposing the UK scheme recently on the grounds of unreliable biometrics and the bad idea of putting everyone's information in a basket from which it can be stolen (in addition to the civil liberties reasons). My solution to the credit fraud problem is simple: raise the burden of proof for negative credit reports and pursuing people for money.


some number of organizations have come up with the term "account fraud" ... where fraudulent transactions are done against existing accounts ... to differentiate from other forms of "identity theft" which involve things like using a stolen identity to establish new accounts.

account fraud just requires strong authentication applied consistently ... it doesn't require identification ... although there are cases where identification is confused with, and used as a substitute for, authentication. part of the issue with confusing identification for authentication ... is that identification is typically quite a bit more privacy invasive than pure authentication.

browser vendors and CAs agreeing on high-assurance certificates

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: browser vendors and CAs agreeing on high-assurance certificates
Date: Sun, 18 Dec 2005 10:27:34 -0700
To: Steven M. Bellovin <smb@xxxxxxxx>
CC: cryptography@xxxxxxxx
Steven M. Bellovin wrote:
http://news.com.com/Browsers+to+get+sturdier+padlocks/2100-1029_3-5989633.html?tag=st.rn

The article is a bit long-winded and short on details, but the basic message is simple: too many CAs have engaged in a price- and cost-driven race to the bottom; there are thus too many certificates being issued that aren't really trustworthy. A group of CAs and browser vendors have been meeting; they've agreed on a set of standards for certificates that represent more checking by the CA. Browsers will be enhanced to display a different sort of notification -- for IE, a green address bar.


but this is consistent with my comments that as the offline market segment ... which was the original design point for certification authorities ... starts to disappear ... as the internet becomes more and more ubiquitous; the certification authorities have moved into the no-value market segment; aka the market segment that still couldn't justify either

1) the cost of having their own local repository of communicating entities (even as the cost of local computing and storage was rapidly dropping)
and/or
2) even the drastically dropping cost of internet online operations couldn't be justified for whatever it was that they were doing.

this gets into a rapidly downward spiral ... since as they moved more and more into the no-value market segment ... what the certification authorities were able to charge customers dropped ... and as they lost price elasticity in what they could charge ... the revenue flow for supporting internal infrastructure and operation would start to dry up.

the other long standing comment with regard to the original ssl domain name certificates was that this supposedly staked out the e-commerce trust model. when we initially tried to set more stringent requirements for what could be checked as the basis for providing e-commerce trust ... we ran into a strongly bi-modal environment

1) the majority of e-commerce transactions were done with a few widely known sites and/or sites that the client had repeatedly transacted with before. as a result, there were a large number of other trust vehicles, and these merchants felt it was not necessary to pay a large amount for a significant certificate-based trust operation ... since their trust was being established by a wide range of other mechanisms.

2) the vast majority of e-commerce sites did very few transactions each. this was the market segment involving e-commerce sites that aren't widely known and/or represent first time business. it is this market segment that is most in need of trust establishment; however, it is also the market segment with the lowest revenue flow to cover the cost of creating trust value.

there is actually a third issue for the vast numbers of low traffic e-commerce merchants ... the lack of trust can be offset by risk mitigation. it turns out that this market segment, where there is possibly little reason for the customer to trust the merchant, has had trust issues predating the internet ... at least going back to the introduction of credit financial transactions. as opposed to trust, risk mitigation was addressed in this period with things like reg-e and the customer having a high level of confidence that disputes tended to heavily favor the customer. this characteristic of risk mitigation, in lieu of trust, then carried over into the internet e-commerce realm.

somewhat as a result, the certification authorities weren't willing to insure and/or provide any guarantees ... and the e-commerce merchants weren't willing to pay certification authorities for such risk mitigation ... since they were already paying the financial institutions for such risk mitigation ... and there was no point in having redundant, superfluous, duplicated and/or replicated overhead costs.

so that effectively left the certification authorities (of the period) providing a sense of confidence and trust ... not in the entity that clients were dealing with ... but purely some incremental sense of confidence that the URL that clients had typed in was really getting them to the website that they thot they were getting to. part of the problem here was that there were extremely few fraud incidents involving people typing in a URL and getting redirected to a site other than the site indicated by the URL (the incremental trust value represented by having certificate-based certified information from a certification authority).

Even this exploit/countermeasure scenario was subverted when merchants decided that SSL was too expensive for the general shopping experience ... and was only needed for checkout/paying. In that emerging model (that is now widely prevalent), the merchant site provided a click-button that automatically generated the URL ... along with a certificate matching the URL. There was no longer checking of the URL provided by the customer ... there was only a certificate provided by the merchant that validated a URL provided by the merchant.

most of the sense of trust ... and/or at least a sense of well-bounded risk in e-commerce, was provided by mechanisms that predated internet e-commerce. the websites that had the lowest amount of trust (not widely known and/or no repeat business; aka unknown, first time business) were the ones that could least afford an expensive certification process.

certification authorities were trying to 1) use a mechanism originally designed to provide trust in an offline environment, which was a rapidly disappearing market segment, and 2) primarily provide incremental trust in a market segment that already had several well-established trust mechanisms ... which left them a very bounded market niche which didn't actually justify large revenue. The possible incremental trust and/or sense of safety provided by certification authorities was pretty well bounded ... and the market segment that had the highest need for incremental trust and sense of safety was the market segment with the lowest revenue flow per website.

a secondary factor was that the certification authority price structure was effectively flat rate for all merchants. the trust and safety model from the financial infrastructure was a much better structured business model. the financial infrastructure effectively provided insurance on every transaction ... the customer had a much, much higher sense of safety ... and the cost to the merchant was strictly proportional to their revenue. with the financial infrastructure already in the sense-of-safety market segment ... with effectively a product that had a significantly better business structure for both customers and merchants ... that significantly narrowed the trust&safety market segment open to certification authorities.

some misc past posts about certification authorities migration into the low/no value market segment
https://www.garlic.com/~lynn/aadsm12.htm#26 I-D ACTION:draft-ietf-pkix-usergroup-01.txt
https://www.garlic.com/~lynn/aadsm12.htm#27 Employee Certificates - Security Issues
https://www.garlic.com/~lynn/aadsm12.htm#52 First Data Unit Says It's Untangling Authentication
https://www.garlic.com/~lynn/aadsm12.htm#55 TTPs & AADS (part II)
https://www.garlic.com/~lynn/aadsm13.htm#14 A challenge (addenda)
https://www.garlic.com/~lynn/aadsm16.htm#22 Ousourced Trust (was Re: Difference between TCPA-Hardware and a smart card and something else before
https://www.garlic.com/~lynn/aadsm17.htm#53 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm21.htm#24 Broken SSL domain name trust model
https://www.garlic.com/~lynn/2002p.html#22 Cirtificate Authorities 'CAs', how curruptable are they to
https://www.garlic.com/~lynn/2004b.html#52 The SOB that helped IT jobs move to India is dead!
https://www.garlic.com/~lynn/2004e.html#20 Soft signatures
https://www.garlic.com/~lynn/2004i.html#2 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2005k.html#29 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005k.html#60 The Worth of Verisign's Brand
https://www.garlic.com/~lynn/2005l.html#1 The Worth of Verisign's Brand
https://www.garlic.com/~lynn/2005l.html#23 The Worth of Verisign's Brand
https://www.garlic.com/~lynn/2005o.html#40 Certificate Authority of a secured P2P network

browser vendors and CAs agreeing on high-assurance certificates

From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: browser vendors and CAs agreeing on high-assurance certificates
Date: Sun, 18 Dec 2005 12:59:34 -0700
To: David Mercer <radix42@xxxxxxxx>
CC: James A. Donald <jamesd@xxxxxxxx>, cryptography@xxxxxxxx,
Steven M. Bellovin <smb@xxxxxxxx>,
David Mercer wrote:
Holy water indeed! As at least someone on this list doesn't seem to see that there is a 'too many true names' problem, here are some examples from the ssl sites I use (almost) daily. Second level domains changed to protect the guilty (and url's chopped for safety):

part of the issue is that the certification authority trust model is attempting to equate internet routing names with business entity names ... something they were never designed to do. it isn't so much that there are too many names ... but that business names and internet routing names were never designed to be used as the same thing (even for business operation names ... in the same jurisdiction, you may have a business organization with three different names ... where what is on the store front ... is different than what is registered at the state business agency).

another part of the issue might be considered that effectively the digital certificate paradigm (designed for offline operation in lieu of the relying party having any other resources) comes down to the individual having to repeat the whole trust sequence on every cycle ... each operation resends the same certificate, requiring that all the operations be repeated. this is in-turn predicated on the assumption that the user has no resources for online, real-time information and no local trusted memory (other than the local trusted public key repository, where there are attempts to reserve it for certification authority use only). the problem here is that it has long been known that you run into trouble if you force the end-user to repeat the same operations over, and over, and over again ... until they become meaningless. in conjunction ... digital certificate operations (at least those exposed to the end-user) have been forced to become more & more hidden and more & more trivial.

more consistent with long recognized human factors is to have the end-user perform some sequence of recognizable trust operations once per site ... and then save the results of those operations for future use (like validating a public key and saving it in their local trusted public key repository) ... rather than forcing ALL the trust operations to be repeated on every interaction (which in-turn forces what trust operations are performed to be more and more trivial as the repetition becomes more & more meaningless to the end-user).
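
a short python sketch of that check-once-then-remember flow ... essentially the ssh known-hosts model ... where the deliberate trust ceremony happens only at real first contact (illustrative only; the names are made up):

import hashlib

known_sites = {}   # site -> key fingerprint recorded at first contact

def fingerprint(public_key_bytes):
    return hashlib.sha256(public_key_bytes).hexdigest()

def connect(site, presented_key_bytes, user_approves_first_contact):
    fp = fingerprint(presented_key_bytes)
    if site not in known_sites:
        # the one-time, recognizable trust operation
        if user_approves_first_contact(site, fp):
            known_sites[site] = fp
            return "trusted (first contact recorded)"
        return "rejected by user"
    if known_sites[site] != fp:
        return "ALERT: key changed ... possible impersonation"
    return "trusted (matches recorded key)"   # no user interaction needed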

browser vendors and CAs agreeing on high-assurance certificates

From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: browser vendors and CAs agreeing on high-assurance certificates
Date: Sat, 24 Dec 2005 09:20:56 -0700
To: Ben Laurie <ben@xxxxxxxx>
CC: Ian G <iang@xxxxxxxx>,  leichter_jerrold@xxxxxxxx,
pgut001@xxxxxxxx,  cryptography@xxxxxxxx,
jamesd@xxxxxxxx,  smb@xxxxxxxx
Ben Laurie wrote:
If they share an IP address (which they must, otherwise there's no problem), then they must share a webserver, which means they can share a cert, surely?

this is a semantic nit ... certs are typically distributed openly and freely ... so potentially everybody in the world has free access to the same cert.

what operations need is the same access to a high-assurance private key. given that there is access to a high-assurance private key ... it is then possible to scaffold various other operations on top of it. some of the issues surrounding private key high-assurance may preclude having replicated private keys, restricting use to a single physical entity.

over ten years ago ... i helped a small isp set up a single server to host a large number of different email domains ... which required several hacks/enhancements to sendmail.

the early onset of some of the leading search engines started out with multiple-A records for load balancing and availability (i.e. dns having single host/domain name with a list of different ip-addresses) ... where they rotated the ip-addresses in the list. as their traffic ramped up, this wasn't sufficient ... in part because the ip-address lists got cached in a large number of different places ... as static lists. this resulted in the evolution of boundary routers that responded to the set of published ip-addresses and internally kept track of activity to backend servers ... and dynamically spread the load across an increasing/growing number of backends.
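
a small python sketch of that naive multiple-A-record rotation (illustrative; socket.gethostbyname_ex does return the full address list). the failure mode described above ... downstream caches holding onto the list as a static copy ... is exactly what the rotation couldn't survive at scale:

import itertools
import socket

def address_rotation(hostname):
    # gethostbyname_ex -> (canonical name, aliases, list of ip addresses)
    _, _, addresses = socket.gethostbyname_ex(hostname)
    return itertools.cycle(addresses)   # round-robin over the A records

# usage (any multi-homed hostname will do):
# rotation = address_rotation("www.example.com")
# connect_to(next(rotation)); connect_to(next(rotation)); ...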

X.509 / PKI, PGP, and IBE Secure Email Technologies

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: X.509 / PKI, PGP, and IBE Secure Email Technologies
Date: Mon, 26 Dec 2005 05:58:12 -0700
To: Ben Laurie <ben@xxxxxxxx>
CC: James A. Donald <jamesd@xxxxxxxx>, cryptography@xxxxxxxx
Ben Laurie wrote:
Eh? It surely does stop MitM attacks - the problem is that there's little value in doing so for various reasons, such as no strong binding between domain name and owner, UI that doesn't make it clear which domain you are going to, or homograph attacks.

ref:
https://www.garlic.com/~lynn/aadsm21.htm#31 X.509 / PKI, PGP, and IBE Secure Email Technologies

it stops the MITM attacks where the client supplies a URL and the server supplies a certificate that corresponds to the URL.

the original issue is that a MITM might have redirected the connection from the client to a bogus site ... or an intermediate site that then impersonated the real site.

the infrastructure issue was that the merchants decided that SSL was too high an overhead and stopped using SSL for the main connection where the client supplied the URL. they allowed the client-supplied URL connection to be done w/o SSL. then later ... the website provided a click button for checkout/pay ... which supplied a URL, and then they also supplied a certificate that matched the URL that they provided.

this situation could either be a completely bogus site ... or even a mitm-attack ... which just does a pure passthru of all traffic going each way ... except for the pay/checkout button. for the pay/checkout button, the mitm substitutes their own URL & certificate. everything else passes thru as usual ... except the mitm is running two ssl sessions ... the mitm-to-"real"-server session and the mitm-to-client session. the mitm-to-"real"-server session uses the "real" server's certificate ... the mitm-to-client session uses the mitm certificate. since the mitm supplied the URL to the client as part of the click operation ... the mitm can ensure that the actual URL invoked by the client matches the certificate used by the mitm. the e-commerce pay/checkout scenario is one of the major uses of SSL on the internet today ... and the way that the infrastructure has come to use SSL no longer prevents a mitm-attack where the attacker can supply both the URL and the certificate.

the issue in preventing mitm-attacks is that you need the client to supply the URL and have the SSL process validate the other end of that connection (with a server-provided ssl domain name certificate ... or at least a trusted, supplied public key associated with the URL). when the attacker provides both the URL and a trusted public key ... what is actually being prevented?
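
the gap can be shown in a couple of lines of python (a deliberately minimal sketch ... real SSL certificate validation checks much more, but the binding it establishes is still only url-to-certificate):

def ssl_accepts(url_hostname, cert_subject):
    # all the check can establish: the certificate matches the URL used
    return url_hostname == cert_subject

# user typed the URL: the check ties the connection to the user's intent
assert ssl_accepts("realbank.example", "realbank.example")

# mitm supplies both the checkout URL and a certificate it legitimately
# owns for that URL: the identical check passes ... nothing is prevented
assert ssl_accepts("mitm-pay.example", "mitm-pay.example")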

there is another problem, somewhat the weak binding between domain name and domain name owner. the issue is that many of the certification authorities aren't the authoritative agency for the information they are certifying. much of the original justification for SSL related to mitm-attacks stemmed from various integrity issues in the domain name infrastructure.

the process tends to be that a domain name owner registers some amount of identification information for their domain name ownership with the domain name infrastructure. the certification authorities then require that SSL domain name certificate applicants also provide some amount of identification information. the certification authorities then attempt the expensive, time-consuming, and error-prone process of matching the identification information supplied with the SSL domain name certificate application against the identification information on file with the domain name infrastructure for that domain name.

as part of addressing various integrity issues related to that process, there has been a proposal, somewhat backed by the ssl domain name certification authority industry, that domain name owners also register a public key with the domain name infrastructure (in addition to identification information). future communication can then be digitally signed and verified with the on-file public key. the ssl domain name certification authority industry can also require that ssl domain name certificate applications be digitally signed. then the certification authority can replace the expensive, time-consuming, and error-prone identification matching process with a much less expensive and more efficient authentication process ... doing a real-time retrieval of the on-file public key from the domain name infrastructure to verify the digital signature (in lieu of doing a real-time retrieval of the on-file identification information for the expensive, time-consuming and error-prone identification matching).
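
a hedged python sketch of that proposed certification flow; dns_fetch_onfile_key() and the onfile_keys table are stand-ins for whatever real-time retrieval mechanism the domain name infrastructure would actually provide (not an existing api), and the key operations use the pyca/cryptography package:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

onfile_keys = {}   # stand-in: keys registered with the domain name infrastructure

def dns_fetch_onfile_key(domain):
    return onfile_keys[domain]   # real-time retrieval ... the CA's trust root

def process_application(domain, application_bytes, signature):
    key = dns_fetch_onfile_key(domain)
    try:
        key.verify(signature, application_bytes)   # cheap authentication
    except InvalidSignature:
        return "rejected"
    return "certified"   # no identification matching performed

# demo: registration plus a digitally signed certificate application
owner = ed25519.Ed25519PrivateKey.generate()
onfile_keys["example.com"] = owner.public_key()
app = b"ssl domain name certificate application for example.com"
print(process_application("example.com", app, owner.sign(app)))

# note: any relying party able to call dns_fetch_onfile_key() directly
# has little remaining need for the issued certificate ... the catch-22
# taken up below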

the two catch-22 issues here are

1) improving the overall integrity of the domain name infrastructure lessens the original justification for ssl domain name certificates

2) if the certification authority industry can rely on real-time retrieval of public keys from the domain name infrastructure as the base, TRUST ROOT for all of their operations ... it is possible that other people in the world might also be able to do real-time retrieval of public keys as a substitute for relying on SSL domain name certificates

misc, numerous past postings mentioning SSL and ssl domain name certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcert

X.509 / PKI, PGP, and IBE Secure Email Technologies

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: X.509 / PKI, PGP, and IBE Secure Email Technologies
Date: Mon, 26 Dec 2005 12:27:40 -0700
To: Ben Laurie <ben@xxxxxxxx>
CC: James A. Donald <jamesd@xxxxxxxx>,  cryptography@xxxxxxxx
Ben Laurie wrote:
Eh? It surely does stop MitM attacks - the problem is that there's little value in doing so for various reasons, such as no strong binding between domain name and owner, UI that doesn't make it clear which domain you are going to, or homograph attacks.

part II;

re:
https://www.garlic.com/~lynn/aadsm21.htm#39 X.509 / PKI, PGP, and IBE Secure Email Technologies

i've repeatedly asserted that the fundamental, underlying certificate business practice is to address first time communication between complete strangers ... analogous to the letters of credit/introduction from the sailing ship days.

so the original SSL design point was to cross-check the domain name from the URL typed in by the client against the certificate supplied by the server. that basic premise is undermined when the server supplies both the URL and the certificate.

so you are left with placing the burden on the user to cross-check the URL displayed with the URL they think they are going to. it is simple human dynamics ... after the first several thousand displayed URLs ... they are going to ignore the process.

this is somewhat akin to shared-secret passwords ... where the security experts dictate that the user has to have hard-to-guess, impossible-to-remember passwords that change every 30 days, can never be written down, and where every new password has to be unique ... as well as unique across all security domains. the problem is that the number of unique security domains that a person deals with has grown from 1-2 (I had my first logon password in the 60s, followed by the addition of an ATM pin in the late 70s) to scores ... there is no practical possibility that all such requirements can be satisfied. misc. past collected posts on shared-secrets
https://www.garlic.com/~lynn/subintegrity.html#secrets

the underlying infrastructure further complicated the whole process when a large percentage of the merchants outsourced the payment process to a 3rd party ... where the click button supplied a URL of the 3rd party payment processor that had absolutely no relationship to the merchant site the client had been shopping at. this not only creates the situation where

1) any initial connection to a merchant site where the user might possibly have typed in the URL (or controlled the URL generation via other mechanisms) is not checked ... and any actual checking for things like MITM-attacks doesn't occur until there is a URL provided by a potentially suspect site.

but also

2) conditions the user to accept as a normal process that the pay/checkout button may have a completely different domain name URL than the domain name of the shopping site.

so, pretty well documented human factors ... especially related to the design of security systems ... is that you don't tie humans making determinations about some security issue to something that repeatedly happens thousands and thousands of times. there are some guards that have to check badges against faces ... but they tend to have intensive training AND organizations that have high volume have gone to guards doing it only for short periods and rotating ... and/or the guards are looking for a very simple repeating pattern and are trained to look for a missing pattern. having the human repeatedly check a (URL) field that changes several thousand times a day against something they are supposed to expect ... is pretty quickly a useless security design.

a more sensible human factors design ... is to remember whether a person has checked out first time communication with a stranger ... on the real first time, have the person do something additional ... and from then on remember that checking. in that respect ... creating a dependency on the user to repeatedly check a field that changes possibly thousands of times per day is extremely poor human factors security design.

now, the other part of my constant theme about certificates having a design point of first time communication between complete strangers ... involves the additional constraint that the relying party has no other recourse to obtain information about the other party. if you go to a paradigm where the relying party has facilities to remember first time checking ... then the appended certificate on the communication is actually only useful for the real first-time communication (since by definition the relying party has facilities to remember previous checking ... like saving away the other party's public key in a trusted public key repository).

another part is that if you have the relying party do some additional checking on the real first time interaction (rather than expecting the user to do increasingly trivial checking on each new URL) ... and the user is really online ... then that first time checking can involve real-time checks of online resources ... again invalidating more of the underlying design point of appending a certificate on every communication for the benefit of relying parties who have no other recourse for determining information about a complete stranger in first time communication.

there is something of a dichotomy here ... there is a somewhat justification for certificates based on the explanation that it could be too onerous for end-users to do anything unusual for first-time communication with a complete stranger (and therefore there is no dependency on a local repository for remembering such additional checking, and/or on infrastructure that might allow the user to do real-time, online checking) ... but at the same time there is sometimes a stated dependency on the user checking thousands and thousands of changing URLs every day to make sure they are what the user expected them to be (which has pretty much been a long recognized poor security design point).

again, collected past posts on ssl certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcert

as to the other point about there being a weak binding between URL domain name and owner; there are recognized integrity weaknesses in the domain name infrastructure (including the binding between the domain name and the domain name owner), however as previously stated, the SSL domain name certification authority certification process is dependent on the integrity of that information (as the TRUST ROOT basis for performing the certification, which in turn is represented by a stale, static certificate). i would claim that, in the minds of end-users, there is an increasingly weak binding between the parties that consumers deal with on the internet and any domain name. furthermore, creating a security foundation based on end-users having to mentally correlate URL domain names in a field (something that is constantly changing thousands of times per day) with external entities is a long recognized poor security design point.

X.509 / PKI, PGP, and IBE Secure Email Technologies

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Re: X.509 / PKI, PGP, and IBE Secure Email Technologies
Date: Tue, 27 Dec 2005 10:06:36 -0700
To: Ben Laurie <ben@xxxxxxxx>
CC: James A. Donald <jamesd@xxxxxxxx>,  cryptography@xxxxxxxx
Ben Laurie wrote:
This is the SSH design for host keys, of course, and also the petnames design for URLs. Unfortunately petnames don't solve the problem that it is hard to check the URL even the first time.

the original SSL paradigm was predicated on end-to-end security ... that "the server the user thot they were talking to" was "the server that they were actually talking to". certificates addressed the part from "the URL inside the browser" to "the server".

the paradigm was dependent on the user having a tight binding between "the server the user thot they were talking to" and "the URL inside the browser" ... which in turn was dependent on the user actually inputting the URL (as a demonstration of the binding between the server the user thot they were talking to and the inputted URL).

the problem was that as the infrastructure matured ... the actual URL came to have less & less meaning to the user. so the MITM-attacks moved to the weak points in the chain ... rather than attacking a valid certificate and/or the process after the URL was inside the browser, attack the process before the URL got inside the browser.

petnames would seem to suffer somewhat the same problem as shared-secrets and passwords ... requiring a unique petname for every URL. it works as long as there are a few ... when they reach scores ... the user can no longer manage.

so part of the problem is that the URL has become almost some internal infrastructure representation... almost on par with the ip-address ... the user pays nearly as much attention to the URL for a website as they pay to the lower-level ip-address for the site (legacy requirements still have ways for people to deal with both the URL and the ip-address ... but they don't have a lot of meaning for a lot of people).

however the URL is one way of internally doing bookkeeping about a site.

so security issues might be
  1. is the user talking to the server they think they are talking
  2. does the user believe that the site is safe
  3. is the site safe for providing certain kinds of sensitive information
  4. is the site safe for providing specific sensitive information
#1 is the original SSL design point ... but the infrastructure has resulted in creating a disconnect for establishing this information.

possibly another approach is that the local environment remembers things ... akin to a PGP key repository. rather than the SSL lock ... have a large green/yellow/red indicator. red is neither SSL locked nor checked. yellow is both SSL locked and checked. green is SSL locked, initially checked, and further checked for entry of sensitive information.

a human factors issue is how easy you can make the preliminary checking ... and then not have to do it again ... where the current infrastructure requires users to match something meaningful to the URL and then to the SSL certificate on every interaction. preliminary checking is more effort than the current stuff done on every SSL URL ... but could be made to be done relatively rarely and as part of an overall infrastructure that directly relates to something the end-user might find meaningful.

bits and pieces of the infrastructure are already there. for instance there is already support for automatically entering a userid/password on specific web forms. using bits and pieces of that repository could provide the ability to flag a specific web form as approved/not-approved for specific sensitive information (like a specific userid/password).
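
a tiny python sketch of deriving the indicator from what the local environment remembers (illustrative only ... these structures aren't an existing browser api):

approved_forms = set()   # (site_fingerprint, form_id) approved for sensitive entry
checked_sites = set()    # sites that passed the one-time initial checking

def indicator(site_fp, form_id, ssl_locked):
    if not ssl_locked or site_fp not in checked_sites:
        return "red"      # not ssl locked and/or never checked
    if (site_fp, form_id) in approved_forms:
        return "green"    # locked, checked, and approved for sensitive info
    return "yellow"       # locked and checked, form not yet approved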

the issue isn't that a simple indicator with 2-4 states isn't useful ... but the states presented need to be realistically meaningful to the user. locked/unlocked just says that the link is encrypted. it doesn't indicate that the remote site is the server that the user thinks it is ... in part because of the way that the infrastructure has created a disconnect between the URL and what users actually deal in.

if the browser kept track of whether the user actually hit the keys for entering the URL ... then it might be useful for the browser to provide a higher level of confidence in the SSL certificate checking (aka only if the user actually typed in the URL can there be a high level of confidence related to the SSL certificate checking).

one might be tempted to make some grandiose philosophical security statement ... that there can be little confidence unless the user is involved in actually doing some physical operation (at least at some point in time) to correlate what is meaningful to the user with the internal infrastructure. the original SSL scheme was dependent on the user actually typing in the URL.

this is somewhat analogous to the confusion that seems to have cropped up in the past with respect to the difference between digital signature and human signature.
https://www.garlic.com/~lynn/subpubkey.html#signature

x9.59
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#x959

could actually have a digital signature applied to a retail transaction at point-of-sale as a means of authentication. however, that digital signature wouldn't be the representation of human intent, aka read, understood, agrees, approves, and/or authorizes. pin-debit POS currently has two-factor authentication: you swipe the magnetic card and you enter a PIN. however, both are purely authentication. to get human intent, the (certified) POS terminal asks the person to push the yes button if they agree with the transaction. in the case of an x9.59 transaction at a point-of-sale, the digital signature is authentication, but NOT human intent. pushing the green/yes button on the POS terminal is what indicates human intent (and is therefore the equivalent of a human signature indicating read, understood, approves, agrees, and/or authorizes).
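
a toy python sketch of keeping the two concepts separate (the names are made up, and all x9.59/POS details are abstracted away):

def pos_authorize(signature_valid, yes_button_pressed):
    # digital signature: authentication ... who originated the transaction
    # green/yes button: human intent ... read, understood, agrees, approves
    if signature_valid and yes_button_pressed:
        return "authorized"
    if signature_valid:
        return "authenticated, but no expression of human intent"
    return "declined: not authenticated"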

Phishers now targetting SSL

From: Anne & Lynn Wheeler <lynn@xxxxxxxx>
Subject: Phishers now targetting SSL
Date: Fri, 30 Dec 2005 10:12:18 -0700
To: cryptography@xxxxxxxx

http://www.techworld.com/news/index.cfm?RSS&NewsID=5069
The spoofing has taken a number of forms, which appear to be becoming highly sophisticated. They vary from exploiting browser flaws, to hacking legitimate sites or even just frames on these sites, as a way of presenting what appears to be a legitimate banking site to visitors.

... snip ...

part of recent thread discussion some of the issues
https://www.garlic.com/~lynn/aadsm21.htm#39 X.509 / PKI, PGP, and IBE Secure Email Technologies
https://www.garlic.com/~lynn/aadsm21.htm#40 X.509 / PKI, PGP, and IBE Secure Email Technologies
https://www.garlic.com/~lynn/aadsm21.htm#41 X.509 / PKI, PGP, and IBE Secure Email Technologies

misc. other phishing references in the past week
http://www.arabnews.com/?page=11&section=0&article=75335&d=27&m=12&y=2005 Phishers Finally Cast a Long Line to Banking Customers
http://www.newsfactor.com/story.xhtml?story_id=13000027JSBI Security Flaw in Browsers Poses Phishing Threat
http://www.physorg.com/news9424.html Cloudmark warns of increased 'phishing'
http://www.centralvalleybusinesstimes.com/stories/001/?ID=1046 California cracks down on phishing
http://www.techweb.com/wire/security/181502468


AADS Postings and Posting Index,
next, previous - home