List of Archived Posts

2004 Newsgroup Postings (06/27 - 08/06)

Adventure game (was:PL/? History (was Hercules))
Adventure game (was:PL/? History (was Hercules))
Adventure game (was:PL/? History (was Hercules))
Bob Bemer, Computer Pioneer,Father of ASCII, Inventor of the Esc worker at IBM, Univac and Honeywell dies
Adventure game (was:PL/? History (was Hercules))
I am an ageing techy, expert on everything. Let me explain the Middle East to you
The One True Language
CCD technology
CCD technology
CCD technology
Possibly stupid question for you IBM mainframers... :-)
Mainframes (etc.)
ECC book reference, please
Two-factor Authentication Options?
Two-factor Authentication Options?
Possibly stupid question for you IBM mainframers... :-)
Page coloring required?
Google loves "e"
Low Bar for High School Students Threatens Tech Sector
fast check for binary zeroes in memory
Vintage computers are better than modern crap !
Basics of key authentication
Vintage computers are better than modern crap !
Basics of key authentication
Low Bar for High School Students Threatens Tech Sector
Why are programs so large?
Vintage computers are better than modern crap !
Vintage computers are better than modern crap !
Convince me that SSL certificates are not a big scam
BLKSIZE question
ECC Encryption
Usage of Hex Dump
Basics of key authentication
Vintage computers are better than modern crap !
Which Monitor Would You Pick??????
fc2, evolution, yum, libbonobo-2.6.2-1.i386.rpm
Vintage computers are better than modern crap !
Basics of key authentication
build-robots-which-can-automate-testing dept
SEC Tests Technology to Speed Accounting Analysis
Which Monitor Would You Pick??????
Interesting read about upcoming K9 processors
Interesting read about upcoming K9 processors
Hard disk architecture: are outer cylinders still faster than inner cylinders?
fc2, ssh client/server, kernel 494
what vector systems are really faster at
self correcting systems
very basic quextions: public key encryption
New Method for Authenticated Public Key Exchange without Digital Certificates
Univac 9200, 9300: the 360 clone I never heard of!
New Method for Authenticated Public Key Exchange without Digital Certificates
New Method for Authenticated Public Key Exchange without Digital Certificates
New Method for Authenticated Public Key Exchange without Digital Certificates
New Method for Authenticated Public Key Exchange without Digital Certificates
New Method for Authenticated Public Key Exchange without Digital Certificates
New Method for Authenticated Public Key Exchange without Digital Certificates
New Method for Authenticated Public Key Exchange without Digital Certificates
New Method for Authenticated Public Key Exchange without Digital Certificates
New Method for Authenticated Public Key Exchange without Digital Certificates
New Method for Authenticated Public Key Exchange without Digital Certificates

Adventure game (was:PL/? History (was Hercules))

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Date: Sun, 27 Jun 2004 12:00:56 -0700
Newsgroups: alt.folklore.computers
Subject: Re: Adventure game (was:PL/? History (was Hercules))
jmfbahciv@aol.com wrote in message news:<40debed4$0$3077$61fed72c@news.rcn.com>...
IIRC, a lot of ours was the memory getting released was calculated incorrectly, usually off by one. Another common bug was blowing the linked address chain fixup. JRSTing using a indirect pointer. Having a hardware PDL with all the appropriate instructions must have helped a lot.

for the non-subpool allocation/deallocation ... dangling processes and pointers ... after storage had been deallocated and put back on the available chain ... resulted in

1) dangling process using a dangling pointer to update a field that is now being used as the next available storage linkage field

2) dangling process using a dangling pointer to pick up a field as an address ... but that field has been cleared to zeros ... so it is an attempt to do something at or around address zero ... somewhere in the 3090 time-frame, psa storage protection was introduced .... with it turned on, the first couple hundred bytes of the address space could not be modified ... low address protection:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/3.4.4?SHELF=EZ2HW125&DT=19970613131822

3) dangling process using a dangling pointer to address a field where the storage had been deallocated and then re-allocated as part of a smaller storage area. this tended to be much more of a problem before subpool storage allocation logic was introduced. in the subpool storage allocation, storage areas tended to be re-used for the same sized storage allocation ... so dangling pointers addressing past the end of an allocated storage area were much less common. There were some cases where the logic was just plain wrong ... re-using a pointer variable w/o reloading its value .... it was much more common to have a dangling pointer (as part of some dangling process) accessing storage after it had nominally been de-allocated (and possibly even re-allocated to some other process).
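
to make failure mode (1) concrete, here is a minimal C sketch (a hypothetical toy allocator, not the actual MVS storage manager): a freed block's storage is reused as the next-available link field, so a store through a stale pointer wrecks the available chain.

/* Toy free-list allocator illustrating failure mode (1) above: a write
 * through a dangling pointer lands on the word the allocator now uses
 * as the next-available-storage link.  Hypothetical illustration only. */
#include <stdio.h>
#include <stddef.h>

#define NBLOCKS 4
#define BLKSIZE 64

union block {                    /* freed blocks reuse their storage ... */
    union block *next;           /* ... as the free-chain link field     */
    unsigned char data[BLKSIZE];
};

static union block pool[NBLOCKS];
static union block *avail;       /* head of the available chain */

static void init(void)
{
    for (int i = 0; i < NBLOCKS - 1; i++)
        pool[i].next = &pool[i + 1];
    pool[NBLOCKS - 1].next = NULL;
    avail = &pool[0];
}

static void *alloc(void)
{
    union block *b = avail;
    if (b) avail = b->next;
    return b;
}

static void release(void *p)     /* push block back on the available chain */
{
    union block *b = p;
    b->next = avail;
    avail = b;
}

int main(void)
{
    init();
    unsigned char *dangling = alloc();   /* process keeps this pointer ... */
    release(dangling);                   /* ... after the storage is freed  */

    /* Failure mode (1): the stale pointer updates a "field" that is now
     * the allocator's next-available link, corrupting the chain. */
    dangling[0] = 0xFF;

    printf("avail chain head now points at %p (garbage)\n",
           (void *)(((union block *)dangling)->next));
    return 0;
}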

Adventure game (was:PL/? History (was Hercules))

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Date: Sun, 27 Jun 2004 20:12:08 -0700
Newsgroups: alt.folklore.computers
Subject: Re: Adventure game (was:PL/? History (was Hercules))
Larry__Weiss wrote in message news:<40DF1945.90B35C1F@airmail.net>...
Reminds me of "Do The Right Thing"

http://c2.com/cgi/wiki?DoTheRightThing
http://c2.com/cgi/wiki?DoesWorseIsBetterRequireOpenSource


or simply do right

Adventure game (was:PL/? History (was Hercules))

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Date: Sun, 27 Jun 2004 20:20:32 -0700
Newsgroups: alt.folklore.computers
Subject: Re: Adventure game (was:PL/? History (was Hercules))
jmfbahciv@aol.com wrote in message news:<40debed4$0$3077$61fed72c@news.rcn.com>...
I don't believe that...but then that's my paranoia talking. With computers, all obfuscation implies very large worm-filled cans.

one might claim that is part of the issue of implicit lengths in C language string-handling libraries. when we were starting ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp
we did some detailed vulnerability analysis ... one finding was that the implicit lengths in conventional C would contribute to a 10-fold to 100-fold increase in buffer-length related problems (compared to experience we had in other environments). lots of vulnerability/exploit/fraud posts:
https://www.garlic.com/~lynn/subintegrity.html#fraud
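
as a minimal illustration of the implicit-length issue (a generic C sketch, not taken from the ha/cmp analysis): the language-level string carries no length, so a copy either trusts the NUL terminator or the programmer has to pass the destination size explicitly.

/* Minimal illustration of the implicit-length issue: the C string
 * carries no length, so an unbounded copy runs until it finds a NUL,
 * regardless of how big the destination is. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char dst[8];
    const char *input = "0123456789ABCDEF";   /* longer than dst */

    /* implicit length: strcpy would write past the end of dst
     * (undefined behaviour), so it is shown here only as a comment:
     *     strcpy(dst, input);
     */

    /* explicit length: the destination size is part of the call, and
     * the result is truncated rather than overrun. */
    snprintf(dst, sizeof dst, "%s", input);
    printf("copied: \"%s\" (%zu of %zu input bytes)\n",
           dst, strlen(dst), strlen(input));
    return 0;
}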

specific reference over ten years later (of course this was before a lot of the scripting and phishing stuff that has since started to raise its ugly head)
https://www.garlic.com/~lynn/99.html#219 Study says buffer overflow is most common security bug

and whole list of specific posts that touch on buffer overflow
https://www.garlic.com/~lynn/aadsm10.htm#cfppki13 CFP: PKI research workshop
https://www.garlic.com/~lynn/aadsm10.htm#hackhome Hackers Targeting Home Computers
https://www.garlic.com/~lynn/aadsm10.htm#risks credit card & gift card fraud (from today's comp.risks)
https://www.garlic.com/~lynn/aadsm10.htm#bio3 biometrics (addenda)
https://www.garlic.com/~lynn/aadsm10.htm#bio7 biometrics
https://www.garlic.com/~lynn/aadsm13.htm#37 How effective is open source crypto?
https://www.garlic.com/~lynn/aadsm14.htm#32 An attack on paypal
https://www.garlic.com/~lynn/aadsm14.htm#34 virus attack on banks (was attack on paypal)
https://www.garlic.com/~lynn/aadsm14.htm#38 An attack on paypal (trivia addenda)
https://www.garlic.com/~lynn/aadsm16.htm#1 FAQ: e-Signatures and Payments
https://www.garlic.com/~lynn/aadsm16.htm#8 example: secure computing kernel needed
https://www.garlic.com/~lynn/aepay10.htm#6 credit card & gift card fraud (from today's comp.risks)
https://www.garlic.com/~lynn/aepay11.htm#65 E-merchants Turn Fraud-busters (somewhat related)
https://www.garlic.com/~lynn/aepay11.htm#66 Confusing Authentication and Identiification?
https://www.garlic.com/~lynn/2000.html#25 Computer of the century
https://www.garlic.com/~lynn/2000.html#30 Computer of the century
https://www.garlic.com/~lynn/2000b.html#17 ooh, a real flamewar :)
https://www.garlic.com/~lynn/2000b.html#22 ooh, a real flamewar :)
https://www.garlic.com/~lynn/2000c.html#40 Domainatrix - the final word
https://www.garlic.com/~lynn/2000g.html#50 Egghead cracked, MS IIS again
https://www.garlic.com/~lynn/2001b.html#47 what is interrupt mask register?
https://www.garlic.com/~lynn/2001b.html#58 Checkpoint better than PIX or vice versa???
https://www.garlic.com/~lynn/2001c.html#66 KI-10 vs. IBM at Rutgers
https://www.garlic.com/~lynn/2001d.html#58 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001i.html#52 misc loosely-coupled, sysplex, cluster, supercomputer, & electronic commerce
https://www.garlic.com/~lynn/2001i.html#54 Computer security: The Future
https://www.garlic.com/~lynn/2001k.html#43 Why is UNIX semi-immune to viral infection?
https://www.garlic.com/~lynn/2001l.html#49 Virus propagation risks
https://www.garlic.com/~lynn/2001m.html#27 Internet like city w/o traffic rules, traffic signs, traffic lights and traffic enforcement
https://www.garlic.com/~lynn/2001n.html#30 FreeBSD more secure than Linux
https://www.garlic.com/~lynn/2001n.html#71 Q: Buffer overflow
https://www.garlic.com/~lynn/2001n.html#72 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#76 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#84 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#90 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#91 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#93 Buffer overflow
https://www.garlic.com/~lynn/2002.html#4 Buffer overflow
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002.html#11 The demise of compaq
https://www.garlic.com/~lynn/2002.html#19 Buffer overflow
https://www.garlic.com/~lynn/2002.html#20 Younger recruits versus experienced veterans ( was Re: The demise of compa
https://www.garlic.com/~lynn/2002.html#23 Buffer overflow
https://www.garlic.com/~lynn/2002.html#24 Buffer overflow
https://www.garlic.com/~lynn/2002.html#25 ICMP Time Exceeded
https://www.garlic.com/~lynn/2002.html#26 Buffer overflow
https://www.garlic.com/~lynn/2002.html#27 Buffer overflow
https://www.garlic.com/~lynn/2002.html#28 Buffer overflow
https://www.garlic.com/~lynn/2002.html#29 Buffer overflow
https://www.garlic.com/~lynn/2002.html#32 Buffer overflow
https://www.garlic.com/~lynn/2002.html#33 Buffer overflow
https://www.garlic.com/~lynn/2002.html#34 Buffer overflow
https://www.garlic.com/~lynn/2002.html#35 Buffer overflow
https://www.garlic.com/~lynn/2002.html#37 Buffer overflow
https://www.garlic.com/~lynn/2002.html#38 Buffer overflow
https://www.garlic.com/~lynn/2002.html#39 Buffer overflow
https://www.garlic.com/~lynn/2002.html#44 Calculating a Gigalapse
https://www.garlic.com/~lynn/2002b.html#34 Does it support "Journaling"?
https://www.garlic.com/~lynn/2002b.html#37 Poor Man's clustering idea
https://www.garlic.com/~lynn/2002b.html#43 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002b.html#56 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#62 TOPS-10 logins (Was Re: HP-2000F - want to know more about it)
https://www.garlic.com/~lynn/2002c.html#15 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002c.html#42 Beginning of the end for SNA?
https://www.garlic.com/~lynn/2002d.html#4 IBM Mainframe at home
https://www.garlic.com/~lynn/2002d.html#9 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2002d.html#14 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002d.html#16 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002e.html#18 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002e.html#32 What goes into a 3090?
https://www.garlic.com/~lynn/2002e.html#36 Crypting with Fingerprints ?
https://www.garlic.com/~lynn/2002e.html#47 Multics_Security
https://www.garlic.com/~lynn/2002e.html#58 O'Reilly C Book
https://www.garlic.com/~lynn/2002e.html#67 Blade architectures
https://www.garlic.com/~lynn/2002e.html#68 Blade architectures
https://www.garlic.com/~lynn/2002e.html#73 Blade architectures
https://www.garlic.com/~lynn/2002f.html#23 Computers in Science Fiction
https://www.garlic.com/~lynn/2002f.html#44 Blade architectures
https://www.garlic.com/~lynn/2002g.html#2 Computers in Science Fiction
https://www.garlic.com/~lynn/2002g.html#35 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#40 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#41 Biometric authentication for intranet websites?
https://www.garlic.com/~lynn/2002h.html#68 Are you really who you say you are?
https://www.garlic.com/~lynn/2002h.html#73 Where did text file line ending characters begin?
https://www.garlic.com/~lynn/2002h.html#74 Where did text file line ending characters begin?
https://www.garlic.com/~lynn/2002h.html#79 Al Gore and the Internet
https://www.garlic.com/~lynn/2002h.html#82 Al Gore and the Internet
https://www.garlic.com/~lynn/2002i.html#57 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#62 subjective Q. - what's the most secure OS?
https://www.garlic.com/~lynn/2002k.html#42 MVS 3.8J and NJE via CTC
https://www.garlic.com/~lynn/2002l.html#13 notwork
https://www.garlic.com/~lynn/2002l.html#42 Thirty Years Later: Lessons from the Multics Security Evaluation
https://www.garlic.com/~lynn/2002l.html#44 Thirty Years Later: Lessons from the Multics Security Evaluation
https://www.garlic.com/~lynn/2002l.html#45 Thirty Years Later: Lessons from the Multics Security Evaluation
https://www.garlic.com/~lynn/2002l.html#48 10 choices that were critical to the Net's success
https://www.garlic.com/~lynn/2002m.html#8 Backdoor in AES ?
https://www.garlic.com/~lynn/2002m.html#10 Backdoor in AES ?
https://www.garlic.com/~lynn/2002m.html#20 A new e-commerce security proposal
https://www.garlic.com/~lynn/2002m.html#58 The next big things that weren't
https://www.garlic.com/~lynn/2002n.html#11 Wanted: the SOUNDS of classic computing
https://www.garlic.com/~lynn/2002n.html#25 Help! Good protocol for national ID card?
https://www.garlic.com/~lynn/2002o.html#14 Home mainframes
https://www.garlic.com/~lynn/2002o.html#41 META: Newsgroup cliques?
https://www.garlic.com/~lynn/2002p.html#6 unix permissions
https://www.garlic.com/~lynn/2002p.html#54 Newbie: Two quesions about mainframes
https://www.garlic.com/~lynn/2003.html#37 Calculating expected reliability for designed system
https://www.garlic.com/~lynn/2003b.html#54 Microsoft worm affecting Automatic Teller Machines
https://www.garlic.com/~lynn/2003c.html#47 difference between itanium and alpha
https://www.garlic.com/~lynn/2003c.html#52 difference between itanium and alpha
https://www.garlic.com/~lynn/2003d.html#45 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003e.html#17 unix
https://www.garlic.com/~lynn/2003f.html#0 early vnet & exploit
https://www.garlic.com/~lynn/2003f.html#8 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#3 Disk capacity and backup solutions
https://www.garlic.com/~lynn/2003g.html#15 Disk capacity and backup solutions
https://www.garlic.com/~lynn/2003g.html#54 Rewrite TCP/IP
https://www.garlic.com/~lynn/2003g.html#62 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003h.html#9 Why did TCP become popular ?
https://www.garlic.com/~lynn/2003h.html#40 IBM system 370
https://www.garlic.com/~lynn/2003h.html#41 Segments, capabilities, buffer overrun attacks
https://www.garlic.com/~lynn/2003h.html#47 Segments, capabilities, buffer overrun attacks
https://www.garlic.com/~lynn/2003h.html#56 The figures of merit that make mainframes worth the price
https://www.garlic.com/~lynn/2003i.html#15 two pi, four phase, 370 clone
https://www.garlic.com/~lynn/2003i.html#59 grey-haired assembler programmers (Ritchie's C)
https://www.garlic.com/~lynn/2003j.html#2 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2003j.html#4 A Dark Day
https://www.garlic.com/~lynn/2003j.html#8 A Dark Day
https://www.garlic.com/~lynn/2003j.html#15 A Dark Day
https://www.garlic.com/~lynn/2003j.html#20 A Dark Day
https://www.garlic.com/~lynn/2003k.html#10 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003k.html#48 Who said DAT?
https://www.garlic.com/~lynn/2003k.html#64 C & reliability: Was "The Incredible Shrinking Legacy"
https://www.garlic.com/~lynn/2003l.html#2 S/360 Engineering Changes
https://www.garlic.com/~lynn/2003l.html#9 how long does (or did) it take to boot a timesharing system?
https://www.garlic.com/~lynn/2003l.html#19 Secure OS Thoughts
https://www.garlic.com/~lynn/2003l.html#36 Proposal for a new PKI model (At least I hope it's new)
https://www.garlic.com/~lynn/2003m.html#24 Intel iAPX 432
https://www.garlic.com/~lynn/2003m.html#25 Microsoft Internet Patch
https://www.garlic.com/~lynn/2003m.html#54 Thoughts on Utility Computing?
https://www.garlic.com/~lynn/2003n.html#14 Poor people's OS?
https://www.garlic.com/~lynn/2003n.html#44 IEN 45 and TCP checksum offload
https://www.garlic.com/~lynn/2003o.html#5 perfomance vs. key size
https://www.garlic.com/~lynn/2003o.html#6 perfomance vs. key size
https://www.garlic.com/~lynn/2003o.html#20 IS CP/M an OS?
https://www.garlic.com/~lynn/2003o.html#25 Any experience with "The Last One"?
https://www.garlic.com/~lynn/2003o.html#50 Pub/priv key security
https://www.garlic.com/~lynn/2003o.html#55 History of Computer Network Industry
https://www.garlic.com/~lynn/2003o.html#68 History of Computer Network Industry
https://www.garlic.com/~lynn/2003p.html#13 packetloss bad for sliding window protocol ?
https://www.garlic.com/~lynn/2003p.html#15 packetloss bad for sliding window protocol ?
https://www.garlic.com/~lynn/2003p.html#37 The BASIC Variations
https://www.garlic.com/~lynn/2003p.html#39 Mainframe Emulation Solutions
https://www.garlic.com/~lynn/2004.html#21 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004.html#30 Threat of running a web server?
https://www.garlic.com/~lynn/2004b.html#8 Mars Rover Not Responding
https://www.garlic.com/~lynn/2004b.html#10 Mars Rover Not Responding
https://www.garlic.com/~lynn/2004b.html#48 Automating secure transactions
https://www.garlic.com/~lynn/2004c.html#4 OS Partitioning and security
https://www.garlic.com/~lynn/2004c.html#47 IBM 360 memory
https://www.garlic.com/~lynn/2004e.html#43 security taxonomy and CVE
https://www.garlic.com/~lynn/2004f.html#20 Why does Windows allow Worms?
https://www.garlic.com/~lynn/2004g.html#8 network history

Bob Bemer, Computer Pioneer,Father of ASCII, Inventor of the Esc worker at IBM, Univac and Honeywell dies

From: lynn@garlic.com
Date: Mon, 28 Jun 2004 12:45:18 -0700
Newsgroups: alt.folklore.computers
Subject: Re: Bob Bemer, Computer Pioneer,Father of ASCII, Inventor of the Esc worker at IBM, Univac and Honeywell dies...
"ed sharpe" wrote in message news:<n5tDc.796$AL2.45941@news.uswest.net>...
j.. He is recognized as the first person in the world to publish warnings of the Year 2000 problem -- first in 1971, and again in 1979.

k.. And..... more! go to his site to learn more.....


there was a thread in the early 80s about various date-related problems, including discussion of some feb. 29 and end-of-decade problems encountered by ACP/PARS from the late 60s (the end-of-decade problem is similar to the end-of-century problem)

reposting of something from 1984 thread:
https://www.garlic.com/~lynn/99.html#24 BA solves Y2K

Adventure game (was:PL/? History (was Hercules))

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Date: Mon, 28 Jun 2004 12:49:25 -0700
Newsgroups: alt.folklore.computers
Subject: Re: Adventure game (was:PL/? History (was Hercules))
re:
https://www.garlic.com/~lynn/2004h.html#2 Adventure game ...

somewhat related: Security bug? My programming language made me do it!
http://acmqueue.com/modules.php?name=Content&pa=showpage&pid=160

I am an ageing techy, expert on everything. Let me explain the Middle East to you

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: I am an ageing techy, expert on everything. Let me explain the Middle East to you.
Newsgroups: alt.folklore.computers
Date: Wed, 07 Jul 2004 19:13:28 -0600
"Jack Peacock" writes:
Actually I do appreciate that the UK has been spared the worst, like mandatory vacation or the byzantine french labor laws. I credit Thatcher for this, the turning point being the moment Scargill caved in. Blair gets an honorable mention for skillfully stripping the TUC of much of it's influence over the Labor party and turning it into Tory Lite.

slightly related tale
https://www.garlic.com/~lynn/2004c.html#18 IT jobs move to India

The One True Language

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The One True Language
Newsgroups: comp.arch
Date: Thu, 08 Jul 2004 14:09:28 -0600
J Ahlstrom writes:
Or as one of the people associated with IBM's Fort Knox project said (paraphrased) Since it was supposed to solve an IBM problem rather than a customer problem, it was doomed.

possibly the same could be said of the future system project. misc. future system posts:
https://www.garlic.com/~lynn/submain.html#futuresys
some specific
https://www.garlic.com/~lynn/2000f.html#16 [OT] FS - IBM Future System
https://www.garlic.com/~lynn/2003l.html#30 Secure OS Thoughts

which had clone controllers as one of its main driving factors. misc. pcm posts:
https://www.garlic.com/~lynn/submain.html#360pcm

random 801, fort knox, etc. posts:
https://www.garlic.com/~lynn/subtopic.html#801

a few specific:
https://www.garlic.com/~lynn/2003.html#2 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#3 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003b.html#5 Card Columns
https://www.garlic.com/~lynn/2003c.html#7 what is the difference between ALU & FPU
https://www.garlic.com/~lynn/2003d.html#43 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003e.html#55 Reviving Multics
https://www.garlic.com/~lynn/2003e.html#56 Reviving Multics
https://www.garlic.com/~lynn/2003f.html#56 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2004f.html#27 [Meta] Marketplace argument
https://www.garlic.com/~lynn/2004g.html#24 |d|i|g|i|t|a|l| questions
https://www.garlic.com/~lynn/99.html#136a checks (was S/390 on PowerPC?)

CCD technology

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: CCD technology
Newsgroups: alt.folklore.computers
Date: Sun, 11 Jul 2004 09:55:25 -0600
and for something a little different, from long ago and far away, a fun project that I got to work on

Date: 08/04/83 22:27:33
From: wheeler

I was contacted by branch office about something to do with professor at Santa Cruz selecting computers for 400in telescope to be placed in Hawaii (he was interested in getting a meeting specifically to talk about shipping data between Hawaii and Santa Cruz ... hyperchannel and satellites).


... snip ... top of post, old email index, NSFNET email

Date: 11/21/84 08:55:26
From: wheeler

re: hsdt; oh, almost forgot. I'm having another meeting with the Berkeley ten meter telescope people (this time just the IBMers working with them). They want to set-up for remote observing (observatory will be about 14,000 foot level in Hawaii) from both "local" sea level and eventually the mainland. Current estimates are that the digitized image represents about 800kbits/sec of data during the evening hours (data flow is asymmetrical with telescope control commands going in the opposite direction only about 1200 baud).


... snip ... top of post, old email index, NSFNET email

One of the astronomers was at Santa Cruz, the engineering was done by a department at LBL, and I got to visit several times as the component engineering was being developed.
http://www.lbl.gov/Science-Articles/Archive/keck-telescope.html

One of the issues driving remote viewing was altitude sickness at the telescope's elevation. The original Keck grant was for $70m-$85m(?). The transition was to be to electronic (from film); this was in the days of small CCDs and the project started off doing tests with 200x200 (40k) CCD arrays; a far cry from today's 5megapixel cameras. There were rumors of gov. projects with 2kx2k CCD arrays (4megapixel) and Spielberg funding a 4kx4k CCD array (16megapixel) project (for movies). other references:
http://www2.keck.hawaii.edu/geninfo/about.html
http://www.keckobservatory.org/
http://antwrp.gsfc.nasa.gov/apod/ap971227.html
http://www.ps.uci.edu/physics/news/chanan.html
http://scikits.com/KFacts.html
http://www2.jpl.nasa.gov/sl9/keck.html

lots of general hsdt (high speed data transport) references:
https://www.garlic.com/~lynn/subnetwork.html#hsdt

old reference to the 10m project (in a thread about accepting NSF funding for projects and losing control):
https://www.garlic.com/~lynn/2000d.html#19

CCD technology

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CCD technology
Newsgroups: alt.folklore.computers
Date: Mon, 12 Jul 2004 11:20:43 -0600
only tangentially related to the original posting:
https://www.garlic.com/~lynn/94.html#33b High Speed Data Transport (HSDT)
https://www.garlic.com/~lynn/2000b.html#69 oddly portable machines
https://www.garlic.com/~lynn/2000e.html#45 IBM's Workplace OS (Was: .. Pink)
https://www.garlic.com/~lynn/2003m.html#59 SR 15,15
https://www.garlic.com/~lynn/2004g.html#12 network history

other misc. from the archives:

Date: 08/22/83 14:47:21
From: wheeler

i've been invited to go up to Lick observatory next Tuesday at 1pm to discuss technical details of the 10meter observatory being planned for Hawaii.

They are planning on doing image processing ... figuring 8.6*10**9 bits per evening. There will be micros controlling the 36 mirrors and some big crunchers to handle the data.


... snip ... top of post, old email index

Date: 08/30/83 18:02:51
From: wheeler

went by Lick observatory ... basically it was to see how observatories currently operate as background for subsequent discussion on the 10 meter proposal. Lick appears to be somewhat primitive on the scale of being computerized ... although I can't judge if that is just the current state of the art in that area.

University observatories in general appear to be very strapped for funds. They are just in the process of installing an LSI/11 as an upgrade to two PDP8s. A lot of stuff is done with dedicated (cheap) microprocessors (in many cases put together and maintained themselves). Even trivial efforts to computerize things are gated by financial considerations.

10 meter telescope hopefully will be better funded in that area but they are talking about it being 3-5 years out. What is available in that timeframe may drastically change ... especially in price at the low end.


... snip ... top of post, old email index, NSFNET email

Date: 05/19/86 10:43:19
From: wheeler

i just got a note from TIW (parent company of multipoint) ... they say they've been awarded mechanical design contract for the Berkeley 10 meter telescope .. now called 10 meter W.M. Keck telescope for the California Association for Research in Astronomy (CARA). Multipoint is company making our 2nd set of satellite gear.


... snip ... top of post, old email index, NSFNET email

random note: This was an interesting situation ... TIW (toronto iron works) was a company that got into doing lots of big satellite dishes ... as well as some of the deep space probe dishes. apparently because they were doing so much satellite mechanical stuff ... they formed a startup that did satellite earth station electronics. it was otherwise somewhat strange to have an iron works company with a high-tech electronics spin-off. multipoint was one of the companies hired to produce tdma earth stations to our specs.

previous posting:
https://www.garlic.com/~lynn/2004h.html#7

CCD technology

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CCD technology
Newsgroups: alt.folklore.computers
Date: Mon, 12 Jul 2004 18:49:19 -0600
misc. keck &/or CCD urls that i stumbled across:
http://www.spaceref.ca/news/viewpr.html?pid=10321
http://www2.keck.hawaii.edu/news/hires.html
http://www2.keck.hawaii.edu/inst/
http://www.chem.arizona.edu/icsoi/pages/2003_presentations.htm
http://cadcwww.dao.nrc.ca/ADASS/adass_proc/adass3/papers/cohenj/cohenj.html
http://cadcwww.dao.nrc.ca/ADASS/adass_proc/adass3/papers/luptonw/luptonw.html

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Possibly stupid question for you IBM mainframers... :-)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Possibly stupid question for you IBM mainframers...  :-)
Newsgroups: comp.lang.cobol,alt.folklore.computers
Date: Thu, 15 Jul 2004 08:37:16 -0600
spinoza1111@yahoo.com (Edward G. Nilges) writes:
Be sure to learn Rexx, which is a language developed at IBM UK in the early 1980s by Mike Cowlishaw. Originally intended as a way to write procedures on Conversational Monitor System, REXX is now also used on MVS and Time Squandering Option (TSO).

some old rexx threads:
https://www.garlic.com/~lynn/94.html#11 REXX
https://www.garlic.com/~lynn/2000b.html#29 20th March 2000
https://www.garlic.com/~lynn/2000b.html#30 20th March 2000
https://www.garlic.com/~lynn/2000b.html#31 20th March 2000
https://www.garlic.com/~lynn/2000b.html#32 20th March 2000
https://www.garlic.com/~lynn/2000b.html#33 20th March 2000
https://www.garlic.com/~lynn/2002g.html#57 Amiga Rexx
https://www.garlic.com/~lynn/2002g.html#58 Amiga Rexx
https://www.garlic.com/~lynn/2002g.html#59 Amiga Rexx
https://www.garlic.com/~lynn/2002g.html#60 Amiga Rexx
https://www.garlic.com/~lynn/2004d.html#17 REXX still going strong after 25 years
https://www.garlic.com/~lynn/2004d.html#19 REXX still going strong after 25 years
https://www.garlic.com/~lynn/2004d.html#20 REXX still going strong after 25 years
https://www.garlic.com/~lynn/2004d.html#21 REXX still going strong after 25 years
https://www.garlic.com/~lynn/2004d.html#26 REXX still going strong after 25 years
https://www.garlic.com/~lynn/2004d.html#41 REXX still going strong after 25 years
https://www.garlic.com/~lynn/2004d.html#42 REXX still going strong after 25 years

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mainframes (etc.)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframes (etc.)
Newsgroups: bit.listserv.ibm-main
Date: Thu, 15 Jul 2004 10:57:56 -0600
tedmacneil@bell.blackberry.net (tedmacneil) writes:
PCB: Printed Circuit Board / Programme Control Block

MAC: message authentication code / media access control

ECC: error correcting code / elliptical curve cryptography

... note that, at least for reed-solomon error correcting codes, both ECCs involve galois fields.

minor past refs:
https://www.garlic.com/~lynn/2002e.html#53 Mainframers: Take back the light (spotlight, that is)
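
as a minimal sketch of what "involve galois fields" means in practice (illustration only; the reducing polynomial 0x11d is one common reed-solomon choice, and elliptic curve cryptography works over much larger fields):

/* Multiplication in GF(2^8) with reducing polynomial x^8+x^4+x^3+x^2+1
 * (0x11d), a field commonly used by reed-solomon codes.  Elliptic curve
 * cryptography uses the same kind of arithmetic over much larger fields. */
#include <stdio.h>
#include <stdint.h>

static uint8_t gf256_mul(uint8_t a, uint8_t b)
{
    uint8_t p = 0;
    while (b) {
        if (b & 1)
            p ^= a;                              /* "add" is XOR in GF(2^m) */
        b >>= 1;
        if (a & 0x80)
            a = (uint8_t)((a << 1) ^ 0x1d);      /* reduce by 0x11d */
        else
            a <<= 1;
    }
    return p;
}

int main(void)
{
    printf("0x53 * 0xCA = 0x%02X in GF(2^8)\n", gf256_mul(0x53, 0xCA));
    return 0;
}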

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

ECC book reference, please

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ECC book reference, please
Newsgroups: sci.crypt
Date: Thu, 15 Jul 2004 11:03:00 -0600
Michael Amling writes:
The ECC Tutorial at www.certicom.com gives you the gist. I think the NIST has suggested ECC curves, probably in whichever FIPS covers ECDSA, although I don't have a URL at hand.

FIPS186-2 ecdsa also cites X9.62 ...
http://csrc.nist.gov/cryptval/dss.htm

one of the appendixes lists "approved" curves. note that there is not yet a conformance testing specification for ecdsa.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Two-factor Authentication Options?

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Two-factor Authentication Options?
Newsgroups: comp.protocols.kerberos
Date: Thu, 15 Jul 2004 15:00:40 -0600
hotz@jpl.nasa.gov ("Henry B. Hotz") writes:
In the long run the Kerberos password is a problem because the human brain does not obey Moore's law. As I see it the solution is to use some form of two-factor authentication for the initial ticket exchange.

So what options are there in that space?

AFAIK none --- with the standard open source servers. There are patches available for MIT to support CRYPTOcard and SecureID. There are patches available for Heimdal to support X509 certificates (PKINIT).

Anything else out there?


the original pkinit specification for kerberos had certificate-less public keys (aka w/o certificates) .... the certificate option was added later.

certificate-less public keys ... basically registers public keys in lieu of passwords .... and does digital signature verification using the registered public keys from the online registry.

the nominal problem with shared-secret passwords is that you need a unique shared-secret for every unique security domain. when a person was only involved in one security domain authentication scenario, it wasn't too bad .... but as the number of different security domains grew, people were finding that they needed scores of unique passwords.

the simple public key scenario is you encapsulate the private key in a token, register the (single) public key in lieu of password, and use the token for performing a digital signature. the registered public key is used to authenticate the digital signature.

the requirement for needing a unique password for every unique security domain was based on the fact that just learning the password (in one domain) was sufficient for impersonation in a different security domain (i.e. an ISP garage-operation password shouldn't be the same as your online banking password). public key doesn't suffer from this vulnerability since knowing somebody's public key isn't sufficient for impersonation.

the hardware token, by itself provides one-factor, something you have authentication .... from three-factor authentication
something you have
something you know
something you are


in theory, a single "digital signature" hardware token could be used across a multitude of different security domains .... since knowing the associated public key isn't sufficient to impersonate.

it is also possible to have certified hardware tokens that only work in the approved manner when the appropriate pin has been supplied. The result can be two-factor authentication .... aka
something you have
something you know


w/o the PIN being a shared-secret ... and therefore not subject to requiring a unique value for every security domain. This is done by providing the "secret" (NOT shared-secret) PIN to the hardware token. The security domain doesn't need to know the person's PIN ... they just need to have certified that the hardware token only works in the approved manner when the correct PIN has been entered. Then, based on having certified the hardware token (to require the correct pin for operation), verification of the digital signature implies two-factor authentication:
person has the hardware token
person entered the correct pin


it is also possible to get hardware tokens which require something like a fingerprint to work correctly (in lieu of a PIN) ... in which case it is still two-factor authentication ... but
person has the hardware token
person has the correct fingerprint


X.509 identity certificates were somewhat the rage in the early 90s .... however, it was discovered that they represented a whole bunch of privacy and liability issues. Some number of x.509 certificates were used in a truncated manner in the mid-90s ... which effectively only contained an account number and a public key ... and were referred to as relying-party-only certificates:
https://www.garlic.com/~lynn/subpubkey.html#rpo

however, it was possible to show that such certificates were redundant and superfluous for normal online environments:

1) key owner registers their public key with online infrastructure
2) online infrastructure stores the public key in database
3) online infrastructure sends a RPO-certificate back to the key owner
4) key owner authenticates something by doing a digital signature
5) key owner sends the digital signature and certificate back to the online infrastructure
6) online infrastructure pulls public key from online database
7) online infrastructure verifies digital signature with online public key

the certificate typically contains a stale, static subset of some online information.

certificates were originally designed to provide some level of assurance in an offline environment where the relying party had no recourse to the real online registered information.

when the relying party has access to the real, online, timely registered information ... then the stale, static certificate subset is redundant and superfluous.
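
a minimal C sketch of that flow (hypothetical names; verify_signature() is just a stand-in for a real crypto-library verify): the relying party already holds the registered public key, so whatever certificate accompanies the request adds nothing.

/* Sketch of the certificate-less flow above: the relying party looks up
 * the registered public key by account and verifies the signature
 * against it; any certificate sent along merely repeats what the
 * registry already holds.  verify_signature() is a hypothetical
 * stand-in for a real digital-signature verify (ECDSA, RSA, ...). */
#include <stdio.h>
#include <string.h>

struct registration {
    const char *account;
    const char *public_key;      /* stored at registration time (step 2) */
};

static const struct registration registry[] = {
    { "acct-1001", "PUBKEY-A" },
    { "acct-1002", "PUBKEY-B" },
};

/* hypothetical: returns 1 if sig verifies under pubkey for msg */
static int verify_signature(const char *pubkey, const char *msg,
                            const char *sig)
{
    /* placeholder check only -- a real implementation calls into a
     * crypto library here */
    return strcmp(sig, pubkey) == 0 && msg != NULL;
}

static int authenticate(const char *account, const char *msg,
                        const char *sig)
{
    for (size_t i = 0; i < sizeof registry / sizeof registry[0]; i++)
        if (strcmp(registry[i].account, account) == 0)
            /* steps 6-7: pull the registered key, verify against it */
            return verify_signature(registry[i].public_key, msg, sig);
    return 0;                               /* unknown account */
}

int main(void)
{
    printf("acct-1001: %s\n",
           authenticate("acct-1001", "transfer $10", "PUBKEY-A")
               ? "verified" : "rejected");
    return 0;
}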

a side-note about the something you have hardware token; there is some tendency for every unique security domain to want to issue its own "certified" hardware token. this has some human factor issues in much the same way that trying to manage scores, possibly a hundred, different passwords breaks down in practical application. A person is about as likely to manage (well) a hundred unique, different hardware tokens as they are to manage (well) a hundred unique, different passwords.

a pending issue is how a person could get away with one, or possibly extremely few, different hardware tokens .... and avoid getting into the same proliferation bind that they now have with passwords.

misc. other postings on the subject:
https://www.garlic.com/~lynn/aadsm17.htm#0 Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)
https://www.garlic.com/~lynn/aadsm17.htm#1 Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)
https://www.garlic.com/~lynn/aadsm17.htm#2 Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)
https://www.garlic.com/~lynn/aadsm17.htm#3 Non-repudiation (was RE: The PAIN mnemonic)
https://www.garlic.com/~lynn/aadsm17.htm#4 Difference between TCPA-Hardware and a smart card (was: examp le: secure computing kernel needed)
https://www.garlic.com/~lynn/aadsm17.htm#5 Non-repudiation (was RE: The PAIN mnemonic)
https://www.garlic.com/~lynn/aadsm17.htm#7 Phillips, Visa push contactless payments in consumer devices
https://www.garlic.com/~lynn/aadsm17.htm#9 Setting X.509 Policy Data in IE, IIS, Outlook
https://www.garlic.com/~lynn/aadsm17.htm#13 A combined EMV and ID card
https://www.garlic.com/~lynn/aadsm17.htm#15 PKI International Consortium
https://www.garlic.com/~lynn/aadsm17.htm#16 PKI International Consortium
https://www.garlic.com/~lynn/aadsm17.htm#18 PKI International Consortium
https://www.garlic.com/~lynn/aadsm17.htm#19 PKI International Consortium
https://www.garlic.com/~lynn/aadsm17.htm#21 Identity (was PKI International Consortium)
https://www.garlic.com/~lynn/aadsm17.htm#22 secret hackers to aid war on internet fraud
https://www.garlic.com/~lynn/aadsm17.htm#23 PKI International Consortium
https://www.garlic.com/~lynn/aadsm17.htm#25 Single Identity. Was: PKI International Consortium
https://www.garlic.com/~lynn/aadsm17.htm#26 privacy, authentication, identification, authorization
https://www.garlic.com/~lynn/aadsm17.htm#27 Re:Identity Firewall. l PKI International Consortium
https://www.garlic.com/~lynn/aadsm17.htm#34 The future of security
https://www.garlic.com/~lynn/aadsm17.htm#36 Yahoo releases internet standard draft for using DNS as public key server
https://www.garlic.com/~lynn/aadsm17.htm#38 Study: ID theft usually an inside job
https://www.garlic.com/~lynn/aadsm17.htm#39 The future of security
https://www.garlic.com/~lynn/aadsm17.htm#40 The future of security
https://www.garlic.com/~lynn/aadsm17.htm#41 Yahoo releases internet standard draft for using DNS as public key server
https://www.garlic.com/~lynn/aadsm17.htm#42 Article on passwords in Wired News
https://www.garlic.com/~lynn/aadsm17.htm#46 authentication and authorization (was: Question on the state of the security industry)
https://www.garlic.com/~lynn/aadsm17.htm#47 authentication and authorization ... addenda
https://www.garlic.com/~lynn/aadsm17.htm#50 authentication and authorization (was: Question on the state of the security industry)
https://www.garlic.com/~lynn/aadsm17.htm#51 authentication and authorization
https://www.garlic.com/~lynn/aadsm17.htm#53 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm17.htm#54 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm17.htm#55 Using crypto against Phishing, Spoofing and Spamming

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Two-factor Authentication Options?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Two-factor Authentication Options?
Newsgroups: comp.protocols.kerberos
Date: Thu, 15 Jul 2004 15:34:35 -0600
slight addenda .....

hardware tokens .... unless a biometric or a pin/password is required in addition to the hardware token (i.e. beyond simply having something) ... then it is only single-factor authentication ... aka
something you have

as opposed to the single-factor authentication
something you know

for two-factor authentication, it requires that at least two of the three are required for the authentication to be satisfied:
something you have
something you know
something you are


simple hardware token operation with nothing more, would still only be single-factor authentication.

x.509 certificates have nothing directly to do with the authentication methodology ... certificates simply supply the public key used to verify a digital signature (originally targeted for offline environments where the relying party has absolutely no access to any registration information).

the verification of the digital signature may be used to imply authentication .... say something you know and/or something you have .... but that is orthogonal to the mechanism that is used to provide the public key to the relying party for the digital signature verification.

if the relying party has certified that the private key is stored in an encrypted software file and that the appropriate decryption key is required to access the private key, then the verification of the digital signature (with a public key) can imply single-factor, something you know authentication (the key owner supposedly supplied the decryption key for the software private key file).

if the relying party has certified that a unique private key is stored in a hardware token (and the private key can never be revealed), then the verification of a digital signature can imply single-factor, something you have authentication.

if the relying party has certified that a unique private key is stored in a hardware token and the token only will operate in an approved manner when the correct pin has been entered, then the verification of a digital signature can imply two-factor, something you have and something you know authentication.

x.509 certificates are a source of an appropriate public key for performing the digital signature verification ... but relying-party-only, stale, static, x.509 identity certificates can be shown to be redundant and superfluous in an environment where the relying-party has access to the registration information.

regardless of whether the source of the public key is certificate based or certificate-less based, the verification of the digital signature still doesn't tell you what that verification means. The verification of the digital signature can imply a number of different authentication mechanisms depending on the environment that manages the private key and originates the digital signature.
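
the point of the last several paragraphs can be summarized as a small lookup table ... a sketch only (the enum names are made up; the categories just mirror the prose above):

/* Table encoding the point above: a verified digital signature by
 * itself says nothing about factors; what it implies depends on how
 * the relying party certified the private-key environment. */
#include <stdio.h>

enum key_env {
    ENCRYPTED_SOFTWARE_FILE,      /* decryption key needed to use private key */
    HARDWARE_TOKEN,               /* private key never leaves the token       */
    HARDWARE_TOKEN_WITH_PIN,      /* token only signs after correct PIN       */
    HARDWARE_TOKEN_WITH_BIOMETRIC /* token only signs after correct biometric */
};

static const char *implied_factors(enum key_env env)
{
    switch (env) {
    case ENCRYPTED_SOFTWARE_FILE:       return "something you know";
    case HARDWARE_TOKEN:                return "something you have";
    case HARDWARE_TOKEN_WITH_PIN:       return "something you have + something you know";
    case HARDWARE_TOKEN_WITH_BIOMETRIC: return "something you have + something you are";
    }
    return "unknown";
}

int main(void)
{
    printf("token+pin implies: %s\n", implied_factors(HARDWARE_TOKEN_WITH_PIN));
    return 0;
}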

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Possibly stupid question for you IBM mainframers... :-)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Possibly stupid question for you IBM mainframers...  :-)
Newsgroups: comp.lang.cobol,alt.folklore.computers
Date: Thu, 15 Jul 2004 18:25:57 -0600
> The Bunch... Burroughs, Univac, Nixdorf, C????, Honeywell

a few old bunch/dwarf postings:
https://www.garlic.com/~lynn/2002o.html#78 Newsgroup cliques?
https://www.garlic.com/~lynn/2003.html#14 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#36 mainframe
https://www.garlic.com/~lynn/2003.html#71 Card Columns
https://www.garlic.com/~lynn/2003b.html#61 difference between itanium and alpha
https://www.garlic.com/~lynn/2003o.html#43 Computer folklore - forecasting Sputnik's orbit with
https://www.garlic.com/~lynn/2004d.html#22 System/360 40th Anniversary seven dwarfs: burroughs, control data, general electric, honeywell, ncr, rca, sperry-rand

BUNCH: burroughs, univac, ncr, control data, honeywell

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Page coloring required?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Page coloring required?
Newsgroups: comp.arch
Date: Fri, 16 Jul 2004 17:42:44 -0600
googlenews@peachfish.com (Zalman Stern) writes:
This covers the data cache aspect of aliasing. Figure 8 of the above paper illustrates the construction of the 52-bit virtual address. The use of an inverted page table restricts address aliasing at the virtual address translation level. The translation mechanism requires that each physical page have precisely one 52-bit virtual address. One must either use segment sharing or a mechanism which effectively faults a page from one alias to another at access time.

in the ibm mainframes, TLBs tagged different virtual address spaces by using the real address of the virtual address table. early machines had a stack of seven concurrent virtual address spaces .... there was a lookaside of the real address of the (virtual address) table and a three-bit identifier. then every TLB entry contained a 3-bit tag .... providing TLB line associativity to a specific address space (identified by the real address of the associated table). when a new address space was introduced .... one of the existing entries was scavenged ... and all the TLB entries with that specific 3-bit tag were invalidated. as the technology progressed, the number of address spaces that the TLB tracked grew ... and so did the number of bits in the tag field. the total number of unique virtual address spaces (in a system) was somewhat limited by the physical space for tables (and unique real address table origins).
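
a minimal C sketch of that bookkeeping (illustration only, not any real machine's implementation; the seven tracked address spaces and 3-bit tag follow the description above):

/* Sketch of the tagged-TLB bookkeeping described above: a small stack
 * of segment-table origins, each assigned a tag; TLB lines carry the
 * tag, and reusing a tag for a new address space invalidates every
 * line that carries it. */
#include <stdio.h>
#include <stdint.h>

#define NTAGS     7      /* concurrent address spaces tracked */
#define TLB_LINES 64

struct sto_entry { uint64_t table_origin; int in_use; };
struct tlb_line  { int valid; int tag; uint64_t vpage, rpage; };

static struct sto_entry sto_stack[NTAGS];
static struct tlb_line  tlb[TLB_LINES];
static int next_victim;

/* find (or assign) the tag for an address space, identified by the
 * real address of its translation table */
static int tag_for_address_space(uint64_t table_origin)
{
    for (int t = 0; t < NTAGS; t++)
        if (sto_stack[t].in_use && sto_stack[t].table_origin == table_origin)
            return t;                       /* already tracked */

    /* scavenge an entry: every TLB line with that tag is now stale */
    int t = next_victim;
    next_victim = (next_victim + 1) % NTAGS;
    for (int i = 0; i < TLB_LINES; i++)
        if (tlb[i].valid && tlb[i].tag == t)
            tlb[i].valid = 0;

    sto_stack[t].table_origin = table_origin;
    sto_stack[t].in_use = 1;
    return t;
}

int main(void)
{
    int t1 = tag_for_address_space(0x1000);
    int t2 = tag_for_address_space(0x2000);
    printf("address space 0x1000 -> tag %d, 0x2000 -> tag %d\n", t1, t2);
    return 0;
}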

801 went to inverted tables .... so there was no longer a unique real address that could be associated with an address space. The machine was also somewhat defined for a different operating system paradigm ... where there wasn't the concept of unique address spaces .... there were just addressable virtual objects (all within a single operating domain). romp addressed this by defining a 12bit segment identifier ... and therefore allowed up to 4096 concurrent virtual objects. There was 32bit virtual addressing with the top four bits selecting one of 16 segment registers ... the segment registers contained 12bit segment id values. The lookup on the TLB then became the page number from the 28bit segment displacement address ... aka 12bits ... and the 12bit segment tag-id from the segment register (24bits total).

translating this into a unix-type virtual address space paradigm .... it was something along the lines of pre-allocating 16 12bit tag numbers for each address space .... effectively translating the 4096 virtual segment object paradigm into a 256 32-bit virtual address space paradigm. however, there was some residual leftover from the original single virtual address space, 4096 virtual segment objects design point .... such that sometimes the description came out as the 12bit tag (from the segment id for the segment-associative TLB) being combined with the 28bit segment displacement .... resulting in a 40bit virtual address architecture.

when RIOS doubled the segment tag field size from 12bits to 24bits ... the residual description talking about a 40bit virtual address architecture became a 24+28=52bit virtual address architecture.
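
the arithmetic behind those 40bit/52bit descriptions, as a minimal C sketch (register contents are made-up illustration, and the exact hardware split of the 28-bit displacement may differ from what is shown here):

/* Arithmetic behind the 40-bit / 52-bit descriptions above: the top 4
 * bits of a 32-bit effective address pick one of 16 segment registers,
 * whose segment id (12 bits on romp, 24 bits on rios) is concatenated
 * with the remaining 28-bit segment displacement. */
#include <stdio.h>
#include <stdint.h>

static uint32_t seg_regs[16];       /* segment id per register (hypothetical) */

static uint64_t expand(uint32_t ea, int segid_bits)
{
    uint32_t segid = seg_regs[ea >> 28] & ((1u << segid_bits) - 1);
    uint32_t disp  = ea & 0x0FFFFFFF;            /* 28-bit displacement */
    return ((uint64_t)segid << 28) | disp;       /* 12+28=40 or 24+28=52 bits */
}

int main(void)
{
    seg_regs[3] = 0xABC;                         /* made-up segment id */
    uint32_t ea = (3u << 28) | 0x0123456;        /* reg 3, displacement 0x123456 */
    printf("romp-style 40-bit virtual address: 0x%010llx\n",
           (unsigned long long)expand(ea, 12));
    return 0;
}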

other architectures have TLB virtual address space associative architectures ... where the TLB associativity is at the virtual address space level ... rather than the segment-associative level. So, say there was an architecture that supported a 32bit virtual address space identifier .... where each virtual address space was 32bits ... then, using the romp/rios logic, the machine would be a 64bit virtual address space machine.

Mapping the segment paradigm into the address space paradigm .... it is possible to share/conserve TLB entries because a unique shared segment could have its own unique segment identifier ... and therefore all TLB entries for pages in that segment would be the same, regardless of which address space was involved (aka they are segment-id associative rather than virtual address space-id associative ... and a "shared" segment could have the same segment-id regardless of the address space actually involved). If the sharing is restricted to segment level sharing in a segment-associative TLB ... then aliases won't exist like they might in an address space associative TLB ... aka the same virtual shared-segment page might appear at multiple places in the TLB because it has been tagged by multiple different address space identifiers.

As a side note .... when the sharing unit and the TLB associative unit are the same .... TLB aliasing is effectively eliminated. In theory, the same shared segment can appear at different segment numbers in different virtual address spaces ... and there is still no alias problem (since the virtual address segment number isn't involved in the TLB indexing, it is strictly segment-id associative).

when the unit of associativity (aka segment associativity means that the TLB is indexed by unique segment IDs rather than unique address space IDs) is disjoint from the unit of sharing ... then it is possible to have alias entries in the TLB ... i.e. unique, different TLB entries referring to the same thing. Typical operation is that aliases are searched to see if they have matching real address values.

you don't need an inverted table architecture to have a segment-associative TLB. the original mainframe architecture provided for real-table-address based segment associativity ... but I know of no machines that implemented it. In that scheme ... rather than tagging the TLB entries with some unique virtual address space identifier (either a bit-pattern or some unique real address) .... each segment had its own unique real table (and corresponding real table address). in a segment associative/indexed TLB .... shared segments don't result in duplicate/alias entries (modulo having the same shared segment appear at different virtual address locations in different virtual address spaces .... and the TLB page index uses bits from that portion of the virtual address).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Google loves "e"

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Google loves "e"
Newsgroups: alt.folklore.computers
Date: Fri, 16 Jul 2004 16:52:06 -0600
"Charlie Gibbs" writes:
I saw a billboard locally that consisted of a string of hex digits which decoded to an ASCII message. I forgot what it was, though...

i once went to a theater in downtown madrid ... and they were showing a short film produced at univ. of madrid. a big part of the film was in what looked like an apartment ... but one wall was covered with a couple dozen tv screens all slaved to scrolling the same text at about 1200 baud. what was weird was recognizing that they were constantly scrolling a vm/370 load map ... and even worse, i recognized the PLC level (PLC ... the monthly maint. distributions); at the time there would have been well over 100 monthly PLCs ... although the film would presumably have tended to use a more recent PLC rather than an earlier one. The load map didn't explicitly identify the PLC ... but i deduced it from what maint was listed and what maint wasn't listed.

long ago and far away, i used to be able to read the ebcdic holes in punch cards. if you were punching your own cards on an 026/029 ... the keypunch printed the meaning of the punch holes across the top of the card (at least for the character codes). however, if the system punched a deck (say on a 2540) ... there were just the holes and no print line across the top.

the process actually involved converting the punch holes to hex ... and then possibly for hex that had character representation ... converting the hex to character (and/or just converting the punch holes directly to character).

binary "txt" cards were the output of assembler or compiler that had a hex "02" in col-1 (12-2-9 punch) and the letters "txt" in cols 2-4. Then there was a program (displacement) address (in hex) and up to 56 bytes of contents starting at that location.
https://www.garlic.com/~lynn/2001.html#60

cards had 12 rows ... in theory allowing up to 12 holes in each of the 80 card columns. ebcdic encoded only 256 values per column (one 8-bit byte) ... some of which had character representation.

pre-360 machines had allowed encoding two 6-bit bytes per column (column binary) ... a possibility of 4096 punch hole combinations per column (lace cards, with all holes punched in all columns, tended to be somewhat fragile).
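
a minimal sketch of that arithmetic in C (the row-to-bit assignment shown in the comments is just one convention, for illustration):

/* The arithmetic from the last two paragraphs: 12 punch rows give
 * 2^12 = 4096 possible hole patterns per column; "column binary" reads
 * a column as two 6-bit bytes, while ebcdic maps a column to one 8-bit
 * value (so only 256 of the 4096 patterns are used). */
#include <stdio.h>

int main(void)
{
    unsigned column = 0xABC & 0xFFF;        /* 12-bit hole pattern (made up) */

    unsigned upper6 = (column >> 6) & 0x3F; /* e.g. rows 12,11,0,1,2,3 */
    unsigned lower6 =  column       & 0x3F; /* e.g. rows 4,5,6,7,8,9   */

    printf("patterns per column: %d\n", 1 << 12);
    printf("column binary bytes: %02o %02o (octal)\n", upper6, lower6);
    return 0;
}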

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Low Bar for High School Students Threatens Tech Sector

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Low Bar for High School Students Threatens Tech Sector
Newsgroups: alt.folklore.computers
Date: Fri, 16 Jul 2004 19:49:04 -0600

http://itmanagement.earthweb.com/career/article.php/3382251

in 1994 there were similar types of stories being published based on information from the 1990 census. The 1994 quotes were that half of (all) 18 year olds were functionally illiterate.

ten years later, this article quotes the federal dept. of education that 7 out of 10 students graduate from high school w/o completing the courses needed to succeed in the workplace (w/o commenting on those that don't even graduate).

past related postings in various threads:
https://www.garlic.com/~lynn/2002k.html#41 How will current AI/robot stories play when AIs are real?
https://www.garlic.com/~lynn/2002k.html#45 How will current AI/robot stories play when AIs are real?
https://www.garlic.com/~lynn/2003i.html#28 Offshore IT
https://www.garlic.com/~lynn/2003i.html#45 Offshore IT
https://www.garlic.com/~lynn/2003i.html#55 Offshore IT
https://www.garlic.com/~lynn/2003p.html#33 [IBM-MAIN] NY Times editorial on white collar jobs going
https://www.garlic.com/~lynn/2004b.html#2 The SOB that helped IT jobs move to India is dead!
https://www.garlic.com/~lynn/2004b.html#42 The SOB that helped IT jobs move to India is dead!
https://www.garlic.com/~lynn/2004d.html#18 The SOB that helped IT jobs move to India is dead!

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

fast check for binary zeroes in memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: fast check for binary zeroes in memory
Newsgroups: comp.arch
Date: Tue, 20 Jul 2004 08:38:04 -0600
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
Other than the straigthforward way, the only way that comes to mind is to check per page if the page was originally mapped to a zero-filled page (e.g., a MAP_ANON page private to the process), and has not yet been written to. This check will give a yes (page all-zero) relatively fast, but when it fails, you still have to check in the straightforward way. I don't know of a way for a user-space process to use this way.

... slight topic drift ... the original cp/67 that was installed at the university in jan. 1968 ... had a zeros page implementation ... where there was an actual 4k page of all zeros formatted on the boot volume. in early '68, i changed that code to use store multiple of cleared registers in a bxle loop. i also changed to lazy allocation ... don't allocate (disk space) until it was necessary to actually write a page out. later, i implemented the dup/no-dup allocation strategy ... nominally leave a page allocated on disk (aka duplicate) when it is brought into memory ... which may save a write if the page was never changed during its residency in memory. under constrained circumstances for page disk space ... switch to a no-dup strategy, i.e. deallocate disk space whenever a page is fetched ... which then always requires a later write when a page is selected for replacement (trading off disk space against write operations) ... modulo a resident zeros page ... which, if not changed ... is just discarded ... since it can still be accurately recreated on the fly.
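
since the thread topic is a fast check for binary zeroes, here is a minimal word-at-a-time C sketch over a 4KB page (illustration only ... the cp/67 change described above went the other direction, clearing pages rather than checking them):

/* Minimal word-at-a-time check of a 4KB page for binary zeroes. */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096

static int page_is_zero(const void *page)
{
    const uint64_t *p = page;
    for (size_t i = 0; i < PAGE_SIZE / sizeof *p; i++)
        if (p[i] != 0)
            return 0;
    return 1;
}

int main(void)
{
    static uint64_t page[PAGE_SIZE / sizeof(uint64_t)];   /* zero-initialized */
    printf("all zero? %s\n", page_is_zero(page) ? "yes" : "no");

    page[17] = 1;
    printf("after a store: %s\n", page_is_zero(page) ? "yes" : "no");
    return 0;
}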

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Vintage computers are better than modern crap !

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Vintage computers are better than modern crap !
Newsgroups: alt.folklore.computers
Date: Fri, 23 Jul 2004 09:56:04 -0600
Rupert Pigott writes:
Joking aside... It seems to me that the POWER/PowerPC brigade within IBM are attempting to at least replicate the capabilities of the IBM mainframes. The mainframes are losing their unique architectural position, so perhaps we might see the death of the S/360 line within a decade...

slightly related ... my wife and I were doing the ha/cmp project starting 15 years ago ... to replicate availability qualities. During ha/cmp, we had coined the terms disaster survivability and geographic survivability. we had previously both spent significant time in mainframes. at one time she was responsible for loosely-coupled in POK.
https://www.garlic.com/~lynn/subtopic.html#hacmp
specific ref from 92:
https://www.garlic.com/~lynn/95.html#13

misc. on Peer-Coupled Shared Data
https://www.garlic.com/~lynn/submain.html#shareddata

at one point during the ha/cmp project, i was asked to help write a section in the corporate continuous availability strategy document ... however it got replaced because both Rochester and POK objected (in part, at the time, they couldn't meet the objectives). misc. past references:
https://www.garlic.com/~lynn/2001i.html#41 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001i.html#48 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001i.html#49 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2002f.html#4 Blade architectures
https://www.garlic.com/~lynn/99.html#71 High Availabilty on S/390
https://www.garlic.com/~lynn/99.html#128 Examples of non-relational databases

shortly after Amdahl had gotten funding (early 70s) he gave a talk at MIT ... including the reasoning he had used with the money people for getting funding. basically it was that there was so much existing executable mainframe code (at the time, something like $200b) ... that corporations would find it simpler to just keep running it for at least the next 30 years (rather than rewrite it) ... even if IBM completely walked away from 360s and went to a completely different kind of machine.

somewhat behind the scenes, part of the motivation for Amdahl leaving and starting his own company was that ibm was in the throes of future systems ... intending to do just exactly that (future systems was a more radical change than the 360 change had been). misc. fs
https://www.garlic.com/~lynn/submain.html#futuresys

one of the prime motivations for FS was the clone control units ... i.e. the plug compatible manufacturers ... in some sense the PCM controllers gave rise to FS ... and FS, in turn, gave rise to the PCM processors. misc. references for getting blamed for helping create the pcm controller business
https://www.garlic.com/~lynn/submain.html#360pcm

misc past posts mentioning Amdahl & mit
https://www.garlic.com/~lynn/2002j.html#20 MVS on Power (was Re: McKinley Cometh...)
https://www.garlic.com/~lynn/2003.html#36 mainframe
https://www.garlic.com/~lynn/2003e.html#20 unix
https://www.garlic.com/~lynn/2003p.html#30 Not A Survey Question
https://www.garlic.com/~lynn/2004d.html#22 System/360 40th Anniversary

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Basics of key authentication

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Basics of key authentication
Newsgroups: comp.security.ssh
Date: Sun, 25 Jul 2004 02:46:30 -0600
"OpticTygre" writes:
Ok, so everything I've read basically tells me the client creates a public and private key. The public key gets copied to the server, and when the client wants to log in, the server encrypts some message with the public key, and the client decrypts it with its private key to prove he is who he says he is. Is that right so far?

Alright, if that's ok, then I have a few questions.

1. A server can have tons of public keys stored on it. How does he know which public key to encrypt the message with for the client?

2. In the process of public / private key authentication for logins, what is the order things are typically done? IE: a. client says "hey, I want to connect" b. client sends a message encrypted with private key c. server decrypts through list of public keys etc..... (I'm sure the above isn't right)

In other words, what's the step-by-step process used for authenticating via public/private keys between client and server? Thanks for helping to clear things up.


a radius scenario .... large percentage of ISPs around the world use radius as the standard mode of login by clients ... with userids and passwords. In the public key scenario ... the client registers a public key (in lieu of a password) and selects digital signature challenge/response authentication.

the client ppp connection code sends "login <their userid>" ... the server sends back some random challenge. the client combines the random challenge with some additional data ... and digitally signs it with their private key. the client returns the client-contributed data and the digital signature to the server. the server takes the original random challenge and the client-contributed data and uses the public key on file to validate the digital signature.

another is a kerberos scenario ... the dominant enterprise/campus authentication mechanism for windows and most open system platforms; again predominantly userid/password. the kerberos pk-init specification has a public key registered in lieu of a password and uses a digital signature challenge/response process (similar to the radius scenario).

part of the issue in the challenge/response authentication scenario ... is the countermeasure against replay attacks ... where an eavesdropper records the client's transmissions and replays them at a later time as an impersonation attempt (i.e. the server sends a different challenge every time .... so the correct client response is always different & unique).

... basically, the client doesn't just say that they want to connect ... the client says that they want to connect as a specific entity/userid. the server then chooses the correct public key based on who the client is attempting to connect as. in the ssh case, it is found in the .ssh directory off the home directory of the userid (at the server) that the client is attempting to connect as. in the radius and kerberos scenarios ... it is a specialized database employed by those services.

... digital signature is a stylized process for using the private key for "encoding" a hash of the data ... with a corresponding digital signature verification process that uses the public key for "decoding" and checking the results (i.e. the server recalculates the hash of the same data and compares it against the result of "decoding" the digital signature).

in all the scenarios ... the connection is NOT being made as a non-differentiated, anonymous entity .... but as some specific entity known to the server. the server uses the entity specification in the connection authentication to select the appropriate public key.
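
... as a purely illustrative sketch of the above challenge/response flow (python, using the pyca "cryptography" package if installed; the userid, data layout and variable names are made up ... none of the radius/kerberos/ssh wire formats look like this):

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# registration: client generates a keypair; the server files the public key
# under the claimed userid (in lieu of a password)
client_priv = ec.generate_private_key(ec.SECP256R1())
on_file = {"someuser": client_priv.public_key()}

# 1) client: "login someuser"   2) server: random challenge (different every
#    time ... the replay countermeasure mentioned above)
challenge = os.urandom(32)

# 3) client combines the challenge with some client-contributed data and
#    digitally signs the combination with its private key
client_data = os.urandom(16)
signature = client_priv.sign(challenge + client_data,
                             ec.ECDSA(hashes.SHA256()))

# 4) server rebuilds the same blob from its own challenge plus the returned
#    client data, then verifies with the public key on file for "someuser"
#    (verify() raises an exception if the signature doesn't check out)
on_file["someuser"].verify(signature, challenge + client_data,
                           ec.ECDSA(hashes.SHA256()))
print("digital signature verified ... client authenticated as someuser")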

for some additional discussion of digital signature authentication see the FIPS186-2 standard at the NIST site:
http://csrc.nist.gov/cryptval/dss.htm

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Vintage computers are better than modern crap !

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Vintage computers are better than modern crap !
Newsgroups: alt.folklore.computers
Date: Sun, 25 Jul 2004 09:31:35 -0600
Keith writes:
"They?" The only bail-out I remember was Chrysler, and that because it was a defense contractor, as well as employing a coupla-hundred-thou. Note that the bail-out was paid off with interest, long before it was due (not that I believe bail-outs are a good thing).

Of course now that DB screwed up the company I'll not buy another. I'm back to Ford, at least temporarily (we'll see what happens next week).


there was an article, i think in one of the DC papers in the 70s, calling for a 100% unearned profits tax on the car manufacturers. the claim was that the quotas on foreign imports were there to give the car manufacturers some breathing room, and excess profits that they could plow back into remaking the industry (thereby making it more competitive) ... but instead they took the excess profits and paid them out in higher wages and stock dividends (and did not become more competitive). there was some claim that the price of standard, best-selling american cars had doubled in a very short period due to the quotas and the reduced competitive pressure.

the other was that the imports realized that, given the quota limits, they could sell as many high end cars as low end cars. as a result, they took the opportunity to completely remake themselves (again), and become even more competitive. the net effect was that the quotas not only limited the number of imports competing with american cars ... but also resulted in a totally different import product mix .... one that was no longer causing downward price pressure on american products.

around '90, one of the american automobile manufacturers started the C4 project to completely remake how they did cars. one of the issues was that imports were taking only 3 years from concept to off the line ... while it was taking us manufacturers 7 years from concept to off the line (there were sometimes two new lines being run in parallel, offset by 3 years ... so it appeared like new cars were coming out more frequently). the issue was that the difference between a 3 year cycle and a 7 year cycle allowed the imports to respond more than twice as fast to changing customer preferences (as well as leverage new technology developments) ... giving them another significant competitive edge in the market place.

there were some number of big computer companies brought in to be part of the C4 effort. at the time, some of the mainframes also had nearly identical development cycles to the automobiles ... taking 7 years from concept to machine rolling out the door (and at times they also had offset, overlapping development efforts). the issue was how to leverage dataprocessing technology to help cut their elapsed development cycle by better than half ... and they were asking for advice from people that, themselves, had seven year development cycles. of course, at the time we were asked to be involved ... we were working on ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp
which was in product area that had much shorter development cycles.

a couple past posts on automobile subject:
https://www.garlic.com/~lynn/2000f.html#41 Reason Japanese cars are assembled in the US (was Re: American bigotry)
https://www.garlic.com/~lynn/2000f.html#43 Reason Japanese cars are assembled in the US (was Re: American bigotry)
https://www.garlic.com/~lynn/2003o.html#34 Will Prescott work on Win64?
https://www.garlic.com/~lynn/2004b.html#52 The SOB that helped IT jobs move to India is dead!
https://www.garlic.com/~lynn/2004c.html#51 [OT] Lockheed puts F-16 manuals online

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Basics of key authentication

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Basics of key authentication
Newsgroups: comp.security.ssh
Date: Sun, 25 Jul 2004 09:35:16 -0600
oh, and some slightly related posts on two-factor authentication:
https://www.garlic.com/~lynn/2004h.html#13 two-factor authentication options?
https://www.garlic.com/~lynn/2004h.html#14 two-factor authentication options?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Low Bar for High School Students Threatens Tech Sector

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Low Bar for High School Students Threatens Tech Sector
Newsgroups: alt.folklore.computers
Date: Sun, 25 Jul 2004 10:21:06 -0600
here is a twist on the subject ... importing smart kids
http://www.foxnews.com/story/0,2933,126855,00.html

earlier post
https://www.garlic.com/~lynn/2004h.html#18 Low Bar for High School Students Threatens Tech Sector

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Why are programs so large?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why are programs so large?
Newsgroups: comp.arch
Date: Sun, 25 Jul 2004 13:21:42 -0600
Bernd Paysan writes:
Which type of memory? top shows virtual, resident, and shared memory. If you don't have swapped things out, the resident memory size is the relevant one. Applications often map zeroed memory they don't really use (that's the "virtual" part), and linking to some shared libraries also increases the size.

i have a tab folder with over 60 URLs of news sites ... in both early windows and linux versions of mozilla ... you could watch the memory consumption skyrocket after clicking the folder. the linux mozilla has gotten a lot better ... while the windows version has only gotten somewhat better. the linux version also appears to do a lot better job of giving back memory after tabs are closed. the current windows version may still have slight storage cancer (although nothing like both versions had when i first started stressing tabs in earlier versions). periodically killing the windows version does appear to recover memory that isn't otherwise given back after all tabs are closed.

what i believe to be 100k of screen data (per tab) looks like it can turn into 1.5-3 mbytes of virtual memory. i've sporadically seen total virtual memory consumption (on windows) go from 160mbytes to over 360mbytes just by clicking the news folder (which works out to 200mbytes/60 = 3.3mbytes/tab). for some reason, linux versions seem to have gotten much more benign.

this is over and above base browser requirements ... including 16mbyte cache.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Vintage computers are better than modern crap !

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Vintage computers are better than modern crap !
Newsgroups: alt.folklore.computers
Date: Sun, 25 Jul 2004 15:08:32 -0600
some additional drift:
http://news.moneycentral.msn.com/breaking/breakingnewsarticle.asp?feed=OBR&Date=20040725&ID=3917981

ford and gm are making their profit on financial services ... not manufacturing ... in fact, ford actually lost money on manufacturing.

the article finishes up with the comment that ford & gm face cutthroat competition and declining market share, as they lose sales even for many heavily discounted vehicles to asian rivals.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Vintage computers are better than modern crap !

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Vintage computers are better than modern crap !
Newsgroups: alt.folklore.computers
Date: Mon, 26 Jul 2004 09:22:46 -0600
jmfbahciv writes:
I wouldn't have any idea where to start on that problem. Just getting the software update cycling from impossible to manageable took years and lots of work. It would have been real nice to have had some way for the computer architect to have a piece of gear made that ran on real electricity so he could check how much his head was wedged. Software emulations were merely fairy tales.

we did that with cp/67 for 370s .... of course 360/370 operational specifications were very tightly specified in the principles of operation and the architecture "red book" ... in part because numerous different plants (even in different countries) were building to the same architecture specification (from possibly totally different technologies).

also ... the 370 mainly added new instructions in application/problem mode ... there were more significant differences in kernel/supervisor mode. in any case, cambridge/endicott had virtual 370s running under cp/67 on 360/67s ... a year before the first 370 engineering machine with virtual memory was cycling (in fact, the virtual 370 work was used to help validate the hardware).

there were sort of five levels.

1) real 360/67 hardware at cambridge

2) cp/67 (cp67-l) providing virtual 360 machines

3) a modified version of cp/67 (cp67-h) that ran in a virtual machine and provided 370 virtual machines. this was deemed necessary (rather than running it on the bare hardware) because the standard cambridge system was a pretty open time-sharing service that had some number of MIT, Harvard, and BU students accessing it. the idea was to avoid having students and other outsiders tripping over the 370 implementation.

4) a modified version of the cp/67 kernel (cp67-i) that utilized 370 hardware and virtual memory tables (which had a number of differences from 360/67 virtual memory tables). this ran in a 370 virtual machine provided by "level 3"

5) cms that ran in a virtual machine provided by "cp67-i" (virtual machine provided by "level 4").

in any case, this had interesting implications with projects that required a lot of security on the same time-sharing machine open to a lot of outsiders ... some misc. discussion of open time-sharing security issues:
https://www.garlic.com/~lynn/submain.html#timeshare

misc. past references to the virtual 370 effort:
https://www.garlic.com/~lynn/94.html#48 Rethinking Virtual Memory
https://www.garlic.com/~lynn/99.html#33 why is there an "@" key?
https://www.garlic.com/~lynn/2000g.html#16 360/370 instruction cycle time
https://www.garlic.com/~lynn/2002j.html#0 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002j.html#70 hone acronym (cross post)
https://www.garlic.com/~lynn/2002j.html#75 30th b'day
https://www.garlic.com/~lynn/2002n.html#39 CMS update
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003d.html#59 unix
https://www.garlic.com/~lynn/2003g.html#14 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003g.html#18 Multiple layers of virtual address translation
https://www.garlic.com/~lynn/2003o.html#23 Tools -vs- Utility
https://www.garlic.com/~lynn/2004b.html#31 determining memory size
https://www.garlic.com/~lynn/2004d.html#74 DASD Architecture of the future

misc. past references to architecture red book:
https://www.garlic.com/~lynn/2000f.html#35 Why IBM use 31 bit addressing not 32 bit?
https://www.garlic.com/~lynn/2001m.html#39 serialization from the 370 architecture "red-book"
https://www.garlic.com/~lynn/2001n.html#43 IBM 1800
https://www.garlic.com/~lynn/2002g.html#52 Spotting BAH Claims to Fame
https://www.garlic.com/~lynn/2002h.html#69 history of CMS
https://www.garlic.com/~lynn/2002m.html#2 Handling variable page sizes?
https://www.garlic.com/~lynn/2003d.html#76 reviving Multics
https://www.garlic.com/~lynn/2003f.html#52 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003k.html#45 text character based diagrams in technical documentation
https://www.garlic.com/~lynn/2004b.html#57 PLO instruction
https://www.garlic.com/~lynn/2004c.html#1 Oldest running code
https://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
https://www.garlic.com/~lynn/2004c.html#51 [OT] Lockheed puts F-16 manuals online

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Convince me that SSL certificates are not a big scam

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Convince me that SSL certificates are not a big scam
Newsgroups: sci.crypt
Date: Tue, 27 Jul 2004 17:02:14 -0600
"Gian-Carlo Pascutto" writes:
...SSL is just a (very slow) Diffie-Hellman key-exchange method. Digital certificates provide no actual security for electronic commerce; it's a complete sham.

SSL domain name certificates were designed to address the issue of whether the domain name that you typed into the browser and the domain name of the server (that you are actually talking to) are somehow related ... aka things like ip-address hijacking and other possible integrity issues with the domain name infrastructure.

recent posting on the subject:
https://www.garlic.com/~lynn/aadsm18.htm#15
https://www.garlic.com/~lynn/aadsm18.htm#14

lots of past posts
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

one of the issues is that the certification authorities have to validate information with the authoritative agency for the information; in the case of domain name ownership ... that is the domain name infrastructure (the same infrastructure whose integrity issues are one of the primary justifications for SSL certificates).

basically an entity registers who they are when they register a domain name with the domain name infrastructure. when they apply for a domain name certificate ... the certification authority has to perform a very expensive, complex and error-prone process attempting to match the information from the certificate application with what is on file with the domain name infrastructure.

somewhat from the certification authority industry, there is a proposal that the domain name applicant supply a public key that is put on file with their domain name registration. then future communication between the domain name owner and the domain name infrastructure is digitally signed (which the domain name infrastructure can verify with the public key on file), minimizing threats like domain name hijacking and improving the integrity of the domain name infrastructure (so it can be better trusted by the certification authority industry).

Also, when there are requests for SSL certificates, the applicant digitally signs the request. now, the certification authority just has to retrieve the public key on file with the domain name infrastructure to verify the certificate applicant's request. This has the advantage of turning a very expensive, complex and error-prone entity information matching process into a much less expensive, simple and straightforward authentication operation.
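
... a rough sketch of that on-file public key idea (python, pyca "cryptography" package; the registry, names and request format are all made up ... real registrar and CA protocols are obviously more involved):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

domain_registry = {}                    # domain name -> public key on file

# the domain owner supplies a public key along with the domain registration
owner_key = ec.generate_private_key(ec.SECP256R1())
domain_registry["example.com"] = owner_key.public_key()

# later: the SSL certificate request is digitally signed by the domain owner
request = b"please issue an ssl domain name certificate for example.com"
signature = owner_key.sign(request, ec.ECDSA(hashes.SHA256()))

# certification authority: rather than the expensive information-matching
# process, just retrieve the on-file key and authenticate the signed request
# (verify() raises an exception if the signature is bad)
domain_registry["example.com"].verify(signature, request,
                                      ec.ECDSA(hashes.SHA256()))
print("request authenticated against the public key on file")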

there are some side-effects

1) if the integrity of the domain name infrastructure is improved, the demand (created by integrity concerns) for SSL domain name certificates is reduced

2) if the certification authorities can retrieve on-file public keys to validate communication .... then so can the rest of the internet ... subsuming the function provided by SSL domain name certificates ... the binding of a public key to a domain name.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

BLKSIZE question

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: BLKSIZE question
Newsgroups: bit.listserv.ibm-main
Date: Wed, 28 Jul 2004 14:53:28 -0600
bblack@ibm-main.lst (Bruce Black) writes:
No, z/OS still supports only CKD disks, even when emulated on FBA as almost all are today.

z/Series now supports native SCSI and FICON connected FBA disks, but so far this is only supported in Linux and z/VM


remember that vm supported FBA disks at original introduction .... and, in fact, both CP paging and the CMS filesystem have had block-oriented logical access since the original implementation (circa cp/40 and cms on the hardware-modified 360/40 with virtual memory ... pre 360/67 and the migration to cp/67 ... pre vm/370 and the migration to 370s).

In effect, both cp and cms have spent nearly the last 40 years emulating block operations on CKD devices.

one of the reasons that it was so easy to do the original xt/370 ... was that there was a one-to-one mapping from cp & cms disk blocks to dos filesystem disk blocks ... aka for i/o ... there was a form of inter-processor communication between the 370 board and the dos software on the xt side (in effect all 370 i/o was emulated by doing real i/o on the dos side).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

ECC Encryption

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ECC Encryption
Newsgroups: sci.crypt
Date: Wed, 28 Jul 2004 15:15:12 -0600
Nigel Smart writes:
Public key algorithms are mainly used to transmit symmetric keys which are then used to encrypt the main data. This is true of RSA and ECC.

For ECC key gen is fast, as is decryption. For RSA encryption is fast, but key gen and decryption are slow

For ECC the message size to send the symmetric key is MUCH smaller than the equivalent message for RSA

However, neither is better/worse. They are just different algorithms to achieve the same kind of thing.

Of course if you wanted to transmit a very small amount of data you "could" use RSA/ECC without the use of a symmetric cipher. But then it all depends on how small your data is.


another application is authentication & digital signatures ... like in fips186-2, dsa, ecdsa, etc. A huge number of the public-key ops that go on in the world involve the verification of CA digital signatures on domain name certificates (as part of SSL infrastructure).

the power requirements for ECC are significantly smaller than for RSA. for chip-cards and hardware tokens implementing strong authentication ... they add possibly a 50-100 percent increase in chip circuits to try and get the RSA operations down to a second or two (1100-bit multipliers doing operations in parallel ... rather than iterative loops doing 16-bit or 32-bit operations) ... while ecdsa can be done on unenhanced chips within possibly a tenth of a second.

it isn't so much of an issue with contact (say iso7816) chips (other than the difference between 1/10th of a second and one or two seconds) since the (relatively) enormous power drain isn't that much of a problem there.

However it does become a significant issue with proximity chips (say iso14443) ... where power is being drawn from RF radiation in the air. Either you have to have the card awfully close to an enormous RF radiating power source .... or, if you plan on using the kind of power you would get swiping the card past a metro/transit turnstile .... you are back up to possibly tens of seconds for RSA ... but ecdsa is still within 1/10th or so of a second with a simple iso14443 proximity power profile.

the issue in the early 90s with chip cards was that none of the chips had random number generators that were considered of high enuf integrity ... and both dsa and ecdsa require quality random number generation as part of the digital signature process (or the private key becomes vulnerable).

The trade-off was that RSA on these earlier chips was terribly slow .... or required a significantly more costly and power-hungry chip; however, it was possible to process the message (to be signed) externally, include a large random nonce in the body of the message, calculate a secure hash of the message ... and simply pass the secure hash to the chip and get back the RSA digital signature (so there was no issue about whether or not the chip was capable of quality random number generation). both dsa and ecdsa, however, required that the circuits doing the digital signature also be capable of generating a high quality random number as part of the digital signature process.

chips started showing up in the late 90s that had random number capability that could be trusted for doing dsa & ecdsa digital signatures .... doing them enormously faster than RSA, on significantly less expensive and much less power-hungry chips.
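
... for a rough feel on a general purpose cpu (the posting is about constrained chip-card hardware, where the gap is far larger), a quick and unscientific comparison using python and the pyca "cryptography" package:

import timeit
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, rsa, padding

msg = b"some message to be signed"
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ec_key = ec.generate_private_key(ec.SECP256R1())

rsa_time = timeit.timeit(
    lambda: rsa_key.sign(msg, padding.PKCS1v15(), hashes.SHA256()), number=100)
ec_time = timeit.timeit(
    lambda: ec_key.sign(msg, ec.ECDSA(hashes.SHA256())), number=100)

print(f"100 rsa-2048 signatures: {rsa_time:.2f}s")
print(f"100 ecdsa p-256 signatures: {ec_time:.2f}s")

note that each ecdsa signature consumes a fresh random nonce ... which is exactly why the chip needs a trustworthy random number generator.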

random past mention of 14443
https://www.garlic.com/~lynn/aadsm12.htm#8 [3d-secure] 3D Secure and EMV
https://www.garlic.com/~lynn/aadsm12.htm#21 Smartcard in CD
https://www.garlic.com/~lynn/aadsm13.htm#15 A challenge
https://www.garlic.com/~lynn/aadsm13.htm#18 A challenge
https://www.garlic.com/~lynn/aadsm15.htm#6 x9.59
https://www.garlic.com/~lynn/2000f.html#77 Reading wireless (vicinity) smart cards
https://www.garlic.com/~lynn/2002c.html#26 economic trade off in a pure reader system
https://www.garlic.com/~lynn/2002c.html#36 economic trade off in a pure reader system
https://www.garlic.com/~lynn/2002d.html#44 Why?
https://www.garlic.com/~lynn/2002h.html#76 time again
https://www.garlic.com/~lynn/2002h.html#77 time again
https://www.garlic.com/~lynn/2002m.html#39 Convenient and secure eCommerce using POWF
https://www.garlic.com/~lynn/2002n.html#13 Help! Good protocol for national ID card?
https://www.garlic.com/~lynn/2003h.html#54 Smartcards and devices
https://www.garlic.com/~lynn/2003j.html#66 Modular Exponentiations on Battery-run devices
https://www.garlic.com/~lynn/2003l.html#8 14443 protocol information
https://www.garlic.com/~lynn/2003l.html#64 Can you use ECC to produce digital signatures? It doesn't see
https://www.garlic.com/~lynn/2003m.html#5 Cryptoengines with usage accounting
https://www.garlic.com/~lynn/2003n.html#25 Are there any authentication algorithms with runtime changeable
https://www.garlic.com/~lynn/2003o.html#63 Dumbest optimization ever?
https://www.garlic.com/~lynn/2004b.html#28 Methods of Authentication on a Corporate
https://www.garlic.com/~lynn/2004d.html#8 Digital Signature Standards

random past mention of fips186
https://www.garlic.com/~lynn/aadsm11.htm#7 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#17 Alternative to Microsoft Passport: Sunshine vs Hai
https://www.garlic.com/~lynn/aadsm11.htm#38 ALARMED ... Only Mostly Dead ... RIP PKI ... part II
https://www.garlic.com/~lynn/aadsm11.htm#39 ALARMED ... Only Mostly Dead ... RIP PKI .. addenda
https://www.garlic.com/~lynn/aadsm12.htm#13 anybody seen (EAL5) semi-formal specification for FIPS186-2/x9.62 ecdsa?
https://www.garlic.com/~lynn/aadsm12.htm#14 Challenge to TCPA/Palladium detractors
https://www.garlic.com/~lynn/aadsm12.htm#19 TCPA not virtualizable during ownership change (Re: Overcoming the potential downside of TCPA)
https://www.garlic.com/~lynn/aadsm13.htm#30 How effective is open source crypto? (aads addenda)
https://www.garlic.com/~lynn/aadsm14.htm#31 Maybe It's Snake Oil All the Way Down
https://www.garlic.com/~lynn/aepay10.htm#31 some certification & authentication landscape summary from recent threads
https://www.garlic.com/~lynn/aepay10.htm#34 some certification & authentication landscape summary from recent threads
https://www.garlic.com/~lynn/aepay10.htm#36 Identity server infrastructure ... example
https://www.garlic.com/~lynn/aepay10.htm#46 x9.73 Cryptographic Message Syntax
https://www.garlic.com/~lynn/aepay10.htm#65 eBay Customers Targetted by Credit Card Scam
https://www.garlic.com/~lynn/aepay10.htm#66 eBay Customers Targetted by Credit Card Scam
https://www.garlic.com/~lynn/2000b.html#93 Question regarding authentication implementation
https://www.garlic.com/~lynn/2001g.html#14 Public key newbie question
https://www.garlic.com/~lynn/2002e.html#65 Digital Signatures (unique for same data?)
https://www.garlic.com/~lynn/2002g.html#38 Why is DSA so complicated?
https://www.garlic.com/~lynn/2002g.html#41 Why is DSA so complicated?
https://www.garlic.com/~lynn/2002g.html#42 Why is DSA so complicated?
https://www.garlic.com/~lynn/2002h.html#83 Signing with smart card
https://www.garlic.com/~lynn/2002i.html#10 Signing email using a smartcard
https://www.garlic.com/~lynn/2002i.html#78 Does Diffie-Hellman schema belong to Public Key schema family?
https://www.garlic.com/~lynn/2002j.html#21 basic smart card PKI development questions
https://www.garlic.com/~lynn/2002j.html#73 How to map a user account to a digital cert?
https://www.garlic.com/~lynn/2002j.html#82 formal fips186-2/x9.62 definition for eal 5/6 evaluation
https://www.garlic.com/~lynn/2002j.html#84 formal fips186-2/x9.62 definition for eal 5/6 evaluation
https://www.garlic.com/~lynn/2002j.html#86 formal fips186-2/x9.62 definition for eal 5/6 evaluation
https://www.garlic.com/~lynn/2002k.html#11 Serious vulnerablity in several common SSL implementations?
https://www.garlic.com/~lynn/2002k.html#35 ... certification
https://www.garlic.com/~lynn/2002l.html#38 Backdoor in AES ?
https://www.garlic.com/~lynn/2002m.html#44 Beware, Intel to embed digital certificates in Banias
https://www.garlic.com/~lynn/2002m.html#72 Whatever happened to C2 "Orange Book" Windows security?
https://www.garlic.com/~lynn/2002n.html#13 Help! Good protocol for national ID card?
https://www.garlic.com/~lynn/2002n.html#14 So how does it work... (public/private key)
https://www.garlic.com/~lynn/2002n.html#16 Help! Good protocol for national ID card?
https://www.garlic.com/~lynn/2002n.html#20 Help! Good protocol for national ID card?
https://www.garlic.com/~lynn/2003c.html#26 Crypto Standards Organizations
https://www.garlic.com/~lynn/2003d.html#18 Efficent Digital Signature Schemes
https://www.garlic.com/~lynn/2003g.html#70 Simple resource protection with public keys
https://www.garlic.com/~lynn/2003h.html#29 application of unique signature
https://www.garlic.com/~lynn/2003j.html#30 How is a smartcard created?
https://www.garlic.com/~lynn/2003k.html#38 Code Sizing for Digital Signature Verification - DSS/DSA
https://www.garlic.com/~lynn/2003l.html#5 Multiple ECDSA signatures with the same random nonce
https://www.garlic.com/~lynn/2003l.html#61 Can you use ECC to produce digital signatures? It doesn't see
https://www.garlic.com/~lynn/2003n.html#23 Are there any authentication algorithms with runtime changeable key length?
https://www.garlic.com/~lynn/2003n.html#25 Are there any authentication algorithms with runtime changeable
https://www.garlic.com/~lynn/2003n.html#32 NSA chooses ECC
https://www.garlic.com/~lynn/2004.html#37 When rsa vs dsa
https://www.garlic.com/~lynn/2004b.html#22 Hardware issues [Re: Floating point required exponent range?]
https://www.garlic.com/~lynn/2004f.html#10 racf
https://www.garlic.com/~lynn/2004h.html#12 ECC book reference, please
https://www.garlic.com/~lynn/2004h.html#21 Basics of key authentication

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Usage of Hex Dump

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Usage of Hex Dump
Newsgroups: comp.arch
Date: Wed, 28 Jul 2004 18:16:19 -0600
jrefactors@hotmail.com (Matt) writes:
I want to know what is Hex Dump? I tried to search in google but didn't get useful results. I know it is the hexadecimal representation of something. But I don't know what is something? From programmer's perspective, when do we need to use hex dump? what's the importance of hex dump?

sounds like a class assignment to me.

i first ran into it back in the 60s ... when the operating system failed it would start copying all of computer memory to the printer. the line format (quickly, from memory):

six hex-digit address plus a space (7 positions), then eight groups of 8 hex chars each followed by a space (9*8 = 72 positions), then the character representation of the 32 bytes (32 positions) ..... 111 positions

there is something like another 9 positions in there ... to come out to a 120 character line per 32 bytes of memory; 66 lines per page gives 2112 bytes per page (a little better than 2k per page). all on green bar paper.
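
... for illustration, a small python routine that produces roughly that 32-bytes-per-line layout (the exact original spacing is from memory, as noted above, so this is only approximate):

def hexdump(data, width=32):
    # 6 hex-digit address, groups of hex digits (4 bytes per group, 32 bytes
    # per line), then the character representation of those 32 bytes
    for off in range(0, len(data), width):
        chunk = data[off:off + width]
        groups = " ".join(chunk[i:i + 4].hex().upper()
                          for i in range(0, len(chunk), 4))
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        print(f"{off:06X} {groups}  {text}")

hexdump(b"hex dump example ... " + bytes(range(64)))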

couple refs pulled quickly from search engine on emulating green bar paper.
http://www.experts-exchange.com/Databases/MS_Access/Q_10216760.html
http://www.makingpages.org/pagemaker/tips/greenbar.html
http://www.pdp8.net/images/greenbar.shtml

simulation of green bar paper can be seen at the vmshare archives website
http://vm.marist.edu/~vmshare/

try doing search on "hex dump" at vmshare archives ... or possibly: hex dump paper.

share is the ibm customer user group that has been around forever
http://www.share.org/

starting in the mid-70s, tymshare hosted on line computer conferencing for the vm group at share ... called vmshare. tymshare was an ibm vm/370-based commercial time-sharing service ... some comments about time-sharing systems from that era
https://www.garlic.com/~lynn/submain.html#timeshare

doing search engine of simply "hex dump" turns up all sorts of explanations.

doing search engine of "hex dump" and IBM, the first reference is
http://www.planetmvs.com/hex/

which has detailed description and shows a "half-wide" hex dump format on the page i.e. 16 byte per line format instead of 32 byte per line format (which needs 120 wide display; 16 byte per line fits in less than 80 char wide display).

in addition to the very first returned URL ... there appear to be another hundred or so that go into some useful detail about hex dump.

In the very early 80s I had written a software debugging tool in REX(X) that somewhat mitigated the need for having a paper copy of the kernel image on a failure.
https://www.garlic.com/~lynn/submain.html#dumprx

at one time it was in use by the majority of the internal data centers as well as a significant number of PSRs handling customer problems. the following isn't specifically about internal datacenters ... but there are a lot of references to the internal datacenters on the internal network (the internal network being larger than the arpanet/internet from just about the beginning until possibly sometime mid-85)
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Basics of key authentication

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Basics of key authentication
Newsgroups: comp.security.ssh
Date: Thu, 29 Jul 2004 13:28:28 -0600
jerup3500@yahoo.com (Jonathan) writes:
I am IT student at a nation college.We have been talking in class about both public and private keys the question I have to pose is whether or not using the same public or private key is possible to use on differnt operating system in a complex network over a WAN.

seems like at the start of each new school year ... there is a rash of classroom work assignments posted to usenet in an attempt to get somebody else to do the work for them.

recent example from comp.arch:
https://www.garlic.com/~lynn/2004h.html#31

some earlier posts in this thread
https://www.garlic.com/~lynn/2004h.html#21 basics of key authentication
https://www.garlic.com/~lynn/2004h.html#23 basics of key authentication

random past threads discussing use of usenet to do class work assignments:
https://www.garlic.com/~lynn/2001.html#70 what is interrupt mask register?
https://www.garlic.com/~lynn/2001c.html#10 Memory management - Page replacement
https://www.garlic.com/~lynn/2001c.html#11 Memory management - Page replacement
https://www.garlic.com/~lynn/2001c.html#25 Use of ICM
https://www.garlic.com/~lynn/2001k.html#75 Disappointed
https://www.garlic.com/~lynn/2001l.html#0 Disappointed
https://www.garlic.com/~lynn/2001m.html#0 7.2 Install "upgrade to ext3" LOSES DATA
https://www.garlic.com/~lynn/2001m.html#32 Number of combinations in five digit lock? (or: Help, my brain hurts)
https://www.garlic.com/~lynn/2002c.html#2 Need article on Cache schemes
https://www.garlic.com/~lynn/2002f.html#32 Biometric Encryption: the solution for network intruders?
https://www.garlic.com/~lynn/2002f.html#40 e-commerce future
https://www.garlic.com/~lynn/2002g.html#83 Questions about computer security
https://www.garlic.com/~lynn/2002l.html#58 Spin Loop?
https://www.garlic.com/~lynn/2002l.html#59 Spin Loop?
https://www.garlic.com/~lynn/2003m.html#46 OSI protocol header
https://www.garlic.com/~lynn/2003n.html#4 Dual Signature
https://www.garlic.com/~lynn/2004f.html#51 before execution does it require whole program 2 b loaded in
https://www.garlic.com/~lynn/2004f.html#61 Infiniband - practicalities for small clusters

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Vintage computers are better than modern crap !

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Vintage computers are better than modern crap !
Newsgroups: alt.folklore.computers
Date: Thu, 29 Jul 2004 16:52:35 -0600
radical change at chrysler?
http://www.forbes.com/home/free_forbes/2004/0816/058.html

An automaker and it union workers are gambling that an innovative factory concept will rescue them both from extinction.

recent related posts:
https://www.garlic.com/~lynn/2004h.html#22 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004h.html#26 Vintage computers are better than modern crap !

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Which Monitor Would You Pick??????

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Which Monitor Would You Pick??????
Newsgroups: alt.folklore.computers
Date: Sat, 31 Jul 2004 06:26:53 -0600
"Gary A. Gorgen" writes:
In the '60's the code that controlled a timesharing system was known as monitors. Tops-10 & Tops-20 from DEC. The timesharing monitor that ran on the SDS-940, was known as "The Monitor". Its original name was "Genie". Monitor was also used, for the code in development boards, "Debug Monitor", that was in the '70's, IIRC.

and/or control programs .... cp/40, circa 1965, for the modified 360/40 with virtual memory, was control program/40. the interface for users was cms, or cambridge monitor system. cp/40 was moved to the 360/67 and called cp/67. when 370s came out ... cp/67 morphed into vm/370 (virtual machine 370) and cms was renamed conversational monitor system. most of this was done at the cambridge science center, 545tech sq:
https://www.garlic.com/~lynn/subtopic.html#545tech

some number of operations used it for commercial timesharing services
https://www.garlic.com/~lynn/submain.html#timeshare

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

fc2, evolution, yum, libbonobo-2.6.2-1.i386.rpm

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: fc2, evolution, yum, libbonobo-2.6.2-1.i386.rpm
Newsgroups: linux.redhat.misc
Date: Sat, 31 Jul 2004 13:01:01 -0600
i've been running evolution 1.4.x happily thru 9, fc1, and fc2.

however, yesterday yum updated libbonobo-2.6.2-1.i386.rpm

and now evolution gives me four popups:

Cannot activate component OAFIID GNAME_Evolution_Mail_ShellComponent (or something similar). The error from the activation system is: Unknown CORBA exception.

and then doesn't show anything useful in the evolution window. This is under KDE ... which has been working fine all along. Under GNOME the error message is different but the results are the same.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Vintage computers are better than modern crap !

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Vintage computers are better than modern crap !
Newsgroups: alt.folklore.computers
Date: Sat, 31 Jul 2004 13:19:19 -0600
Keith writes:
The article I was reading (sorry don't remember where) referred to the advancements in CAD and plastics manufacturing that allow small production runs. It wasn't addressed in the article, but I wondered about replacement support (accidents and such). Otherwise consumer insurance costs may be the real barrier.

all sorts of things are barriers; getting effectively semi-custom models to the correct car dealer and customer.

i've heard tales of models with heavy air conditioning showing up in maine dealers instead of alabama ... and models with seat heaters showing up at miami dealers (instead of minn.)

then there is the whole repair & maint. issue ... if special training and/or stocking any custom parts.

they've worked out flexible manufacturing processes that can turn them out ... but then there is the whole rest of the operational infrastructure (once they leave the plant).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Basics of key authentication

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Basics of key authentication
Newsgroups: comp.security.ssh
Date: Sat, 31 Jul 2004 15:55:43 -0600
"Old Man" writes:
Part of what they are "supposed" to do in American colleges is to teach students to think. So,

Question: If you download the OpenSSH 3.2 package on your machine, and I download OpenSSH 3.2 on my machine, then why wouldn't ssh-keygen generate a compatible key on both systems, and decode both keys equally? (Package -- no compiling it to be your way.)


even better: if you have several hundred thousand webservers across the world supporting https/ssl ... on a wide variety of different platforms, and several hundred million clients around the world accessing those webservers with SSL ... are the SSL public/private key operations really working ... or is it a figment of everybody's imagination?

random reference to ssl, https, electronic commerce, etc
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

previous posts in this thread:
https://www.garlic.com/~lynn/2004h.html#21
https://www.garlic.com/~lynn/2004h.html#23
https://www.garlic.com/~lynn/2004h.html#32

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

build-robots-which-can-automate-testing dept

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: build-robots-which-can-automate-testing dept.
Newsgroups: alt.folklore.computers
Date: Sun, 01 Aug 2004 11:04:40 -0600
build-robots-which-can-automate-testing dept.
http://it.slashdot.org/it/04/07/31/1819250.shtml?tid=185&tid=4
I did something like this for the resource manager product. we had performance and workload profile data from thousands of systems ... so we built a parameterised benchmarking/testing infrastructure. we predefined something like 1000 benchmarks that selected statistical samples from the range of workloads and configurations. we then had an apl workload and performance model. after the first 1000 or so benchmarks, all the results were fed into the apl performance and workload model, which got to pick the workload and configuration parameters for the next benchmark ... this was automated and turned loose. In all, approximately 2000 benchmarks were run, taking three months of elapsed time, to calibrate and validate the operation of the resource manager before it shipped to customers.

... http://it.slashdot.org/comments.pl?sid=116421&cid=9854377
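
... just to show the shape of that calibrate-and-validate loop (python toy, scaled way down; the workload parameters, the stand-in model and the pick-the-next-benchmark rule are all made up ... the original apl model is long gone):

import random

def run_benchmark(params):
    # stand-in for actually running a benchmark and measuring the result
    return params["users"] * (1.0 - params["paging"]) + random.gauss(0, 1.0)

def predict(history):
    # stand-in for the apl performance/workload model: just the mean here
    return sum(result for _, result in history) / len(history)

history = []

# first pass: a predefined, statistically sampled set of configurations
# (scaled down from the ~1000 described above)
for _ in range(100):
    params = {"users": random.randint(1, 100), "paging": random.random()}
    history.append((params, run_benchmark(params)))

# then the model picks each next benchmark ... here, simply perturbing the
# configuration where prediction and measurement disagreed the most so far
for _ in range(100):
    mean = predict(history)
    worst_params, _ = max(history, key=lambda h: abs(mean - h[1]))
    params = dict(worst_params)
    params["users"] = max(1, params["users"] + random.randint(-5, 5))
    history.append((params, run_benchmark(params)))

print(len(history), "benchmarks run in total")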

some benchmarking methodology references:
https://www.garlic.com/~lynn/submain.html#bench

various performance, scheduling and resource manager references:
https://www.garlic.com/~lynn/subtopic.html#fairshare

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

SEC Tests Technology to Speed Accounting Analysis

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: SEC Tests Technology to Speed Accounting Analysis
Newsgroups: alt.folklore.computers
Date: Sun, 01 Aug 2004 11:46:57 -0600
SEC Tests Technology to Speed Accounting Analysis
http://www.reuters.com/newsArticle.jhtml?type=reutersEdge&storyID=5836818

EDGAR Online XBRL
http://xbrl.edgar-online.com/x/
Extensible Business Reporting Language (XBRL)
http://xml.coverpages.org/xbrl.html
Welcome to XBRL International
http://xbrl.org/
AICPA XBRL introduction
http://www.aicpa.org/trustservices/ecommentnewsletterbi402.htm
XBRL: The Universal Language For Financial Business Reporting
http://www.aicpa.org/pubs/cpaltr/oct2000/supps/gov1.htm
Is XBRL an Answer?
http://www.aicpa.org/pubs/cpaltr/may2002/supps/audit6.htm
XBRL: The Language of Finance and Accounting
http://www.xml.com/pub/a/2004/03/10/xbrl.html

some "ML" history from 545tech sq cambridge science center (i've been a "ML" programmer for going on 35 years):
https://www.garlic.com/~lynn/submain.html#sgml
general science center references:
https://www.garlic.com/~lynn/subtopic.html#545tech

IBM Cambridge Scientific Center TR 320-2094
http://www.sgmlsource.com/history/G320-2094/G320-2094.htm
Charles F. Goldfarb's SGML SOURCE HOME PAGE
https://web.archive.org/web/20230930225452/http://www.sgmlsource.com/
The Roots of SGML -- A Personal Recollection
https://web.archive.org/web/20231001185033/http://www.sgmlsource.com/history/roots.htm
Charles F. Goldfarb's All the XML Books in Print
http://www.xmlbooks.com/

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Which Monitor Would You Pick??????

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Which Monitor Would You Pick??????
Newsgroups: alt.folklore.computers
Date: Sun, 01 Aug 2004 22:07:48 -0600
"Charlie Gibbs" writes:
And then, of course, there was the control program for microcomputers: CP/M.

and there are references that he possibly lifted it from cp/67 ... when he was working at npg using a cp/67 system in the early '70s. frequently cp-67/cms has been abbreviated cp/cms.

previous mention
https://www.garlic.com/~lynn/2004b.html#5 small bit of cp/m & cp/67 trivia from alt.folklore.computers n.g. (thread)
https://www.garlic.com/~lynn/2004e.html#38 [REALLY OT!] Overuse of symbolic constants

which references
https://web.archive.org/web/20071011100440/http://www.khet.net/gmc/docs/museum/en_cpmName.html

including quote from the above:
And, page 61, you can find the following sentence: "The particular example shown in Figure IV-6 resulted from execution of PLM1 on an IBM System/360 under the CP/CMS time-sharing system, using a 2741 console."

Conclusion

I can therefore affirm that the name CP/M is coming from the CP/CMS Operating System used on the IBM System/360 used at the Naval Postgraduate School of Monterey, California, in 1972, when Gary Kildall was writing the PL/M compiler to program the Intel 8008 CPU, which led him to write the Disk Operating System known as CP/M (that MS-DOS copied) (that was patterned after the commands of the Operating System of the DECsystem-10 used inside Intel), in order to have a resident DOS under which to run PL/M on one Intel MCS-8 computer system.


....

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Interesting read about upcoming K9 processors

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Interesting read about upcoming K9 processors
Newsgroups: comp.arch
Date: Mon, 02 Aug 2004 14:39:53 -0600
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Yes. As was IRIX on the MIPS, Solaris on SPARC and x86, AIX on POWER and Linux on x86.

there were a number of aix's ....

aix/370 + aix/ps2 was a port of UCLA's locus to 370 & ps2.

AIXv2 was a port of AT&T unix (by the company that had done the AT&T port for ibm's pc/ix) to ROMP (when the ROMP displaywriter follow-on project was killed).

BSD was also ported to ROMP (aka pc/rt) and called AOS.

AIXv2 was upgraded to AIXv3 in the transition from ROMP to RIOS (aka power).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Interesting read about upcoming K9 processors

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Interesting read about upcoming K9 processors
Newsgroups: comp.arch
Date: Tue, 03 Aug 2004 13:35:51 -0600
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Nope. THAT is wrong. Yes, USL had come to its senses, and so the companies you mention could (and did) sign contracts for rights. But IBM and HP did NOT leave OSF to DEC - they ALL pulled the plug on OSF, as the sort of organisation you pour money into and get nothing working out of. Remember that OSF also perpetrated Motif (and the early versions made X11R2 look good), and other products that were simply laughed out of the market. But all of AIX, HP-UX and Tru64 are (in some sense) derived from OSF/1, though there is probably almost nothing of it remaining in any of them.

OSF was also doing DCE ... including trying to merge the andrew file system, some stuff from locus (that was in aix/370/ps2) and IBM's aix distributed filesystem (in addition to the andrew windows/widgets plus X for motif).

ibm had co-funded athena with dec ... each to the tune of $25m; however ibm had directly funded cmu for $50m ... the organization that did mach, andrew, camelot, etc. there is a joke that ibm paid for transarc three times .... the original cmu funding, a significant investment when it spun off from cmu, and again when it bought transarc outright.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Hard disk architecture: are outer cylinders still faster than inner cylinders?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Hard disk architecture: are outer cylinders still faster than inner cylinders?
Newsgroups: comp.arch
Date: Wed, 04 Aug 2004 07:44:01 -0600
Joe Seigh writes:
It used to be the middle cylinders based on when software did the arm positioning and the middle cylinders being visited twice as often as the inner and outer cylinders. Lynn Wheeler can probably give a lecture on that if he spots this.

Current disks that do command queueing do their own positioning so you don't have as much control. You can specify the block numbers and if you know the physical order in which they're assigned you could probably still get the middle cylinders. But disks do reassign blocks to get around disk defects and with raid and logical volume managers, you often have no idea where the physical location of your blocks really is or what disk even.


way back when .... disk access latency optimization tended to be a lot more significant for frequently used data because of extremely small real memories, impacting the ability to cache frequently used stuff.

so you identify the highest used stuff and try to create locality of reference for arm motion. one could claim that (if arm motion dominates) ... the avg arm distance traveled becomes the driving factor in optimization. placing the highest used data in the middle cylinders and arraying data on either side in decreasing order of use .... will tend to minimize the overall avg arm seek/movement distance. It is sort of analogous to placing a supply depot in the center of a region. over the years ... this has become less of an issue as real memory sizes have increased and the ability to use caching for highly used data has become more prevalent.

when you built the system, executables were effectively laid out on disk at static locations. you could do a lot of frequency-of-use analysis ... but you couldn't actually tell the filesystem the physical location where to place stuff. however, you did know the relatively straightforward logic that the filesystem allocation used.

so way back in these dark ages ... i spent some amount of time carefully re-organizing the system build process ... so that the order in which executables got written to disk resulted in optimizing their physical location on disk. For some common workloads, I was able to increase overall effective system thruput by 300 percent with such techniques (since things were so heavily sensitive to disk arm latencies).
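
... a tiny sketch of the placement idea (python; the module names and use counts are made-up examples) ... ordering things so that, written out sequentially, the most heavily used land in the middle and usage falls off toward both edges (sometimes called an organ-pipe arrangement):

def organ_pipe_order(modules_with_use_counts):
    # most heavily used in the middle, decreasing use toward both edges
    ranked = sorted(modules_with_use_counts, key=lambda m: m[1], reverse=True)
    left, right = [], []
    for i, module in enumerate(ranked):
        (left if i % 2 else right).append(module)
    return list(reversed(left)) + right

modules = [("open", 900), ("close", 850), ("getmain", 400),
           ("loader", 120), ("sort", 60), ("dump", 5)]
print(organ_pipe_order(modules))
# -> [('dump', 5), ('loader', 120), ('close', 850),
#     ('open', 900), ('getmain', 400), ('sort', 60)]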

the other optimization trick was trying to transfer as much data as possible per revolution. back when there was uniform data per cylinder, there were some optimization tricks involving laying data out uniformly on tracks ... so if there were concurrent requests for data on the same cylinder/arm position ... but not necessarily consecutive ... the transfer logic could switch heads; this was back in the days of 15-20 platter/head drives. in some cases you might have multiple queued requests for different data on the same cylinder/arm-position ... but on different tracks. There was some electronic processing latency associated with selecting transfers from different heads. You could carefully re-organize the queue of requests to try and maximize the transfer of data per revolution ... based on knowing the rotational start/end positions of the records.

Another part of this technique involved disks that allowed actual physical record formatting of data on the tracks. If you uniformly laid out the same sized records on each track ... with the start/stop physical positions the same on each track ... it was normally not possible to accomplish a head switch operation within the rotational delay between the end of one record and the start of the next. So a technique was developed of sacrificing some of the total track data capacity by formatting dummy micro-records between each standard data record. The insertion of the dummy micro-records would increase the rotational latency between the end of one normal data record and the start of the next normal data record ... which might be sufficient to mask the electronic latency involved in switching active heads. An example was formatting 3 4k records per 3330 track. There were 19 tracks per cylinder ... and you might have queued requests for

record zero on track 19, record one on track 15, record two on track 12

the software would attempt to organize the transfer requests so all three records were processed in a single revolution.
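
... a toy sketch of that reordering (python; the track/record numbers follow the example above, plus one extra conflicting request added for illustration) ... sort the queued requests on a cylinder by rotational position, and anything sharing the same rotational slot spills to a later revolution:

def plan_revolutions(requests):
    # requests: (track, record) pairs queued for one cylinder/arm position.
    # each revolution can service at most one record per rotational slot;
    # requests sharing a slot (same record number, different track) spill
    # into a later revolution.
    revolutions = []
    for track, rec in sorted(requests, key=lambda r: r[1]):
        for rev in revolutions:
            if all(r[1] != rec for r in rev):
                rev.append((track, rec))
                break
        else:
            revolutions.append([(track, rec)])
    return revolutions

queued = [(19, 0), (15, 1), (12, 2), (7, 1)]
print(plan_revolutions(queued))
# -> [[(19, 0), (15, 1), (12, 2)], [(7, 1)]]
#    (the three records from the example above all fit in one revolution)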

various archaic posts about a presentation (that I made as an undergraduate at an industry user group meeting) on work done manually redoing system build processes in order to optimize the physical position of frequently used executables on disk:
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
https://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14
https://www.garlic.com/~lynn/97.html#22 Pre S/360 IBM Operating Systems?
https://www.garlic.com/~lynn/97.html#28 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/98.html#21 Reviving the OS/360 thread (Questions about OS/360)
https://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/99.html#174 S/360 history
https://www.garlic.com/~lynn/2000c.html#10 IBM 1460
https://www.garlic.com/~lynn/2001.html#26 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001f.html#26 Price of core memory
https://www.garlic.com/~lynn/2001h.html#12 checking some myths.
https://www.garlic.com/~lynn/2001k.html#37 Is anybody out there still writting BAL 370.
https://www.garlic.com/~lynn/2001l.html#39 is this correct ? OS/360 became MVS and MVS >> OS/390
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002m.html#3 The problem with installable operating systems
https://www.garlic.com/~lynn/2002n.html#29 why does wait state exist?
https://www.garlic.com/~lynn/2002q.html#32 Collating on the S/360-2540 card reader?
https://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
https://www.garlic.com/~lynn/2003d.html#72 cp/67 35th anniversary
https://www.garlic.com/~lynn/2004.html#48 AMD/Linux vs Intel/Microsoft

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

fc2, ssh client/server, kernel 494

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: fc2, ssh client/server, kernel 494
Newsgroups: linux.redhat.misc
Date: Wed, 04 Aug 2004 11:49:51 -0600
i've been running fc2 kernel-smp-2.6.6-1.435.2.3 for quite a while

however, ever since yum update to kernel-smp-2.6.7-1.494.2.2

and reboot yesterday ... i've had client SSH connections to the machine consistently experiencing hangs (possibly something causing problems for the ssh server/daemon in kernel 494?)

nothing else seems to be affected.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

what vector systems are really faster at

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: what vector systems are really faster at
Newsgroups: comp.arch
Date: Wed, 04 Aug 2004 12:15:09 -0600
Maynard Handley writes:
I don't know. Greg's answer strikes me as the first I've seen that actually makes sense. Certainly when I (thinking of processors as they are now) saw descriptions of vector processors, I just didn't see what the big deal was. They did NOT (unlike say AltiVec or MMX) load in a long vector in one cycle, perform a bunch of ops, again in one or a few cycles, then write out the result; instead they, once per cycle, loaded in a value, operated on it, and wrote it out, with a little pipelining between stages. Say what? So the big thrill of vector processors is that I can perform one flop/cycle? This may have been exciting back in the days before every processor down to the one in my microwave oven had pipelining and an I-cache, but to someone who expects the processor in his PDA to be not just pipelined but superscalar, it's really not that impressive.

On the other hand, the memory characterization is precisely what is important. All of which would seem to imply that if one wants, usefully, to talk about this sort of class of machine, one would do better to call it a "high-bandwidth memory" machine, or some more marketing-wizzy term, rather than using the essentially meaningless term "vector processor".


an issue is that running a vectorized application may also contribute to organizing data access patterns to optimize the available memory bandwidth.

i'm aware of statements about at least one machine that had highly optimized non-vector floating point execution and memory access ... that already saturated the available memory bandwidth ... and when vector support was added to the machine, it showed no increase in floating point operations (but adding vector mode support possibly provided some marketing hype)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

self correcting systems

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: self correcting systems
Newsgroups: alt.folklore.computers
Date: Thu, 05 Aug 2004 05:41:51 -0600
US Threatens to cap flights at Chicago O'Hare
http://www.reuters.com/newsArticle.jhtml?type=topNews&storyID=5875963

self-correcting systems will tend to absorb &/or adapt to glitches. For systems operating at saturation, it is much more likely that glitches set off reinforcing feedback loops, where elapsed time amplifies the effect of things like delays; delays early in a cycle will tend to lengthen and get worse as the cycle progresses (instead of being damped and disappearing).

past threads about the effects of operating o'hare at saturation:
https://www.garlic.com/~lynn/2000b.html#73 Scheduling aircraft landings at London Heathrow
https://www.garlic.com/~lynn/2000b.html#74 Scheduling aircraft landings at London Heathrow
https://www.garlic.com/~lynn/2001n.html#54 The demise of compaq
https://www.garlic.com/~lynn/2001n.html#62 The demise of compaq
https://www.garlic.com/~lynn/2003o.html#27 When nerds were nerds
https://www.garlic.com/~lynn/2003o.html#33 When nerds were nerds
https://www.garlic.com/~lynn/2003o.html#38 When nerds were nerds
https://www.garlic.com/~lynn/2004.html#3 The BASIC Variations

an example of a natural self-correcting system (when operating under nominal conditions) is the clock page replacement algorithm, which I originally came up with as an undergraduate:
https://www.garlic.com/~lynn/subtopic.html#wsclock

as new pages are requested, the selection process examines pages in a loop. if a page is found that hasn't been referenced, it is selected for replacement and the process stops/checkpoints (resuming from that point the next time a page is needed). if a page has been referenced, the reference indicator is reset and the process continues on to the next page. As more pages are needed, the process will tend to run thru all pages faster, resetting page reference bits. moving thru pages faster will tend to mean that the elapsed time between resetting a reference bit and the next time the page is examined is shorter. cutting the elapsed time between examinations of a page will tend to increase the probability that the page hasn't been referenced.

so when more pages are needed for replacement, the algorithm tends to (naturally/automatically) cut the elapsed time between the times that pages are examined ... cutting the elapsed time increases the probability that pages haven't been referenced, and increasing the probability that pages haven't been referenced will tend to increase the number of pages available for replacement.

however, if too many pages are being produced for replacement ... clock will tend to examine & reset fewer pages per replacement event. if fewer pages are examined per replacement event, it will tend to take clock longer to cycle thru all pages, increasing the interval between the time a page's reference bit is reset and the next time the page is examined. increasing that interval/elapsed time will increase the probability that the page has been referenced, and increasing the probability that pages get referenced tends to produce fewer unreferenced pages for the page replacement algorithm.

so if the algorithm is producing too many pages, it tends to slow down and therefore lengthen the interval between the time a page is reset and the next time it is examined ... tending to increase the probability pages will be referenced and therefore producing fewer pages. if the algorithm is producing too few pages, it tends to speed up and therefore shorten the interval between the time a page is reset and the next time it is examined ... tending to decrease the probability pages will be referenced and therefore producing more pages.

the operational characteristics of the wsclock algorithm implementation tend to naturally adjust the system, without needing any explicit control mechanism for adjusting the interval between when pages are reset and the next time they are examined (in order to optimize the production of replaceable pages for changing replacement rates).
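a minimal python sketch (my own illustration ... not the original cp/67 code) of the clock loop described above:

class Clock:
    def __init__(self, nframes):
        self.referenced = [False] * nframes   # per-frame hardware reference bits
        self.hand = 0                         # where the last scan stopped/checkpointed

    def touch(self, frame):
        # called on every access to a resident page
        self.referenced[frame] = True

    def select_victim(self):
        # sweep from where we left off, resetting reference bits as we go
        while True:
            if self.referenced[self.hand]:
                self.referenced[self.hand] = False          # referenced: give it another chance
                self.hand = (self.hand + 1) % len(self.referenced)
            else:
                victim = self.hand                          # unreferenced: replace it
                self.hand = (self.hand + 1) % len(self.referenced)
                return victim

# the faster victims are demanded, the faster the hand sweeps and the less time
# pages have to get re-referenced ... the self-correcting behavior described above
clock = Clock(8)
for frame in (0, 1, 2, 0, 1):
    clock.touch(frame)
print("replace frame", clock.select_victim())    # frame 3: first unreferenced frame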

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

very basic quextions: public key encryption

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: very basic quextions: public key encryption
Newsgroups: comp.security.ssh
Date: Thu, 05 Aug 2004 16:51:09 -0600
walterbyrd@iname.com (walterbyrd) writes:
Q: What exactly is my private key? How do I get it? Where is it stored? Do I only use it once? Do I have some special program that generates it? Do I have to be using a specific software application that is in sync with the senders software application?

the fundamental technology is asymmetric key cryptography ... a key pair ... where either key can be used for encoding and you need the other key of the pair for decoding (as opposed to symmetric key cryptography, which uses the same key for both encoding and decoding). key generation is somewhat more involved than secret key generation, since there is a complex relationship between the two keys in an asymmetric key pair.

public key cryptography is a business process using asymmetric key cryptography technology. one of the key pair has a business designation of "private" and the other of the key pair has the business designation of "public".

this can be used to address two business problems/opportunities in symmetric key cryptography

1) secure key distribution

the designation of "public" (in theory) means you don't care who is allowed to know your public key ... it can be sprayed all over the world so that everybody has it. this actually only addresses the security confidentiality problem of hiding the key. there is still the security integrity problem of whether somebody can actually know that a specific public key is yours.

2) needing a unique shared-secret (symmetric key) for every relationship

since everybody can know your public key ... there is no security confidentiality requirement for a unique shared-secret for every relationship. worst case (for shared-secret symmetric keys) is N*(N-1)/2 keys, where N is the number of people in the world (and every key might have to be changed every month). The problem reduces to N asymmetric key pairs (or 2*N keys); e.g. for N = 1,000 people that is 499,500 shared secrets vs. 1,000 key pairs.

... so the full process ... replacing an exchanged shared-secret symmetric key ... we each have the other's public key. Instead of symmetrically encrypting the message (where you know only I could have encrypted the message and I know only you can decrypt the message) ....

1) I compute a secure hash (say fips180) of the message and encode the secure hash with my private key. this is typically referred to as the digital signature

2) i combine the message and the digital signature ... and encrypt the combination with your public key.

3) i transmit the message

4) only you, with your corresponding private key, are capable of decrypting the message (nobody else can see it)

5) once you have decrypted the message, you verify the digital signature with my public key; i.e. a) decode the digital signature with my public key to get the original secure hash, b) recalculate the secure hash on the message, c) compare whether the recalculated secure hash and the decoded digital signature secure hash are identical.

only your private key can decode the message and only my public key can verify the digital signature.
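a minimal sketch of steps 1 and 5 (create and then verify the digital signature), using the python "cryptography" package as an assumed toolkit (the description above isn't tied to any particular library); sha-256 stands in for the fips180 secure hash:

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

sender_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sender_public = sender_private.public_key()      # this is the key that gets distributed

message = b"the actual message text"

# step 1: hash the message and encode the hash with the sender's private key
signature = sender_private.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# step 5: the recipient re-hashes the received message and checks it against the
# decoded digital signature, using only the sender's *public* key
try:
    sender_public.verify(
        signature,
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("digital signature verifies: sent by the holder of the private key, unmodified")
except InvalidSignature:
    print("message and/or digital signature altered in transit")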

If there isn't a concern about the secrecy of the data, then it is possible to only digitally sign it ... this still tells you whether or not the data has been modified in transit (integrity) and verifies the origin.

If there isn't a concern about the origin of the data, then it is possible to simply encrypt the data w/o a digital signature. This is sort of like when you send off your credit card number in an SSL encrypted session.

SSL does an additional modification. asymmetric encryption tends to be a lot more expensive than symmetric encryption. So when the client side starts up ... it generates a random session secret key ... which it uses to encrypt the actual data ... and then the session secret key is encrypted with the server's public key. Since only the server's private key can decrypt and obtain the random session secret key ... it is still viewed as a public key operation (from the standpoint of the key distribution problem) ... but the performance is that of symmetric cryptography, since the server then decrypts the actual data with the randomly generated session secret key (that it got from the client). The server doesn't actually know who the client is ... but the client is (pretty) sure that only the server (with the correct private key) will see the actual data.
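a hedged sketch of the hybrid scheme just described ... a random symmetric session key encrypts the bulk data, and only that small session key is encrypted under the server's public key. again this assumes the python "cryptography" package and is only illustrative ... a real SSL/TLS implementation does all of this (and much more) inside the protocol itself:

import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

server_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
server_public = server_private.public_key()        # the client already holds this

# --- client side ---
session_key = AESGCM.generate_key(bit_length=256)  # random session secret key
nonce = os.urandom(12)
data = b"credit card number and order details"
bulk_ciphertext = AESGCM(session_key).encrypt(nonce, data, None)   # cheap symmetric crypto
wrapped_key = server_public.encrypt(                               # expensive asymmetric crypto,
    session_key,                                                   # but only on 32 bytes
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# --- server side ---
recovered_key = server_private.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
print(AESGCM(recovered_key).decrypt(nonce, bulk_ciphertext, None))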

sender/receiver or client/server ... at least need to have the other's public key ... and have support for commonly selected cryptography algorithms (i.e. there can be variables like key sizes and/or specific type of asymmetric cryptography technology).

recent thread discussing RSA and ECC asymmetric cryptography technology issues:
https://www.garlic.com/~lynn/2004h.html#30

recent thread discussion what might a digital signature actually mean:
https://www.garlic.com/~lynn/2004h.html#13
https://www.garlic.com/~lynn/2004h.html#14

recent thread in this (comp.security.ssh) newsgroup about basics of (public) key authentication
https://www.garlic.com/~lynn/2004h.html#21
https://www.garlic.com/~lynn/2004h.html#23
https://www.garlic.com/~lynn/2004h.html#32
https://www.garlic.com/~lynn/2004h.html#37

which included some side-thread about whether some of these questions were class assignments ... and the etiquette of using usenet for doing homework

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

New Method for Authenticated Public Key Exchange without Digital Certificates

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New Method for Authenticated Public Key Exchange without Digital Certificates
Newsgroups: sci.crypt
Date: Thu, 05 Aug 2004 17:05:10 -0600
amicrypt@amishare.com (Allen Pulsifer) writes:
Enclosed is a paper discussing a new method to authenticate the exchange of public keys without using digital certificates. The protocol has one step involving human intervention, specifically, it requires human operators to verify the identity of one another and compare two short strings.

The primary use for this protocol would be to bootstrap a secure channel. Remarkably, we have found no papers or documented protocols on how to achieve this.


no documented protocols for secure key exchange, or no documented protocols for secure public key exchange?

in general there is a lot written on secure or out-of-band channels for secure key exchange ... mostly having to do with symmetric keys. i just finished some comments (in another n.g.) about asymmetric vis-a-vis symmetric with regard to this subject:
https://www.garlic.com/~lynn/2004h.html#47

the issue for symmetric key exchange is both hiding the keys and whether the exchange can be trusted ... asymmetric key exchange may eliminate the need to hide the keys ... but doesn't eliminate the trust problem.

In fact, all the root trust keys ... even in the PKI & digital certificate paradigm ... have this issue ... however, frequently the question of how the environment is initially populated with the initial root trust keys is left as an exercise for the student (or it is taken for granted as magically happening).

at its simplest ... one could claim that the whole PGP environment implements such an infrastructure ... the ability to perform public key exchange w/o requiring certificates from certification authority trust roots.

as an aside ... i've long advocated "naked" public keys and that certificates frequently are redundant and superfluous.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Univac 9200, 9300: the 360 clone I never heard of!

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Univac 9200, 9300: the 360 clone I never heard of!
Newsgroups: alt.folklore.computers
Date: Thu, 05 Aug 2004 15:59:10 -0600
hancock4@bbs.cpcn.com (Jeff nor Lisa) writes:
The S/360 Spectra architecture was character (byte) oriented. Univac's product line used word oriented. So Univac set up a separate product line, the 9000 series. These were basically low end S/360s, but with their own operating system. Ours used RPG as a programming language (also available on S/360 though I don't think that widely used).

I believe that ibm RPG was widely used in some market segments ... but the overall s/360 market was so large that it dwarfed (and obscured) the part that used RPG ... even if the ibm RPG market segment was larger than other vendors' RPG market size.

i've made a similar assertion about ibm time-sharing ... which seems to get overlooked with all the attention paid to ibm commercial batch processing ... even if the ibm time-sharing market size was significantly larger than other companies' ... it gets lost being dwarfed by the commercial batch processing market size.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

New Method for Authenticated Public Key Exchange without Digital Certificates

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New Method for Authenticated Public Key Exchange without Digital Certificates
Newsgroups: sci.crypt
Date: Thu, 05 Aug 2004 17:33:47 -0600
Mok-Kong Shen <mok-kong.shen@t-online.de> writes:

https://www.garlic.com/~lynn/subtopic.html

That page contains lots and lots of links. Could you please tell which is the relevant one for the current issue?


mea culpa, finger (brain?) slip
https://www.garlic.com/~lynn/2004h.html#47

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

New Method for Authenticated Public Key Exchange without Digital Certificates

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New Method for Authenticated Public Key Exchange without Digital Certificates
Newsgroups: sci.crypt
Date: Thu, 05 Aug 2004 20:28:49 -0600
Mok-Kong Shen <mok-kong.shen@t-online.de> writes:
Do you simply mean having CAs is 'actually' no better than having none but employing the 'PGP environment' or do you mean that the latter allows real certification (knowledge of authenticity of the public keys of the persons involved) to be achieved beyond 'any' doubt? Thanks.

in many of the current deployments, you have root trust keys delivered magically to you ... with much less control and knowledge than you normally exercise in loading PGP keys.

the current browsers have magic tables of trust root keys ... for which the individual has little or no awareness.

however, the (CA) tables of keys (at a client) are equivalent to PGP tables of keys ... and theoretically an individual has the opportunity to exercise as much control over the CA tables of trust root keys as they exercise over the PGP keys that they have loaded

so the whole PKI/CA scenario is a chain of trust .... and the overall trust in a chain of trust is less than or equal to the trust in the trust root; where the trust root is, at best equivalent to the PGP process ... and frequently much worse.

now, the PKI/CA scenario, in addition to being dependent on the trust root (which can be the key tables preloaded into browsers ... at best functionally equivalent to PGP key tables where individuals have exercised some direct control) ... has a whole bunch of additional processes ... each having various exploits and vulnerabilities.

so the base PKI/CA scenario has
T(pki) <= T(pgp)

where T(pki) represents the trust for the typical PKI trust root .... plus PKI/CAs have a whole bunch of additional processes (total strangers following some number of unknown processes to vet some number of other total strangers) ... each with their own unique vulnerabilities and exploits.

so one way of expressing the actual PKI/CA infrastructure trust is to sum the vulnerabilities in the additional PKI/CA processes:
SUM(Vi, i=1,n)

where Vi is some additional vulnerability for some PKI/CA business process.

then the overall PKI/CA trust would be
T(pki) - SUM(Vi, i=1,n)

i.e. the base trust for the PKI trust root infrastructure minus the sum of the possible vulnerabilities in the PKI-unique additional business processes.

so I assert that T(pki) can be made at best equivalent to T(pgp), if an individual exercises the same care before loading PKI root keys into their tables as they exercise when loading PGP keys into their tables ... and that
T(pki) <= T(pgp)

and therefore it is highly likely that
T(pki) - SUM(Vi, i=1,n) < T(pgp)

The other way of stating it is that PGP is a lot more KISS and therefore has many fewer ways of failing ... while PKI infrastructures frequently have a large number of unknown processes implemented by total strangers ... and, just because of the greater complexity, have a lot larger number of ways of failing.

That is almost totally separate from my standard naked public key argument for online environments.

Nominally, I would claim that the CA/PKI original design point was an offline environment where the relying party had no other recourse to the information ... and therefore stale, static certificates were better than nothing (sort of like letters of credit from the ancient sailing ship days).

Furthermore, the majority of the TTP CA/PKI operations haven't even been the authoritative agency for the information that they are certifying.

In any case, most operations are rapidly transforming themselves into online operations (if they haven't already), where relying parties can have online access to timely, non-stale information that is a superset of anything that might be contained in a certificate. For example, most of the certificate-based protocols for financial operations from the mid-90s would involve either

1) forcing operations that had been online for 20-30 years ... back into an offline paradigm (i.e. in a practical sense having them regress their mode of operations by 30 years)

2) or transmitting certificates as part of an operation that continued to be online ... but effectively ignoring the certificate and using the online information in its place ... making the certificate redundant and superfluous. or possibly not quite ... in some scenarios the redundant and superfluous certificate would increase the transaction payload by two orders of magnitude (for no otherwise useful purpose but to increase bandwidth utilization).

so slightly related is a thread i started on dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm17.htm#25 Single Identity. Was: PKI International Consortium
https://www.garlic.com/~lynn/aadsm17.htm#55 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm17.htm#57 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm17.htm#59 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#0 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#1 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#2 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#3 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#4 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#6 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#12 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#13 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#17 should you trust CAs? (Re: dual-use digital signature vulnerability)

and for total topic drift ... various mentions specifically about redundant and superfluous certificates causing two orders of magnitude payload bloat:
https://www.garlic.com/~lynn/aadsm13.htm#10 X.500, LDAP Considered harmful Was: OCSP/LDAP
https://www.garlic.com/~lynn/aadsm15.htm#5 Is cryptography where security took the wrong branch?
https://www.garlic.com/~lynn/aadsm17.htm#4 Difference between TCPA-Hardware and a smart card (was: examp le: secure computing kernel needed)
https://www.garlic.com/~lynn/aadsm17.htm#41 Yahoo releases internet standard draft for using DNS as public key server
https://www.garlic.com/~lynn/aadsm17.htm#54 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm18.htm#1 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#5 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm18.htm#6 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aepay10.htm#76 Invisible Ink, E-signatures slow to broadly catch on (addenda)
https://www.garlic.com/~lynn/2000f.html#15 Why trust root CAs ?
https://www.garlic.com/~lynn/2001f.html#79 FREE X.509 Certificates
https://www.garlic.com/~lynn/2003g.html#47 Disk capacity and backup solutions
https://www.garlic.com/~lynn/2003k.html#66 Digital signature and Digital Certificate
https://www.garlic.com/~lynn/2004g.html#5 Adding Certificates

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

New Method for Authenticated Public Key Exchange without Digital Certificates

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New Method for Authenticated Public Key Exchange without Digital Certificates
Newsgroups: sci.crypt
Date: Fri, 06 Aug 2004 08:01:46 -0600
Mok-Kong Shen <mok-kong.shen@t-online.de> writes:
Thank you for the clarification. I have however one probably rather dumb question: You have said quite a bit about T(pki) but nothing about T(pgp) in my view. What makes up T(pgp), i.e. what does it consist of in concrete terms (and how are people to have/generate trust with respect to them)?

so i ask to exchange keys with somebody ... an exchange that possibly happens over a period of several days. i then check their key with some key server ... again, random events over a period of several days. this is for relatively low value considerations. for a man-in-the-middle attack ... somebody has to be constantly monitoring all of my packets and all of their packets ... and looking for very specific packets, out of a whole load of different packets, to modify/replace.

while such an extended man-in-the-middle attack isn't impossible ... the cost to mount such an attack is going to be substantially larger than the benefit ... i.e. they have to identify the specific packets that are the public key exchange ... replace the public key values ... and then monitor all future traffic and carefully replace all traffic that involves those specific public key operations. and all of this for what would be relatively low value operations.

for higher value situations the key's fingerprint is exchanged out of band.

in the secret/symmetric key scenario ... all you have to do is eavesdrop ... since just knowing the value ... allows for both impersonation as well as future eavesdropping. for public key spoofing, the attack is much more massive ... since just knowing the public key isn't sufficient ... a carefully crafted public key replacement has to be performed ... and then every possible future public key operation has to be intercepted and replaced.

so one can talk about man-in-the-middle attacks on PGP public key exchanges as vulnerabilities. however, the man-in-the-middle attacks themselves have vulnerabilities ... either party may detect a substituted public key or the possibility of one (say some public key communication was missed and got thru w/o being correctly substituted ... and the end-points find something amiss). this fragility (of successful man-in-the-middle attacks) is in addition to the countermeasures of out-of-band key fingerprint exchanges (which are semi-analogous to what is being described in the original posting) or direct in-person exchanges of public keys.

so an ongoing man-in-the-middle substitution attack on a typical PGP public key exchange is a fairly massive and expensive ongoing undertaking that involves maintaining the substitution transparency ... and the man-in-the-middle attack is therefore vulnerable to either party becoming aware of the substitution. the man-in-the-middle attack on PGP public key exchange substitution is also vulnerable to out-of-band key exchange and/or out-of-band key fingerprint exchange.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

New Method for Authenticated Public Key Exchange without Digital Certificates

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New Method for Authenticated Public Key Exchange without Digital Certificates
Newsgroups: sci.crypt
Date: Fri, 06 Aug 2004 08:06:40 -0600
you wrote:
Hello Anne & Lynn,

If you already have a secure channel (i.e., secret and authenticated), it would seem straightforward to use that secure channel to exchange keys and bootstrap a new secure channel.

The problem we were interested in solving is how to bootstrap a secure channel without a pre-existing shared-secret or a pre-existing secure channel that can be used to exchange a secret. The protocol we described does not at any point rely on an out-of-band secure channel to exchange a secret. We were unable to find a prior publication or documented protocol that addresses this problem.


the analogous operation in PGP public key exchange is the out-of-band key fingerprint exchange ... say over telephone or fax ... or even face-to-face. the issue is getting a compact representation that can be easily communicated via some relatively low-bandwidth channel (like telephone voice).
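a small python sketch of what a key fingerprint amounts to ... a short hash of the serialized public key that two people can read to each other over the phone. this assumes the python "cryptography" package for the key handling; PGP and ssh each compute fingerprints over their own key formats, so this is purely illustrative:

import hashlib
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives import serialization

public_key = rsa.generate_private_key(public_exponent=65537, key_size=2048).public_key()
der = public_key.public_bytes(encoding=serialization.Encoding.DER,
                              format=serialization.PublicFormat.SubjectPublicKeyInfo)

digest = hashlib.sha256(der).hexdigest()
# group the leading hex digits so they are easy to read aloud and compare
fingerprint = " ".join(digest[i:i + 4] for i in range(0, 40, 4))
print("fingerprint:", fingerprint)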

another scenario is analogous to steganography as a countermeasure to a man-in-the-middle key substitution attack ... send an audio clip of the spoken key fingerprint ... or encode the key fingerprint in a visual graphic ... say a jpeg file. the man-in-the-middle key substitution attack then has to get extremely complex about monitoring all forms of communication ... to maintain transparency regarding the substitution.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

New Method for Authenticated Public Key Exchange without Digital Certificates

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New Method for Authenticated Public Key Exchange without Digital Certificates
Newsgroups: sci.crypt
Date: Fri, 06 Aug 2004 09:11:23 -0600
aka ... the assertion is that the problems of getting public keys loaded into the client public key trust table, protecting the client public key trust table, etc ... are identical problems for both CA/PKI infrastructures and PGP infrastructures.

the difference is that once the PGP public keys are loaded into the client public key trust table ... they can be used directly ... while once the CA root trust public keys are loaded into the client public key trust table ... the fun is just beginning; there are an enormous number of additional business processes in CA/PKI infrastructure ... that each have their own vulnerabilities and threats.

further, the assertion is that the CA/PKI convoluted, complex infrastructure design was never intended for an online environment ... and therefore when used as an alternative to a solution targeted for an online environment, it can be at a severe disadvantage.

The CA/PKI convoluted, complex infrastructure design point was for an offline environment where the relying party didn't otherwise have any recourse to validation information. It was targeted as an electronic version of the ancient letters-of-credit solution from the sailing ship days.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

New Method for Authenticated Public Key Exchange without Digital Certificates

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New Method for Authenticated Public Key Exchange without Digital Certificates
Newsgroups: sci.crypt
Date: Fri, 06 Aug 2004 09:21:51 -0600
aka ... the assertion is that the objective is a countermeasure to man-in-the-middle public key substitution attacks ... one that can be communicated efficiently using some nominal human-oriented, out-of-band method.

one such method is to ... simultaneously or in any order ... communicate people's public keys electronically and also communicate their public key fingerprints by any other method (telephone call, fax, graphic or audio encoding, etc). The public key communications and the fingerprint communications can happen either synchronously or asynchronously ... or in any combination.

In shared-secret, symmetric key exchange ... the issue is both the integrity of the key and the confidentiality of the key. In public key exchange, it is only necessary to validate the integrity of the key exchange. The common integrity checks for public key exchange involve some out-of-band process ... and, if via a nominal human communication mechanism, require some sort of efficient encoding. Effectively that is what public key fingerprints are targeted at providing.

However, for man-in-the-middle public key substitution attacks to be successful, they have to maintain key substitution transparency across all possible methods of communication which might indicate public key values. So the MITM public key substitution attacks are also vulnerable to convoluted encoding methods of either the public key or the public key fingerprint ... including transmission of such convoluted encodings "in-band" (like alternatives to traditional text transmission ... such as audio or graphics). However, for the highest level of assurance and integrity ... out-of-band processes are normally recommended.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

New Method for Authenticated Public Key Exchange without Digital Certificates

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New Method for Authenticated Public Key Exchange without Digital Certificates
Newsgroups: sci.crypt
Date: Fri, 06 Aug 2004 09:28:21 -0600
Michael Amling writes:
The problem is certainly ignored in the downloading of web browsers. I've never seen even https offered for downloading a browser, and even if it were, how would the https connection be validated? Granted, you could spend a few dollars and get the browser on CD, but I've never known anyone to do that. And it wouldn't answer the question about which of the five dozen root certificates the browser recognizes are worth trusting.

a possibly unique vulnerability of the CA/PKI public key client trust file ... is that many of the currently deployed CA/PKI infrastructures treat all public keys in the CA public key client trust table as of equal value.

In the PGP public key client trust file ... you actually need to have replaced the specific public key for the entity that impersonation is planned for. In the CA model ... all that is necessary is to have added a CA public key to the table ... not actually replaced any public key value ... i.e. an "additional key" exploit/vulnerability different from the substitution key vulnerability. While a substitution key attack requires capturing and substituting all possible communication involving that public key .... an "addition key" attack on the CA/PKI infrastructure just involves getting an additional key into the table ... and then being able to strategically "invoke" that specific CA key for validation at some later time.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

New Method for Authenticated Public Key Exchange without Digital Certificates

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New Method for Authenticated Public Key Exchange without Digital Certificates
Newsgroups: sci.crypt
Date: Fri, 06 Aug 2004 13:13:32 -0600
... as an aside ... given the current state of end-point vulnerabilities, it only takes a little bit of effort with some sort of convoluted encoding of a key fingerprint (say in an audio or graphic transmission) to make the effort of maintaining a man-in-the-middle attack facade (i.e. transposing all communication between the substituted key and the real key) significantly more expensive than mounting an end-point attack (getting viruses & trojans inserted into the end-point machine quickly becomes less expensive than attempting to maintain an ongoing man-in-the-middle attack operation).

previous posts in this thread:
https://www.garlic.com/~lynn/2004h.html#48
https://www.garlic.com/~lynn/2004h.html#50
https://www.garlic.com/~lynn/2004h.html#51
https://www.garlic.com/~lynn/2004h.html#52
https://www.garlic.com/~lynn/2004h.html#53
https://www.garlic.com/~lynn/2004h.html#54
https://www.garlic.com/~lynn/2004h.html#55
https://www.garlic.com/~lynn/2004h.html#56

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

New Method for Authenticated Public Key Exchange without Digital Certificates

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New Method for Authenticated Public Key Exchange without Digital Certificates
Newsgroups: sci.crypt
Date: Fri, 06 Aug 2004 15:01:46 -0600
Mok-Kong Shen <mok-kong.shen@t-online.de> writes:
I am very sorry for my poor knowledge and hence having many dumb questions. I continue to surmise that without CAs there would be stuff that wouldn't go as fine. For CAs are by definition what common people can trust (whether that's inherently reasonable or not is irrelevant for the current discussion). Now without CAs how are the parties who don't know one another personally going to establish trust between them at all for doing electronic transactions? In particular, how does one know that a public key claimed to be one of a specific firm is really genuine? I don't yet see a good way for that. Note as analogy that even some number of normal (non-electronic) transactions may (under circumstances) need a third party that enjoys the trust of the ones doing the transactions as mediator. (Cf. notary for contracts, civil registry office for marriage, etc.)

ok ... try SSL domain name server certificates and associated infrastructure for something called electronic commerce ... minor reference:
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

one of the motivating factors for SSL domain name server certificates was concern over the integrity of the domain name infrastructure and things like ip-address take-over ... and/or how do i know that the server i think i'm talking to is really the server i'm talking to.

so somebody applies to a certification authority for an SSL domain name server certificate. in many cases the certification authority isn't the authoritative agency as to the owner of the domain name. therefore the certification authority has to contact the authoritative agency for domain name ownership ... which is the domain name infrastructure (the very operation whose integrity is in question ... which gave rise to much of the motivation for SSL domain name server certificates in the first place).

so, somewhat motivated by the certification authority industry, there is a proposal that when somebody registers a domain name with the domain name infrastructure, they also register a public key. then, when the entity applies to a certification authority for an SSL domain name server certificate, they digitally sign the request. The certification authority then just has to retrieve the naked public key on file with the domain name infrastructure to validate the digital signature on the SSL domain name server certificate request.

currently, there is identification information on file with the domain name infrastructure as to the owner of the domain name. in the existing SSL domain name server certificate application scenario, the applicant would supply identification information, and the certification authority then would have to execute a complex, time-consuming, expensive, error-prone process that attempts to match the identification information supplied with the SSL domain name server certificate application to the identification information on file with the domain name infrastructure.

in the proposal to have naked public keys on file with the domain name infrastructure, the certification authority would just have to retrieve the naked public key from the domain name infrastructure to validate the digital signature on the SSL domain name server certificate application. this replaces a time-consuming, expensive, complex, and error-prone identification process with a simple, inexpensive, straightforward, strong authentication process.
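a hedged python sketch of the authentication step just described: the registrant digitally signs the certificate application, and the certification authority validates the signature against the public key on file with the domain name infrastructure. the dns_registry_lookup function is hypothetical (how the on-file key is actually retrieved is outside the scope of the sketch) and an RSA key is assumed:

from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.exceptions import InvalidSignature

def on_file_public_key(domain):
    # hypothetical: fetch the PEM public key that the registrant placed on file
    # with the domain name infrastructure when the domain was registered
    pem = dns_registry_lookup(domain)        # assumed, out-of-scope lookup
    return serialization.load_pem_public_key(pem)

def validate_cert_application(domain, application_bytes, signature):
    key = on_file_public_key(domain)
    try:
        key.verify(signature, application_bytes,
                   padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                               salt_length=padding.PSS.MAX_LENGTH),
                   hashes.SHA256())
        return True      # simple, inexpensive, strong authentication
    except InvalidSignature:
        return False     # application didn't come from the domain's registrant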

the catch-22s for the certification authority industry are:

1) if communication related to the correct domain name can be authenticated with (naked) public keys on file with the domain name infrastructure, it improves the integrity of the domain name infrastructure, which mitigates the motivation for needing SSL domain name server certificates

2) if there are public keys on file with the domain name infrastructure which can be retrieved by certification authorities for validating digital signatures on SSL domain name server certificate applications, then it would be possible for other people to also retrieve on-file public keys to validate digital signatures on other kinds of communication. The ability to distribute on-file public keys related to domain names could then totally subsume the need for distributing public keys via SSL domain name server certificates.

some past threads mentioning notary infrastructures:
https://www.garlic.com/~lynn/aadsm5.htm#ocrp Online Certificate Revocation Protocol
https://www.garlic.com/~lynn/aadsm5.htm#ocrp2 Online Certificate Revocation Protocol
https://www.garlic.com/~lynn/aadsm5.htm#ocrp3 Online Certificate Revocation Protocol
https://www.garlic.com/~lynn/aadsm5.htm#ocrp4 Online Certificate Revocation Protocol
https://www.garlic.com/~lynn/aepay7.htm#3dsecure 3D Secure Vulnerabilities? Photo ID's and Payment Infrastructure
https://www.garlic.com/~lynn/aadsm18.htm#4 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/2002o.html#10 Are ssl certificates all equally secure?

actually, some of the notary issues ... don't so much come up with respect to valid certificates ... but come up with respect to real-time proof of intention related to signatures ... demonstrating that the signer intends, agrees, approves, and/or authorizes what is being digitally signed. This is different from using digital signatures for strictly authentication purposes, demonstrating origin. In fact, I've made the assertion that there may be compromises where there is dual-use of the same key-pair for both authentication and demonstrating intent/agreement/approval/authorization ... specifically, some authentication protocols may send random challenges which the receiver digitally signs (w/o reading) and returns. An attack on digital signatures in the sense of intent, agreement, approval, and/or authorization ... is to send valid information in place of the random challenge as part of an authentication protocol. misc pieces of the dual-use thread:

https://www.garlic.com/~lynn/aadsm17.htm#25 Single Identity. Was: PKI International Consortium
https://www.garlic.com/~lynn/aadsm17.htm#55 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm17.htm#57 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm17.htm#59 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#0 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#1 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#2 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#3 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#4 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#6 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#12 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#13 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#17 should you trust CAs? (Re: dual-use digital signature vulnerability)

random past references to electronic commerce and SSL domain name server certificate requirements
https://www.garlic.com/~lynn/aadsm6.htm#terror [FYI] Did Encryption Empower These Terrorists?
https://www.garlic.com/~lynn/aadsm6.htm#terror10 [FYI] Did Encryption Empower These Terrorists?
https://www.garlic.com/~lynn/aadsm7.htm#cryptofree Erst-Freedom: Sic Semper Political Cryptography
https://www.garlic.com/~lynn/aadsm8.htm#softpki Software for PKI
https://www.garlic.com/~lynn/aadsm8.htm#softpki3 Software for PKI
https://www.garlic.com/~lynn/aadsm8.htm#softpki6 Software for PKI
https://www.garlic.com/~lynn/aadsm8.htm#softpki10 Software for PKI
https://www.garlic.com/~lynn/aadsm8.htm#softpki11 Software for PKI
https://www.garlic.com/~lynn/aadsm8.htm#softpki12 Software for PKI
https://www.garlic.com/~lynn/aadsm8.htm#softpki14 DNSSEC (RE: Software for PKI)
https://www.garlic.com/~lynn/aadsm8.htm#softpki20 DNSSEC (RE: Software for PKI)
https://www.garlic.com/~lynn/aadsm9.htm#cfppki CFP: PKI research workshop
https://www.garlic.com/~lynn/aadsm9.htm#cfppki5 CFP: PKI research workshop
https://www.garlic.com/~lynn/aadsmore.htm#client3 Client-side revocation checking capability
https://www.garlic.com/~lynn/aadsm10.htm#cfppki20 CFP: PKI research workshop
https://www.garlic.com/~lynn/aadsm11.htm#36 ALARMED ... Only Mostly Dead ... RIP PKI .. addenda II
https://www.garlic.com/~lynn/aadsm11.htm#43 PKI: Only Mostly Dead
https://www.garlic.com/~lynn/aadsm12.htm#4 NEWS: 3D-Secure and Passport
https://www.garlic.com/~lynn/aadsm12.htm#52 First Data Unit Says It's Untangling Authentication
https://www.garlic.com/~lynn/aadsm12.htm#67 Offline Root CA with valid CRL hierachie
https://www.garlic.com/~lynn/aadsm13.htm#25 Certificate Policies (addenda)
https://www.garlic.com/~lynn/aadsm13.htm#26 How effective is open source crypto?
https://www.garlic.com/~lynn/aadsm13.htm#32 How effective is open source crypto? (bad form)
https://www.garlic.com/~lynn/aadsm13.htm#33 How effective is open source crypto? (bad form)
https://www.garlic.com/~lynn/aadsm13.htm#35 How effective is open source crypto? (bad form)
https://www.garlic.com/~lynn/aadsm13.htm#37 How effective is open source crypto?
https://www.garlic.com/~lynn/aadsm14.htm#39 An attack on paypal
https://www.garlic.com/~lynn/aadsm15.htm#4 Is cryptography where security took the wrong branch?
https://www.garlic.com/~lynn/aadsm15.htm#7 Is cryptography where security took the wrong branch?
https://www.garlic.com/~lynn/aadsm15.htm#8 Is cryptography where security took the wrong branch?
https://www.garlic.com/~lynn/aadsm15.htm#9 Is cryptography where security took the wrong branch?
https://www.garlic.com/~lynn/aadsm15.htm#10 Is cryptography where security took the wrong branch?
https://www.garlic.com/~lynn/aadsm15.htm#11 Resolving an identifier into a meaning
https://www.garlic.com/~lynn/aadsm15.htm#25 WYTM?
https://www.garlic.com/~lynn/aadsm15.htm#26 SSL, client certs, and MITM (was WYTM?)
https://www.garlic.com/~lynn/aadsm15.htm#27 SSL, client certs, and MITM (was WYTM?)
https://www.garlic.com/~lynn/aadsm15.htm#28 SSL, client certs, and MITM (was WYTM?)
https://www.garlic.com/~lynn/aadsm17.htm#18 PKI International Consortium
https://www.garlic.com/~lynn/aadsm18.htm#17 should you trust CAs? (Re: dual-use digital signature vulnerability)
https://www.garlic.com/~lynn/aepay10.htm#37 landscape & p-cards
https://www.garlic.com/~lynn/aepay10.htm#75 Invisible Ink, E-signatures slow to broadly catch on (addenda)
https://www.garlic.com/~lynn/aepay10.htm#76 Invisible Ink, E-signatures slow to broadly catch on (addenda)
https://www.garlic.com/~lynn/aepay10.htm#77 Invisible Ink, E-signatures slow to broadly catch on (addenda)
https://www.garlic.com/~lynn/aepay10.htm#78 ssl certs
https://www.garlic.com/~lynn/aepay10.htm#79 ssl certs
https://www.garlic.com/~lynn/aepay10.htm#81 SSL certs & baby steps
https://www.garlic.com/~lynn/aepay10.htm#82 SSL certs & baby steps (addenda)
https://www.garlic.com/~lynn/2000e.html#50 Why trust root CAs ?
https://www.garlic.com/~lynn/2000e.html#51 Why trust root CAs ?
https://www.garlic.com/~lynn/2001c.html#8 Server authentication
https://www.garlic.com/~lynn/2001c.html#9 Server authentication
https://www.garlic.com/~lynn/2001d.html#8 Invalid certificate on 'security' site.
https://www.garlic.com/~lynn/2001e.html#27 Can I create my own SSL key?
https://www.garlic.com/~lynn/2001e.html#33 Can I create my own SSL key?
https://www.garlic.com/~lynn/2001e.html#35 Can I create my own SSL key?
https://www.garlic.com/~lynn/2001e.html#36 Can I create my own SSL key?
https://www.garlic.com/~lynn/2001e.html#37 Can I create my own SSL key?
https://www.garlic.com/~lynn/2001e.html#39 Can I create my own SSL key?
https://www.garlic.com/~lynn/2001e.html#40 Can I create my own SSL key?
https://www.garlic.com/~lynn/2001e.html#43 Can I create my own SSL key?
https://www.garlic.com/~lynn/2001e.html#46 Can I create my own SSL key?
https://www.garlic.com/~lynn/2001g.html#2 Root certificates
https://www.garlic.com/~lynn/2001g.html#55 Using a self-signed certificate on a private network
https://www.garlic.com/~lynn/2001h.html#4 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001h.html#6 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001j.html#10 PKI (Public Key Infrastructure)
https://www.garlic.com/~lynn/2001k.html#6 Is VeriSign lying???
https://www.garlic.com/~lynn/2001m.html#37 CA Certificate Built Into Browser Confuse Me
https://www.garlic.com/~lynn/2001m.html#41 Solutions to Man in the Middle attacks?
https://www.garlic.com/~lynn/2001n.html#57 Certificate Authentication Issues in IE and Verisign
https://www.garlic.com/~lynn/2001n.html#73 A PKI question and an answer
https://www.garlic.com/~lynn/2002d.html#47 SSL MITM Attacks
https://www.garlic.com/~lynn/2002e.html#56 PKI and Relying Parties
https://www.garlic.com/~lynn/2002e.html#60 Browser Security
https://www.garlic.com/~lynn/2002e.html#72 Digital certificate varification
https://www.garlic.com/~lynn/2002g.html#65 Real man-in-the-middle attacks?
https://www.garlic.com/~lynn/2002h.html#68 Are you really who you say you are?
https://www.garlic.com/~lynn/2002j.html#38 MITM solved by AES/CFB - am I missing something?!
https://www.garlic.com/~lynn/2002j.html#59 SSL integrity guarantees in abscense of client certificates
https://www.garlic.com/~lynn/2002k.html#11 Serious vulnerablity in several common SSL implementations?
https://www.garlic.com/~lynn/2002k.html#51 SSL Beginner's Question
https://www.garlic.com/~lynn/2002m.html#30 Root certificate definition
https://www.garlic.com/~lynn/2002m.html#64 SSL certificate modification
https://www.garlic.com/~lynn/2002m.html#65 SSL certificate modification
https://www.garlic.com/~lynn/2002n.html#2 SRP authentication for web app
https://www.garlic.com/~lynn/2002n.html#16 Help! Good protocol for national ID card?
https://www.garlic.com/~lynn/2002p.html#9 Cirtificate Authorities 'CAs', how curruptable are they to
https://www.garlic.com/~lynn/2002p.html#10 Cirtificate Authorities 'CAs', how curruptable are they to
https://www.garlic.com/~lynn/2002p.html#12 Cirtificate Authorities 'CAs', how curruptable are they to
https://www.garlic.com/~lynn/2002p.html#21 Cirtificate Authorities 'CAs', how curruptable are they to
https://www.garlic.com/~lynn/2003.html#63 SSL & Man In the Middle Attack
https://www.garlic.com/~lynn/2003.html#66 SSL & Man In the Middle Attack
https://www.garlic.com/~lynn/2003d.html#40 Authentification vs Encryption in a system to system interface
https://www.garlic.com/~lynn/2003d.html#71 SSL/TLS DHE suites and short exponents
https://www.garlic.com/~lynn/2003l.html#36 Proposal for a new PKI model (At least I hope it's new)
https://www.garlic.com/~lynn/2003l.html#43 Proposal for a new PKI model (At least I hope it's new)
https://www.garlic.com/~lynn/2003l.html#45 Proposal for a new PKI model (At least I hope it's new)
https://www.garlic.com/~lynn/2003l.html#46 Proposal for a new PKI model (At least I hope it's new)
https://www.garlic.com/~lynn/2003l.html#51 Proposal for a new PKI model (At least I hope it's new)
https://www.garlic.com/~lynn/2003l.html#54 Proposal for a new PKI model (At least I hope it's new)
https://www.garlic.com/~lynn/2003l.html#55 Proposal for a new PKI model (At least I hope it's new)
https://www.garlic.com/~lynn/2003l.html#57 Proposal for a new PKI model (At least I hope it's new)
https://www.garlic.com/~lynn/2003l.html#60 Proposal for a new PKI model (At least I hope it's new)
https://www.garlic.com/~lynn/2003p.html#20 Dumb anti-MITM hacks / CAPTCHA application
https://www.garlic.com/~lynn/2004b.html#41 SSL certificates
https://www.garlic.com/~lynn/2004h.html#28 Convince me that SSL certificates are not a big scam

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

New Method for Authenticated Public Key Exchange without Digital Certificates

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New Method for Authenticated Public Key Exchange without Digital Certificates
Newsgroups: sci.crypt
Date: Fri, 06 Aug 2004 17:02:51 -0600
Anne & Lynn Wheeler writes:
ok ... try SSL domain name server certificates and associated infrastructure for something called electronic commerce ... minor reference:
https://www.garlic.com/~lynn/subtopic.html


oops, that finger check again ... posts on something called electronic commerce, commerce server, and something called payment gateway:
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/


