List of Archived Posts

2005 Newsgroup Postings (03/16 - 04/05)

[Lit.] Buffer overruns
[Lit.] Buffer overruns
[Lit.] Buffer overruns
Computerworld Article: Dress for Success?
Thou shalt have no other gods before the ANSI C standard
He Who Thought He Knew Something About DASD
He Who Thought He Knew Something About DASD
Misuse of word "microcode"
[Lit.] Buffer overruns
Making History
He Who Thought He Knew Something About DASD
He Who Thought He Knew Something About DASD
Device and channel
Device and channel
Misuse of word "microcode"
Device and channel
Device and channel
Where should the type information be?
Thou shalt have no other gods before the ANSI C standard
Device and channel
He Who Thought He Knew Something About DASD
He Who Thought He Knew Something About DASD
PKI: the end
Where should the type information be?
PKI: the end
PKI: the end
PKI: the end
PKI: the end
Computerworld Article: Dress for Success?
Using the Cache to Change the Width of Memory
Computerworld Article: Dress for Success?
Public/Private key pair protection on Windows
Stop Me If You've Heard This One Before
Stop Me If You've Heard This One Before
Thou shalt have no other gods before the ANSI C standard
Thou shalt have no other gods before the ANSI C standard
Where should the type information be?
Where should the type information be?
xml-security vs. native security
xml-security vs. native security
xml-security vs. native security
xml-security vs. native security
xml-security vs. native security
Actual difference between RSA public and private keys?
Using the Cache to Change the Width of Memory
TLS-certificates and interoperability-issues sendmail/Exchange/postfix
Using the Cache to Change the Width of Memory
Using the Cache to Change the Width of Memory
Mozilla v Firefox
Mozilla v Firefox
Mozilla v Firefox
TLS-certificates and interoperability-issues sendmail/Exchange/postfix
Where should the type information be?
System/360; Hardwired vs. Microcoded
Mozilla v Firefox
Mozilla v Firefox
Where should the type information be?
System/360; Hardwired vs. Microcoded
Mozilla v Firefox
System/360; Hardwired vs. Microcoded
Mozilla v Firefox
Mozilla v Firefox
TLS-certificates and interoperability-issues sendmail/Exchange/postfix
Mozilla v Firefox
Graphics on the IBM 2260?
Mozilla v Firefox
Mozilla v Firefox

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Wed, 16 Mar 2005 09:26:07 -0700
CBFalconer writes:
You were on the right track, but late. Unfortunately Bowles & Co. had done a fouled job of it some time earlier, and Brinch Hansen had pointed the way, but failed to implement the application (sequential Pascal) language correctly (at least from the Pascal communities viewpoint). Too bad.

the los gatos vlsi lab had been using metaware to develop language technology for a number of things ... customized grammars for some specialized things ... a language for developing design tools, applications, specialized database technology targeted at integrating logical and physical design, etc.

the language/compiler was first released to customers as pascal "iup" (installed user program) and eventually evolved into vs/pascal (available on both mainframe and aix).

when the language work was first starting off ... i was also involved in some of the system/r (original relational/sql) stuff in bldg. 28
https://www.garlic.com/~lynn/submain.html#systemr

as well as some of the bldg.29 (los gatos lab) stuff for doing a different kind of database for various uses (including attempting to integrate logical and physical chip design). some of that is the precursor to the knowledge stuff i currently use for glossaries, taxonomies and the rfc index:
https://www.garlic.com/~lynn/index.html

reference to metaware's tws manual
https://www.garlic.com/~lynn/2004d.html#71 What terminology reflects the "first" computer language ?

other metaware references:
https://www.garlic.com/~lynn/2000e.html#20 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2002n.html#66 Mainframe Spreadsheets - 1980's History
https://www.garlic.com/~lynn/2002q.html#19 Beyond 8+3
https://www.garlic.com/~lynn/2003h.html#52 Question about Unix "heritage"
https://www.garlic.com/~lynn/2004f.html#42 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004n.html#30 First single chip 32-bit microprocessor
https://www.garlic.com/~lynn/2004q.html#35 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004q.html#38 CAS and LL/SC
https://www.garlic.com/~lynn/2004q.html#39 CAS and LL/SC
https://www.garlic.com/~lynn/2004q.html#61 will there every be another commerically signficant new ISA?
https://www.garlic.com/~lynn/2005b.html#14 something like a CTC on a PC

other vs/pascal refs:
https://www.garlic.com/~lynn/99.html#36 why is there an "@" key?
https://www.garlic.com/~lynn/2000.html#15 Computer of the century
https://www.garlic.com/~lynn/2002d.html#18 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002q.html#27 Beyond 8+3
https://www.garlic.com/~lynn/2003b.html#33 dasd full cylinder transfer (long post warning)
https://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
https://www.garlic.com/~lynn/2003c.html#77 COMTEN- IBM networking boxes
https://www.garlic.com/~lynn/2003k.html#26 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2003k.html#63 SPXTAPE status from REXX
https://www.garlic.com/~lynn/2003o.html#16 When nerds were nerds
https://www.garlic.com/~lynn/2003o.html#21 TSO alternative
https://www.garlic.com/~lynn/2004c.html#25 More complex operations now a better choice?
https://www.garlic.com/~lynn/2004d.html#41 REXX still going strong after 25 years
https://www.garlic.com/~lynn/2004g.html#19 HERCULES
https://www.garlic.com/~lynn/2004g.html#27 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004k.html#34 August 23, 1957
https://www.garlic.com/~lynn/2004m.html#33 Shipwrecks
https://www.garlic.com/~lynn/2004q.html#33 Listserv for TCPIP
https://www.garlic.com/~lynn/2005.html#51 something like a CTC on a PC
https://www.garlic.com/~lynn/2005b.html#20 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005b.html#63 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005b.html#65 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#11 Cerf and Kahn receive Turing award
https://www.garlic.com/~lynn/2005d.html#30 The Mainframe and its future.. or furniture

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Wed, 16 Mar 2005 12:17:41 -0700
Anne & Lynn Wheeler writes:
i had put together a proposal in the 82/83 time-frame to address most of these issues ... but it became involved in large corporation politics and got way too much attention and one characterization would be that it reached (blackhole) critical mass and imploded. there was some study of various (portable?) languages that could be used for operating system implementation and their associated integrity characteristics. I did a couple demos of taking existing kernel components (implemented in assembler) and redesigning and recoding them from scratch in (enhanced) pascal.

reference to an internal conference i held at research in march of '82 on the theme
https://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party

there is also mention of review that was held for a proposal by some people in the valley for the corporation to build some of these new workstation machines. there were a number of internal groups that were claiming they were already doing something better, so the offer was declined, and people making the proposal had to go off and do their own startup (three letters, 1st is an S).

minor bibliography from one of the documents later in '83:
The C Programming Language, Kernighan and Ritchie, Prentice-Hall, 1978

A comparison of Language C and Pascal, Allen xxxxxxxx, IBM Cambridge Scientific Center, G320-2128, Aug. 1979

A comparison of Modula-2 and Pascal/VS, A. xxxxxxxx, IBM Palo Alto Scientific Center, March 1983

Comparison of Various Pascals, Internal IBM document, Diane xxxxxxxx, 21G/032, Boca Raton, Sept. 1982

Computer-Mediated Communication Systems, Kerr and Hiltz, Academic Press, 1982

A Dump Reader (DR/I), Internal IBM document, Marc xxxxxx, YKTVMV/xxxxxx

The Evolution of User Behavior in a Computerized Conferencing System, Hiltz and Turoff, Comm. of ACM, Nov. 1981

FAPL Language Manual, Internal IBM Document, David xxxxxxxx, Research Triangle Park, Raleigh, Dec. 1982

Introduction To The PL.8 Language, Internal IBM document, Martin xxxxxxxx, YKTVMX/xxxxxxxx, May, 1979

MetaWare(tm) TWS User's Manual, Franklin L. DeRemer, Thomas J. Pennello, Santa Cruz

The Network Nation -- Human Communication via Computer, Hiltz and Turoff, Addison-Wesley, 1978

Invitation to ADA & ADA Reference Manual, Harry Katzan, Jr., Petrocelli Books, July 1980

The LSRAD Report, SHARE Inc., December 1979

Modula-2, Niklaus Wirth, Springer-Verlag, 1982.

Organic Design for Command and Control, Col. John Boyd (ret), talk given in May, 1983 at IBM Los Gatos Lab.

Paltry, Internal IBM computer conference system, Ron xxxxxxxx, YKTVMV/xxxxxxxx

Pascal/VS Language Reference Manual, IBM Corp. SH20-6168, April 1981

Pascal/VS Programmer's Guide, IBM Corp. SH20-6162, April 1981

Pascal User Manual and Report, Jensen and Wirth, Springer-Verlag, 1974

PL/S III Language Reference Manual, IBM Corp. ZZ28-7001, March 1980

Program Verification using ADA, A.D. McGettrick, Cambridge University Press, 1982

Programming Language/Advanced Systems (PLAS) Specification, T. xxxxxxxx, IBM PL/AS Development, January 1983.

Proving Concurrent Systems Correct, Richard Karp, Stanford Verification Group Report No. 14, November 1979.

REX Reference Manual, Internal IBM document, Mike xxxxxxxx, VENTA/xxxxxxxx.

SCOPE 3.1.2 Reference Manual, Control Data Corporation, 60189400, Oct. 1968

Smalltalk-80 - The Language and its Implementation, Goldberg and Robson, Addison-Wesley, 1983

Software Engineering Notes, ACM SIGSOFT, Vol 6, no. 4, August 1981

Source Level Debug User's Guide, Internal IBM Document, xxxxxxxx, et al. (PLKSB/xxxxxxxx), Jan. 1983

Stanford Pascal Verifier User Manual, Stanford Verification Group Report No. 11, March 1979.

STL-Debugging-Tool User Manual, xxxxxxxx & xxxxxxxx, IBM internal document, 1982

System Product Editor, Command and Macro Reference, IBM Corp. SC24-5221, March 1982

Systems Inventory Manager, Bob xxxxxxxx, IBM San Jose GPD TR03.038, August 1977.

Theory of Compiler Specification and Verification, Wolfgang Polak, Stanford Program Verification Group Report No. 17, May 1980.

TSS Time Sharing Support System PLM, IBM Corp. GY28-2022, Sept. 1971.

Trends in the Design and Implementation of Programming Languages, William Wulf, Computer, January 1980.

VM/Interactive Problem Control System Extension, IBM Corp. SC34-2020, Sept. 1979.

Why Pascal is Not My Favorite Programming Language, Brian Kernighan, Bell Laboratories, Computer Science Report No. 100,


--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Wed, 16 Mar 2005 12:32:03 -0700
Anne & Lynn Wheeler writes:
Organic Design for Command and Control, Col. John Boyd (ret), talk given in May, 1983 at IBM Los Gatos Lab.

and, of course, some of postings mentioning john boyd
https://www.garlic.com/~lynn/subboyd.html#boyd

and various other web pages that also mention boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Computerworld Article: Dress for Success?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computerworld Article: Dress for Success?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 17 Mar 2005 08:55:42 -0700
dnobles@ibm-main.lst (David Nobles) writes:
There was an article in the Washington Post a couple of years ago where the author stated you could correlate the decline of the US with the decline in dress standards. In essence, that sloppy dress = sloppy attitudes = sloppy work ethics.

I personally tend to fall between the author and Bob. I definitely prefer casual dress, live with business casual on most sites but agree there are times to dress up and not just for interviews.

Regardless of my views I need to take in account those of my intended audience.


similar assertions have been made regarding language use.

note, however, both dress and language use have tended to be correlated with a high degree of team-related compatibility, interaction, uniformity, etc.

one of boyd's themes in the early 80s
https://www.garlic.com/~lynn/subboyd.html#boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2

was that a significant amount of american corporate culture was heavily influenced by organization structure and training from ww2 (a significant percentage of corporate executives having gotten their early training in large-organization management during ww2).

The issue at the start of ww2 was that there was a huge and rapid build-up in personnel but very few people with significant experience. this led to trying to create a rigidly top-down control structure ... attempting to create uniform, interchangeable units (at the low level, because of their lack of experience) that operated in unison under direction of the few individuals available with any experience.

Boyd contrasted this with Guderian's orders for the blitzkrieg, which were directed at giving the people on the spot the greatest independent (tactical) autonomy and decision latitude. the contrast was that Guderian's forces supposedly had much more experience in their craft. One of the points that Boyd used in support of his thesis was that the german army was something like 3 percent officers compared to something like 15-17 percent officers in the american army. The claim was that the significantly larger percentage of officers was necessary in order to maintain the rigid, top-down, heavily structured control operation. Part of this was the theme of Boyd's talk on Organic Design for Command and Control.

In any case, the large-organization management command and control at the start of WW2 was specifically oriented towards operations involving large numbers of people that had no experience at what they were doing ... and therefore it was necessary to have the minimum amount of local autonomy and the maximum amount of uniformity and top-down, rigid, structured control.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Thou shalt have no other gods before the ANSI C standard

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Thou shalt have no other gods before the ANSI C standard
Newsgroups: sci.crypt,alt.folklore.computers
Date: Thu, 17 Mar 2005 10:03:08 -0700
"Hank Oredson" writes:
At one place I worked they really wanted me to have the title "Senior Computer Scientist". Didn't argue with them, the culture needed that title to explain what it was that I was expected to do. There was little "science" involved in what I did, and quite a lot of cat herding. A better title would have been "Cat Herder" or "Advisor to the Less Senior Peons". At a couple other places I did essentially the same work but was called "Principal Application Analyst" and "Senior Member of the Technical Staff". That last one seemed to make sense, since I was over 50 and not doing managment at the time :-)

for most of my career, i managed to get by with no title on my business card (although it was one of the first business cards to have an email address) or at most 'staff member'. for a time, i worked for somebody for whom the ceo had a box of business cards made up with the title of corporate gadfly.

we were once on a business trip overseas visiting a company/culture that was very sensitive to rank and had carefully orchestrated seating on both sides of a long table. after the morning coffee break, he switched seats with me (which placed me opposite their ceo).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

He Who Thought He Knew Something About DASD

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: He Who Thought He Knew Something About DASD
Newsgroups: bit.listserv.ibm-main
Date: Fri, 18 Mar 2005 08:25:11 -0700
shmuel+ibm-main@ibm-main.lst (Shmuel Metz , Seymour J.) writes:
1. DASD need not be disk, and some, e.g., 2301, 2321, have not been. I suspect that eventually nonrotating DASD will displace disks.

2. I'd be worried if drives were failing frequently, even if the failures were nondisruptive.

3. These days people have huge DASD farms, so even with higher reliability I'd expect them to see more failures unless the reliability improvements keep pace with the growth.


in the early 80s, there was a 1655 (sjr/bldg. 28 had a couple; they were from another vendor) ... which emulated a 2305 fixed-head device ... and was nonrotating ... dram, volatile, so primarily targeted as a paging device.

the story i was told was that they were dram that had failed normal memory acceptance tests in various ways ... but were suitable for use as an i/o device using various compensating processes by the i/o device controller.

a couple years ago, the mtbf failure numbers i saw for commodity drives had gone from tens of thousands of hrs to nearing a million hrs. some number of vendors then started packaging raid arrays of such devices ... so even such failures were masked.

numerous past dasd related posts:
https://www.garlic.com/~lynn/submain.html#dasd

misc. post about work with the dasd engineering lab (bldg. 14) and dasd product test lab (bldg. 15)
https://www.garlic.com/~lynn/subtopic.html#disk

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

He Who Thought He Knew Something About DASD

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: He Who Thought He Knew Something About DASD
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 18 Mar 2005 08:48:34 -0700
Anne & Lynn Wheeler writes:
a couple years ago, the mtbf failure numbers i saw for commodity drives had gone from tens of thousands of hrs to nearing a million hrs. some number of vendors then started packaging raid arrays of such devices ... so even such failures were masked.

... aka improved by a factor of 10-20 times ... however you have some configurations where the number of drives has gone up by a hundred times.

for some topic drift, there typically is an assumption with raid that the variance/distribution in mtbf is at least some minimum value .... you hopefully don't have specific failure modes where, once one drive fails, there is a high probability that you will have other drive failures within a short period of time (in the past there have been specific cases where that has been known to happen). even if the mtbf is a million hrs ... raid would still have a problem if there is a failure mode that clusters a large number of drive failures at exactly a million hrs.
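
a minimal back-of-the-envelope sketch of why the mtbf improvement can be swamped by fleet size (assuming independent, exponentially distributed failures; the mtbf figures and drive counts below are hypothetical, chosen only to illustrate the point):

HOURS_PER_YEAR = 24 * 365

def expected_failures_per_year(mtbf_hours, num_drives):
    # expected failures/year for the whole fleet = fleet-hours / mtbf
    return num_drives * HOURS_PER_YEAR / mtbf_hours

# older drives: 50k-hr mtbf, small 10-drive configuration
print(expected_failures_per_year(50_000, 10))        # ~1.8 failures/yr
# newer drives: 1m-hr mtbf, but a 1000-drive farm
print(expected_failures_per_year(1_000_000, 1000))   # ~8.8 failures/yr

i.e. a 20x mtbf improvement can be more than offset by a 100x increase in drive count.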

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Misuse of word "microcode"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Misuse of word "microcode"
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 18 Mar 2005 09:02:21 -0700
Roger Ivie writes:
You missed it. He was pointing out that in current AS/400 systems, the microcode is executed by a PowerPC.

which is what the fort knox and other projects from 1980 were going to do ... the objective was to take the myriad of corporate microprocessors and transition them all to 801 (controllers, devices, low-end 370s, etc). in some sense, the whole power thing was an outgrowth of that. romp was originally targeted as the processor for an opd displaywriter follow-on. when that project got killed, it was decided to retarget the platform to the unix workstation market. they hired the company that had done the at&t unix port for the pc (pc/ix) to do one for romp ... which was then announced as pc/rt and aix. rios/power chips were the follow-on for pc/rt (romp).

somerset was then spawned to do a single-chip 801/powerpc ... that also did things like change some amount of the cache semantics to support cache coherency in smp shared-memory configurations (aka rios/power cache semantics were strictly non-cache coherent).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Fri, 18 Mar 2005 10:44:54 -0700
Lawrence Statton N1GAK/XE2 writes:
I respectfully disagree.

I think that is an over-generalization.

It has nothing to do with whether the customers are internal or external, or whether there is money involved, but the variance among the customer needs, and the organizations' dedication to the product.

As to the "make any money" part: Some organizations, including the one I work for, *do* apportion the cost of internal software across the various departments. While it all comes out of the same pie at the top of the organization, if the operations people want the engineering group to implement some special feature, they will have to allocate some fraction of this year's budget to that feature, and will have to consume so many fewer paper clips, or whatever other cutbacks they need.


lots of organizations have oscillated between the two extremes .... in-house operations seen as providing significant competitive advantage to various business units, as opposed to being a distraction from the core business focus that doesn't provide sufficient competitive advantage to justify the distraction. this extends all the way down to there being a significant culture clash between the core business people and people in various in-house services. in addition, numerous corporate cultures feel more at ease with contractual relationships (with things like penalties), which can be a major problem when dealing with internal organizations.

purely internal operations have typically been viewed as cost centers and it has been frequently difficult for them to demonstrate return on investment. this is behind a great deal of the outsourcing that periodically goes on.

i got involved in a different approach back as an undergraduate. boeing was spinning off much of their data processing into BCS ... to operate as an independent business operation ... able to market to both "internal" customers as well as "external" ones. over spring break i was conned into giving a one-week class to the initial bcs technical staff (bcs was still getting off the ground and was still working on absorbing/digesting existing datacenters).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Making History

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Making History
Newsgroups: alt.folklore.computers
Date: Fri, 18 Mar 2005 10:56:32 -0700
some possibly archeological interest; Making History
https://web.archive.org/web/20190524015712/http://www.ibmsystemsmag.com/mainframe/stoprun/Stop-Run/Making-History/

part of the above refers to lots of postings in this newsgroup.

there is a slight misprint ... I joined CSC (as in cambridge science center, as opposed to any other CSC) ... before transferring to SJR after seven years.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

He Who Thought He Knew Something About DASD

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: He Who Thought He Knew Something About DASD
Newsgroups: bit.listserv.ibm-main
Date: Fri, 18 Mar 2005 12:15:01 -0700
ptduffy042@ibm-main.lst (Peter Duffy) writes:
Are you refering to situations with RAID5? As I understand how RAID5 works one disk isn't useful in recovering anything.

frequently you have N+1 drives in RAID5 (for no single point of failure ... any single failed disk can be reconstructed from the N remaining disks) ... there have also been things like 32+8 raid disk configurations for more robust levels of recovery.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

He Who Thought He Knew Something About DASD

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: He Who Thought He Knew Something About DASD
Newsgroups: bit.listserv.ibm-main
Date: Fri, 18 Mar 2005 12:31:44 -0700
ptduffy042@ibm-main.lst (Peter Duffy) writes:
Are you refering to situations with RAID5? As I understand how RAID5 works one disk isn't useful in recovering anything.

the issue is like some of those crime shows where the police recover segment fragments and garner interesting pieces of information from the fragments ... there is no intention of implying that a production system/volume could be reconstructed from a single drive.

RAID5 is a variation on the N+M ... frequently N+1. In the dedicated parity drive(s) configuration ... the parity drive(s) doesn't get accessed except for writes and failures. RAID5 rotates the records around so that parity records are spread across all drives. In a transaction oriented system where individual transaction records can be 1/N of a block ... then all transaction traffic can be spread across N+M arms ... as opposed to just N arms ... and individual arms independently scheduled for transaction read traffic.

transaction writes in raid5 can be a little more complicated. a typical scenario is to first read both the individual transaction record to be replaced and the parity record for that block. the parity record is then recomputed w/o the contents of the record being replaced ... and then recomputed again with the contents of the new/replaced record. then both the new record and the parity record are written (to their respective disks). of course when you are replacing all N records in a raid5 block ... then it is possible to directly compute the new M parity records and then perform the N+M writes w/o first having to do any reads.
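
a minimal sketch of the single-parity (xor) arithmetic behind the read-modify-write just described, plus the single-drive reconstruction mentioned earlier; the helper names are hypothetical, and real raid5 implementations add parity rotation, caching and torn-write handling:

def xor(a, b):
    # bytewise xor of two equal-length records
    return bytes(x ^ y for x, y in zip(a, b))

def small_write_parity(old_record, new_record, old_parity):
    # read old record + old parity, "remove" the old record from the parity,
    # then "add" the new record; caller then writes new_record and the result
    return xor(xor(old_parity, old_record), new_record)

def full_stripe_parity(records):
    # replacing all N records: compute the parity directly, no reads required
    parity = bytes(len(records[0]))
    for rec in records:
        parity = xor(parity, rec)
    return parity

def rebuild_lost_record(surviving_records, parity):
    # any single failed drive's record is the xor of the survivors and the parity
    rebuilt = parity
    for rec in surviving_records:
        rebuilt = xor(rebuilt, rec)
    return rebuilt

# example: 3+1 stripe of 4-byte records
stripe = [b"aaaa", b"bbbb", b"cccc"]
p = full_stripe_parity(stripe)
assert rebuild_lost_record(stripe[1:], p) == stripe[0]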

all of this is further complicated by possible failure scenarios where there is something like a power failure ... and some subset of the records have been written but not all. special procedures are necessary for recovery for partial writes caused by various types of interruptions.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Device and channel

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Device and channel
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 18 Mar 2005 13:49:41 -0700
ronhawkins@ibm-main.lst (Ron and Jenny Hawkins) writes:
ESCON runs at 200Mb/sec which translates to 20MB/sec. The standards definition of FCP is very strict, with rates defined at 1Gb, 2Gb, 4Gb and soon 10Gb. This is not advertising, it is how FCP is defined and is the common usage in the industry - no one buys a 200MB/sec HBA or a 100MB/sec switch - they buy 1Gb or 2Gb.

Interesting enough, the Gb in this case is binary, meaning 1073741824 bits/sec. FCP also uses a 10 bit byte, which means it is a 107MB/sec, 215MB/sec or 429MB/sec protocol. Using 100, 200 and 400MB/sec is simply rounding and does not represent the full frame rate.


escon was also half-duplex (to maintain compatibility with bus&tag?) and frequently is rated at 17mbytes/sec (aggregate) ... the half-duplex characteristic then also made effective thruput sensitive to distance latency.

escon had been lying around pok for quite a while before it got out (some of it possibly dating back to when my wife did her stint in POK in charge of loosely-coupled architecture).

one of the austin engineers had taken much of the escon spec, turned it into full-duplex, up'ed its thruput by about 10% to 220mbits/sec, and converted to much less expensive optical drivers. this was available as "SLA" (or serial link adapter) on original rs/6000.

then he started work on upgrading it to 800mbits/sec. at that time my wife and i had been doing some stuff with LLNL with respect to their filesystem in conjunction with cluster operation
https://www.garlic.com/~lynn/subtopic.html#hacmp

which eventually became a product called unitree (we also spent some time with ncar on mesa archival and some number of other locations that had developed cluster environment filesystem support).

in that time-frame LANL was doing work to standardize cray channel as HiPPI, LLNL was trying to standardize some serial-copper technology they were involved in as fiber channel standard, and slac was backing fiber SCI (scalable coherent interface).

we helped convince the SLA engineer to give up the 800mbits/sec SLA and go to work on the FCS standards committee ... where he quickly became the editor of the FCS standards document. FCS started out being 1gbit full-duplex (simultaneous in each direction, 2gbit aggregate, as compared to escon's 200mbit half-duplex).

full-duplex operation not only provided twice the aggregate thruput (compared to half-duplex operation), but most of the full-duplex protocols also involved asynchronous operation ... which significantly mitigated any long-haul latency that might be involved in some FCS deployments.

later some number of POK channel engineers became involved in the FCS standards effort and there was lots of contention (on the mailing list as well as in meetings) where they were attempting to map traditional IBM half-duplex device I/O operation on top of the native asynchronous operational environment (for one thing, state that was expected to be preserved in a half-duplex environment is prone to being reset in a full-duplex, asynchronous environment).

total SLA business trivia ... we had an interesting business problem with SLA ... trying to convince other vendors to incorporate SLA hardware into their products (and allow interoperability with rs/6000). Turns out that there is an internal business process between locations about transfer of pieces (in this case the SLA chips) that resulted in an N-times cost multiplier.

Unfortunately, from the plant producing the SLA chip to outside vendors there were three internal location transfers ... with each location following the internal business process transfer rules, which resulted in a 3*N cost multiplier for SLA chips to outside companies.

random past posts mention escon, ficon, fcs, sci, hippi, etc:
https://www.garlic.com/~lynn/96.html#5 360 "channels" and "multiplexers"?
https://www.garlic.com/~lynn/96.html#15 tcp/ip
https://www.garlic.com/~lynn/97.html#5 360/44 (was Re: IBM 1130 (was Re: IBM 7090--used for business or
https://www.garlic.com/~lynn/98.html#30 Drive letters
https://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP?
https://www.garlic.com/~lynn/98.html#49 Edsger Dijkstra: the blackest week of his professional life
https://www.garlic.com/~lynn/99.html#54 Fault Tolerance
https://www.garlic.com/~lynn/2000c.html#22 Cache coherence [was Re: TF-1]
https://www.garlic.com/~lynn/2000c.html#56 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#59 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#68 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000d.html#14 FW: RS6000 vs IBM Mainframe
https://www.garlic.com/~lynn/2000f.html#31 OT?
https://www.garlic.com/~lynn/2001c.html#69 Wheeler and Wheeler
https://www.garlic.com/~lynn/2001e.html#22 High Level Language Systems was Re: computer books/authors (Re: FA:
https://www.garlic.com/~lynn/2001.html#12 Small IBM shops
https://www.garlic.com/~lynn/2001.html#18 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001.html#46 Small IBM shops
https://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
https://www.garlic.com/~lynn/2001j.html#23 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#5 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#22 ESCON Channel Limits
https://www.garlic.com/~lynn/2001l.html#14 mainframe question
https://www.garlic.com/~lynn/2001m.html#25 ESCON Data Transfer Rate
https://www.garlic.com/~lynn/2001m.html#56 Contiguous file system
https://www.garlic.com/~lynn/2002d.html#7 IBM Mainframe at home
https://www.garlic.com/~lynn/2002e.html#5 What goes into a 3090?
https://www.garlic.com/~lynn/2002e.html#7 Bus & Tag, possible length/distance?
https://www.garlic.com/~lynn/2002e.html#26 Crazy idea: has it been done?
https://www.garlic.com/~lynn/2002e.html#31 Hardest Mistake in Comp Arch to Fix
https://www.garlic.com/~lynn/2002e.html#32 What goes into a 3090?
https://www.garlic.com/~lynn/2002f.html#6 Blade architectures
https://www.garlic.com/~lynn/2002f.html#7 Blade architectures
https://www.garlic.com/~lynn/2002f.html#11 Blade architectures
https://www.garlic.com/~lynn/2002f.html#60 Mainframes and "mini-computers"
https://www.garlic.com/~lynn/2002g.html#33 ESCON Distance Limitations - Why ?
https://www.garlic.com/~lynn/2002h.html#78 Q: Is there any interest for vintage Byte Magazines from 1983
https://www.garlic.com/~lynn/2002.html#10 index searching
https://www.garlic.com/~lynn/2002j.html#30 Weird
https://www.garlic.com/~lynn/2002j.html#78 Future interconnects
https://www.garlic.com/~lynn/2002k.html#42 MVS 3.8J and NJE via CTC
https://www.garlic.com/~lynn/2002l.html#13 notwork
https://www.garlic.com/~lynn/2002m.html#20 A new e-commerce security proposal
https://www.garlic.com/~lynn/2002m.html#73 VLSI and "the real world"
https://www.garlic.com/~lynn/2002n.html#50 EXCP
https://www.garlic.com/~lynn/2002o.html#11 Home mainframes
https://www.garlic.com/~lynn/2002q.html#40 ibm time machine in new york times?
https://www.garlic.com/~lynn/2003b.html#29 360/370 disk drives
https://www.garlic.com/~lynn/2003d.html#37 Why only 24 bits on S/360?
https://www.garlic.com/~lynn/2003d.html#57 Another light on the map going out
https://www.garlic.com/~lynn/2003h.html#0 Escon vs Ficon Cost
https://www.garlic.com/~lynn/2003h.html#3 Calculations involing very large decimals
https://www.garlic.com/~lynn/2003j.html#2 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2003m.html#20 360 Microde Floating Point Fix
https://www.garlic.com/~lynn/2003o.html#54 An entirely new proprietary hardware strategy
https://www.garlic.com/~lynn/2003o.html#64 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2004d.html#6 Memory Affinity
https://www.garlic.com/~lynn/2004d.html#68 bits, bytes, half-duplex, dual-simplex, etc
https://www.garlic.com/~lynn/2004h.html#29 BLKSIZE question
https://www.garlic.com/~lynn/2004j.html#19 Wars against bad things
https://www.garlic.com/~lynn/2004n.html#45 Shipwrecks
https://www.garlic.com/~lynn/2004p.html#29 FW: Is FICON good enough, or is it the only choice we get?
https://www.garlic.com/~lynn/2005d.html#20 shared memory programming on distributed memory model?
https://www.garlic.com/~lynn/2005.html#38 something like a CTC on a PC
https://www.garlic.com/~lynn/2005.html#50 something like a CTC on a PC

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Device and channel

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Device and channel
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 18 Mar 2005 18:55:13 -0700
bblack@ibm-main.lst (Bruce Black) writes:
On FICON, all the CCWs in a chain are sent to the control unit in one frame (very long CCW chains can be sent in multiple frames, but that is unusual). Then the data blocks for each CCW are sent or received. So the channel protocol is quite different and not just sending the ESCON protocol over FCP. I suppose it would be more accurate to say that FICON is "CCWs over FCP", where CCWs are the channel protocol used since S/360 days. --

note that the ficon description is nearly identical to the HYPERchannel remote device adapter (A51x) boxes starting around 1980 or so.

when a couple hundred people from the IMS group in STL/bldg90 were remoted to a bldg about 10 miles away ... they looked at the performance of remote 3270s ... and decided to go with HYPERchannel and "local" 3270s instead. I got to write the device driver to download the CCWs into the memory of the A51x boxes ... which had loads of attached (local) 3270 controllers.

the configuration had HYPERchannel A220s on the local mainframe channels and then pairs of HYPERchannel A71x boxes with T1 (1.5mbit) link and finally some number of A51x boxes at the remote site.

The people that were remoted didn't see any observable difference in 3270 response characteristics between real local 3270s and HYPERchannel local 3270s (over the T1 link). However, there was a side benefit ... getting the local 3274s off the local mainframe channels and replacing them with HYPERchannel A22x adapters improved overall system thruput about 10-15%. It turns out that the HYPERchannel A22x had significantly lower channel busy time (for the same operation) than did real 3274 controllers directly attached to mainframe channels (getting the real 3274s off the mainframe channels significantly lowered channel busy time for doing 327x operations and improved overall system thruput).

There was a problem tho using A510 boxes for disk operations because of the timing-dependent nature of doing search-id/tic operations. Finally NSC came out with the HYPERchannel A515 channel adapter box that was used by NCAR for their cluster filesystems. They sort of used the IBM mainframe as a hierarchical filesystem controller. Various supercomputers on the HYPERchannel network could make a request for some data. The ibm mainframe would stage the data to disk (if not already there) and then download the dasd channel program into the memory of an HYPERchannel A515 box ... and then return a pointer to the channel program. The supercomputer would address the specific A515 invoking the specified channel program ... resulting in the data reads/writes going directly between the supercomputer memory and DASD (w/o having to pass thru the memory of the ibm mainframe).

numerous past posts on HYPERchannel, HSDT, etc.
https://www.garlic.com/~lynn/subnetwork.html#hsdt

for some topic drift ... the original mainframe tcp/ip support could consume a full 3090 cpu getting about 44kbytes/sec thruput. I added RFC1044 (NSC adapter) support to tcp/ip and in tuning at cray research was getting 1mbyte/sec sustained between a cray and a 4341-clone (using only a modest amount of the 4341).

misc rfc1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

some other random HYPERchannel drift ... for the original HYPERchannel driver i had done for STL/bldg.90 ... if there was an error that couldn't be recovered within the driver ... a channel check i/o interrupt was simulated. some years later ... something like a year after 3090 first customer ship ... some ras guy from pok tracked me down. turns out the industry erep reporting service was showing something like five times the expected number of channel checks for customer 3090 operation. It turned out they had tracked it down to some small number of HYPERchannel drivers simulating channel checks (for things like unrecoverable errors on HYPERchannel channel extenders over telco T1 links). The point in simulating a channel check was to kick off various operating system retry operations. After some analysis, I determined that simulating an IFCC (interface control check) instead of a channel check would result in effectively the same retry operations.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Misuse of word "microcode"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Misuse of word "microcode"
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 18 Mar 2005 19:13:19 -0700
"del cecchi" writes:
Gee, you didn't mention the other fiascos, like workplace os and pink. Or Bach and Beethoven. Ahh the lies that were told, the foils that were pitched.

Gee my old lasalle ran great. Those were the days.


they've created a reputation for me to uphold
https://web.archive.org/web/20190524015712/http://www.ibmsystemsmag.com/mainframe/stoprun/Stop-Run/Making-History/

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Device and channel

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Device and channel
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 19 Mar 2005 08:02:08 -0700
cruff@ruffspot.net () writes:
The NCAR Mass Storage System (MSS) is an archive server abused by being treated as a shared file system server. The mainframe (now an used early model z/8xx something, I can't ever remember the model number even though I walk past it several times a week) runs the PL/1 software that manages the operation of the system. The user submits a request to a local daemon, which contacts the management software on OS/390 (we've not quite gotten to z/OS yet). The management software mounts a tape if necessary, allocates data path resources and gives the location of the data on the device (disk/tape) to the daemon on the user's system, which then manages the transfer by directly building the CCW programs on the remote system and driving the storage device directly.

i think it was in the early 80s that some fed. gov. legislation passed regarding technology re-use and commercialization ... trying to promote technology transfer into the commercial sector (some of the early internet commercialization may have benefited from such legislative action).

llnl lincs -> unitree was another such effort ... we contributed some of the funding and support for that activity.

another was ncar's mesa archival (attempted?) spin-off ... which we also spent some amount of time with. someplace we may still have some of their old market studies and business plan. a big part of the mesa archival effort was porting the mainframe code to a unix platform.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Device and channel

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Device and channel
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 19 Mar 2005 08:15:15 -0700
Anne & Lynn Wheeler writes:
llnl lincs -> unitree was another such effort ... we contributed some of the funding and support for that activity.

another was ncar's mesa archival (attempted?) spin-off ... which we also spent some amount of time with. some place we may have some of their old market studies and business plan. big part of mesa archival effort was porting the mainframe code to a unix platform.


and sort of the first spin-off in this genre was LANL's. they had a similar ibm mainframe as a dasd/tape controller serving supercomputers which general atomics was marketing as datatree ... which then somewhat led to the choice of unitree (aka unix) name for llnl's lincs.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be?
Newsgroups: comp.arch.arithmetic,comp.arch,alt.folklore.computers
Date: Sat, 19 Mar 2005 10:25:05 -0700
"Edward A. Feustel" writes:
I agree with you that many of the problems associated with computers and debugging might be eliminated if data was either described by descriptors or typed with tags, or both. I wrote a paper in 1973:

E.A. Feustel, "On the Advantages of Tagged Architectures",

I.E.E.E. Transactions on Computers, Vol. 22, pp. 644-652(Jul 73).

In which I advanced a number of arguments for doing this and for providing "Self-Describing Data". Note that XML seems to have taken up this idea for data to be transferred between machines.


fs (future system) was an effort that was targeted at all sorts of complex hardware descriptors ... there was some analysis that the worst case could have five levels of indirection (in the hardware) for each argument. one of the nails in the fs coffin was some analysis that if you took the highest performance technology available at the time from 195s for building an FS machine, the resulting application thruput would be about that of 370/145 (possibly 10:1 or 20:1 slow-down). in any case, fs was killed before even being announced or publicized. specific refs:
https://www.garlic.com/~lynn/2000f.html#16 FS - IBM Future System

misc. other references
https://www.garlic.com/~lynn/submain.html#futuresys

to some extent, the 801/risc effort in the 70s was born to try to do the exact opposite of the FS effort ... all complexity was moved to the (PL.8) compiler. misc. past 801 posts
https://www.garlic.com/~lynn/subtopic.html#801

the precursor to XML was SGML
https://www.garlic.com/~lynn/submain.html#sgml

and before that was GML which was invented at the science center in 1969:
https://www.garlic.com/~lynn/subtopic.html#545tech

g, m, & l are the last-name initials of three of the people at the science center ... but officially it was "generalized markup language" ... an implementation was added to the cms "script" command for document formatting (script had originally started off with dot-command formatting controls) ... gml allowed the separation/independence of the specification of the document components from the specification of the formatting of those components ... and some applications started using the specification of the document components for things other than formatting. however there were other efforts at the science center along the lines of self-describing data ... one involved years of 7x24 performance monitoring data.

total trivia, the w3c offices are now only a couple blocks from the old science center location at 545 tech sq.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Thou shalt have no other gods before the ANSI C standard

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Thou shalt have no other gods before the ANSI C standard
Newsgroups: sci.crypt,alt.folklore.computers
Date: Sat, 19 Mar 2005 13:18:22 -0700
CBFalconer writes:
Sixty plus years ago I learned not to take a healthy swing at a nail held between thumb and forefinger. This involved various noises, applications of cold water, etc. Today I cannot possibly remember any of the specific instances, but I do retain the lesson reasonably well. Since you distrust this mechanism, please go forth, grasp a nail firmly with the left hand, raise a hammer high and smash the nail into place with at most two blows.

in my youth i did learn to tap/slam an 8penny box ... it took a little longer to get to the point where i could tap/slam a 16penny common (20oz instead of 16oz hammer helped some).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Device and channel

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Device and channel
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 19 Mar 2005 16:52:05 -0700
cruff@ruffspot.net () writes:
Apparently Mesa Archival had a very short life. The NCAR side of the spin off happened just before I started to work there. It was part of the commercialization of federally funded creations. They (Mesa) may have had one customer, but I don't recall if they ever delivered any successful product. Of course, that could all be wrong too, as I only have second hand commentary to go by. I did recently need to recycle a bunch of old green bar printouts of very early MSS code that had been used in support of the spinoff activities.

At NCAR we are still in the progress of slowly evolving the MSS by migrating the OS/390 resident PL/1 functionallity to C/C++ code on POSIX systems. The next large piece being migrated is the ISAM-resident Master File Directory (MFD) and associated code, which is being retargeted to use DB2 running on Unix. The idea is to be able to run on a wide variety of hardware with POSIX compatible operating environments so that the archived data is not stranded if vendors go out of business.


well, it has now been over 15 years since mesa archival was going to port the code from mvs to a unix platform.

some meetings and discussions with people at Mesa Archival had occurred in '88

the middle layer was what we started out calling what became the 3-tier architecture, misc. refs
https://www.garlic.com/~lynn/subnetwork.html#3tier

minor ref.
https://www.garlic.com/~lynn/99.html#201 Middleware - where did that come from?
https://www.garlic.com/~lynn/99.html#202 Middleware - where did that come from?
https://www.garlic.com/~lynn/96.html#16 middle layer
https://www.garlic.com/~lynn/96.html#17 middle layer

and from long ago and far away ...

Date: Wed, 8 Mar 1989 09:16:22 PST
From: wheeler
To: xxxxxxxx

xxxxxxxx, I'm planning on being on the east coast the 1st 2-3 days of next week. On monday we have a NSF meeting. We then will be giving the middle layer software and hardware pitch to several people. We are also looking to meet with xxxxxx (xxxxxx's assistant) to follow-up on details of the middle layer/"file server" presentation. We are also looking to meet with xxxxxx (who is on the ES file server task force) to at least provide some degree of coordination.

The original intention was to spend the rest of the week in San Jose on the gpd/awd "system" file server task force (which is going to run somewhat in parallel with the GPD dasd task force that is currently going on). However, we now find that most of those (gpd) people will be on the east coast for the week on some (other) ES task force.

Because of the ES task force, we are now attempting to get xxxxxx's schedule set-up for a visit to Seattle so we can finally follow-up on the xxxxxx software for the middle layer strategy. While we are in Seattle, we have requests to meet with both Boeing's SCI group (super computer intergration) and some people in BCS (which we would follow-up on). This is assuming that we can get it coordinated with xxxxxx.

The week following next (3/20th) will probably be spent all week in San Jose covering both the DASD taskforce as well as the fileserver taskforce. I'm hoping to get Almaden server people involved in this as well as getting the total middle layer strategy on the table for investigating areas of mutual interest.

In conjunction with both the middle layer strategy and high-speed interconnect we also have pending requests for follow-up meetings with SLAC (on the SLAC/FERMI/CERN connection) and Los Alamos (on the Mesa/archival and middle layer subjects).


... snip ...

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

He Who Thought He Knew Something About DASD

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: He Who Thought He Knew Something About DASD
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 21 Mar 2005 11:51:51 -0700
John_Mattson writes:
This reminds me of something that happened in the early 3380 days. As I remember an installations nice new drives kept dying at a certain time of the day, and they finally traced it to the delivery dock. The doors to the data center were closed when the trucks drove in and out, but when they opened after delivery there was still enough diesel soot in the air to kill the drives. So, have you checked YOUR environmentals lately?

there is the (famous?) folktale of the cdc6600 at berkeley powering down at the same time every week. turns out the whole machine room would go thermal (tuesday at 10am or some such) .... which coincided with the grass watering schedule and a class break (the typical flushing water-use problem).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

He Who Thought He Knew Something About DASD

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: He Who Thought He Knew Something About DASD
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 21 Mar 2005 12:09:16 -0700
edgould@ibm-main.lst (Ed Gould) writes:
Not DASD but semi related. We had a T1 line that would go bonkers at various times of the day. Turns out that the Blankity blank NYers didn't use insulated wire. Everytime the freight elevator would go by the T1 would drop. Grrrrrr I had IBM tied up for weeks looking at traces etc.. I was *NOT* a happy camper.

in the reference about supporting a couple hundred IMS people moving from bldg.90/STL to a location about 10 miles away:
https://www.garlic.com/~lynn/2005e.html#13 Device and channel

... a similar configuration was installed when the FE IMS service people in boulder were being relocated to a bldg. across the street (from the bldg. they had been in that housed the datacenter). Basically HYPERchannel (channel extension) over a T1 link ... but in this case it was using infrared modems on poles on the top of the respective bldgs (effectively providing channel extension so that people would be using and experiencing local 3270 response ... as opposed to what they would get if they were subjected to remote 3270 operation).

there was concern that the weather in the boulder area might adversely affect the signal quality. it turned out not to be as bad as people feared. in a white-out blizzard when nobody was able to make it into work ... we saw only a slight elevation in the bit-error rate.

this was one of our HSDT efforts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

and so we had a multiplexer on the T1 link with (fireberd) bit-error testers running constantly on a 56kbit side-channel monitoring signal/transmission quality. i had written a program that ran on a pc ... simulated a terminal to the serial port of the bit-error tester ... logging all the data and then reducing and plotting the information.
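
the original was a turbo pascal program (a fragment of its header comments appears below); a minimal modern sketch of that kind of serial-port logger, assuming the third-party pyserial package and a hypothetical port name, baud rate and log file, might look like:

import time
import serial   # third-party pyserial package

def log_bit_error_tester(port="COM2", baud=9600, logfile="fireberd.log"):
    # poll the tester's serial port, timestamp each report, and append it to
    # a log file for later reduction/plotting
    with serial.Serial(port, baud, timeout=1) as ser, open(logfile, "a") as log:
        while True:
            line = ser.readline().decode("ascii", errors="replace").strip()
            if line:
                log.write(time.strftime("%Y-%m-%d %H:%M:%S ") + line + "\n")
                log.flush()

if __name__ == "__main__":
    log_bit_error_tester()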

for a little topic drift, there was a different problem that resulted in signal loss. turns out that the infrared modems have a fairly tight footprint and it didn't take much to get the modems out of alignment. It wasn't wind or rain that was identified as causing the problem. It turns out that in the course of the day ... the sun unevenly heated the sides of the building ... causing first one side to expand/contract and then the other side. This asymmetrical expansion/contraction of the sides of the building resulted in the poles (that the modems were mounted on) leaning enough during the course of the day to get out of alignment.

from some turbo pascal archive, long ago and far away (all of this was before snmp) ....
{$V+,R+,B-,C-,U-}   {note: the C- & U- avoids losing type-ahead}
(*
FIREBERD
  minimize ERROR entries in cases of prolonged high-error rate conditions.
    Calculate ERRORs/sec
    Make no more than one entry in ERROR per 15 minutes unless
    ERRORs/sec change by more than 50%.

  minimize ERROR entries for sync lost/sync acquired loops.

ULTRAMUX
  recognize alarm messages
  define status flags for various alarm messages
  when sync. is lost, include fireberd information in screen

COM2 asynch interrupt
  check for Asynch card interrupt ... if not asynch, restore status/regs
  and goto saved IRA (i.e. cascaded IRQ4 interrupt routines)
*)


--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

PKI: the end

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI: the end
Newsgroups: sci.crypt
Date: Mon, 21 Mar 2005 13:23:01 -0700
"tomicmilan@gmail.com" writes:
Today, PKI rely on primary numbers. Humans don't know how to calculate primary numbers. We humans don't know nothing about primary numbers. We can only take all numbers in a range (from 10000000000 to 999999999999) and validate if some of those numbers are primary nums.

One day someone figure out how primary numbers work: it will be the end of PKI. The end of SSL, X.509 certificates, digital signature and encryption as we know it.

Are there algorithms for digital signature and encryption which doesn't require private/public key pairs? That doesn't rely on primary numbers? I know for HMAC, enything else?


PKI is a business process that makes use of asymmetric key cryptography (aka one key is used for encryption, a different key is used for decryption, and it is difficult to determine the key used for encryption from either the encrypted information and/or the key used for decryption).

the fundamental business process for PKI is taking an asymmetric key pair, where one of the keys is consistently kept private and the other key is allowed to be public.

assuming that the fundamental business processes for protecting and using the "private key" are met, a relying party may infer, from the verification of a digital signature, something regarding the access and use of the "private key" as satisfying some form of 3-factor authentication.

besides rsa ... fips186-2 also defines ec/dsa as a digital signature algorithm ... see the nist digital signature standard reference for more information:
http://csrc.nist.gov/cryptval/dss.htm
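purely as an illustration of the technology piece (not the business process piece), a minimal python sketch of ec/dsa key-pair generation, signing and verification ... assumes the third-party "cryptography" package; the message text is made up:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# generate an asymmetric key pair; the business process requires that
# one key (the "private key") is consistently kept private while the
# other (the "public key") may be freely registered/distributed
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

message = b"some bits to be authenticated"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# a relying party holding only the public key verifies the signature;
# successful verification implies access to and use of the private key
try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("signature verifies ... implies something you have")
except InvalidSignature:
    print("signature does not verify")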

within the context of 3-factor authentication
something you know
something you have
something you are


... misc. other posts
https://www.garlic.com/~lynn/subintegrity.html#3factor

most deployed PKIs don't require people to memorize the private key, so that eliminates something you know authentication. also most deployed PKIs don't make the private key dependent on some biometric characteristic ... which then rules out something you are authentication. fundamentally that leaves the "private key" being something you have authentication. Since a "private key" is just a sequence of numbers ... it is somewhat prone to replication, and once that happens, it will be difficult to assert something you have authentication. it is the business process of public/private key infrastructure that takes asymmetric key cryptography and creates the requirement that one key of the asymmetric key pair is to be uniquely kept private.

as mentioned in other contexts:
https://www.garlic.com/~lynn/aadsm19.htm#1 Do You Need a Digital ID?
https://www.garlic.com/~lynn/aadsm19.htm#2 Do You Need a Digital ID?
https://www.garlic.com/~lynn/aadsm19.htm#3 Do You Need a Digital ID?

one of the issues from the early 90s associated with x.509 identity digital certificates was that the descriptions frequently concentrated on the process of a certification authority creating a digital certificate and would totally gloss over the business process foundation for public/private key infrastructures (establishing the convention that one key of an asymmetric key pair is to be kept uniquely private).

the business process foundation for public/private key infrastructure and the convention for keeping one of the keys uniquely private is, in fact, totally independent of digital certificates. It is possible to take an existing something you know (pin/password) authentication infrastructure and substitute the registration of a public key in lieu of a pin/password. it is then possible for the relying party to make use of the registered public key to verify a digital signature. Assuming that the other characteristics of public/private key infrastructure have been met, the relying party may infer that the private key was accessed and used in an appropriate manner ... and therefore there is something you have authentication (i.e. the public/private key business process defining the unique access and use of the private key).

in the above example, the fundamental foundation for public/private key business process is that of maintaining the "private key" as private (and has nothing at all to do with digital certificates).

as an adjunct to public/private key business process authentication, there was a business process defined for digital certificates for use in the scenario where the originating party and the relying party have had no prior relationship and the relying party has no recourse to information about the originating party (either locally or by any online mechanism). the digital certificate is analogous to letters of credit from the sailing ship days and was targeted at the offline email environment of the early 80s; aka a call was made to the local electronic post office, email was exchanged, the line was dropped, and the receiving party now must authenticate email received from a totally unexpected source having no prior contact.

another issue raised with the identity digital certificates of the early 90s was that a certification authority would be certifying various identity characteristics and including the certified information in the digital certificate. 3rd party certification authorities would be doing this process long before any relying party was going to depend on the information, and furthermore, the 3rd party CAs might not have any fore-knowledge of what relying parties there might be and what identity information they might require ... so there was a tendency to start overloading identity digital certificates with all possible identity information ... on the off chance some relying party might find it useful for some purpose.

by the mid-90s, it started to become apparent that identity digital certificates overloaded with all sorts of identity information represented a significant privacy (and possibly liability) problem. So you saw, at least among financial institutions, retrenching to relying-party-only certificates
https://www.garlic.com/~lynn/subpubkey.html#rpo

.... effectively containing nothing more than an account number and a public key. however it was usually trivial to demonstrate that such relying-party-only digital certificates were redundant and superfluous ... aka somebody registers their public key with the relying party, the relying party records the public key in an account record, generates a digital certificate and returns it to the key owner. The key owner originates some transaction (that includes an account number), digitally signs the transaction and packages up the transaction, the digital signature, and the digital certificate and sends the triple off to the relying party.

the relying party, receives the triple, extracts the account number from the transaction, retrieves the account record (that includes the public key) and verifies the digital signature with the public key.
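a minimal python sketch of that relying-party flow (using the third-party "cryptography" package; the account registry and function names are made up) ... note that any attached digital certificate plays no part in the verification:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# account records built when public keys were registered
# (account number -> public key on file with the relying party)
accounts = {}

def register_public_key(account_number, public_key):
    accounts[account_number] = public_key

def verify_transaction(account_number, transaction_bytes, signature,
                       certificate=None):
    # the attached certificate (if any) is simply ignored; the on-file
    # public key from the account record is all that is needed
    public_key = accounts[account_number]
    # raises cryptography.exceptions.InvalidSignature on failure
    public_key.verify(signature, transaction_bytes,
                      ec.ECDSA(hashes.SHA256()))
    return True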

the redundant and superfluous nature of such digital certificates in financial transactions was further exacerbated by the fact that a traditional 8583 financial transaction has been on the order of 60-80 bytes. the typical redundant and superfluous relying-party-only digital certificates from the mid-90s were on the order of 6k to 12k bytes. not only were such relying-party-only digital certificates redundant and superfluous, their sole contribution in 8583 financial transactions was to cause extreme payload bloat, increasing typical transaction message size by one hundred times.

the funny thing is that even today you run across descriptions referring to digital signatures being created with digital certificates.

another example of digital certificates is their use in SSL operations
https://www.garlic.com/~lynn/subpubkey.html#sslcert

we were somewhat involved in putting together the business process and various components of the use of SSL for this thing that was going to be called electronic commerce:
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

the predominant SSL use supposedly is a browser validates a digital certificate (using a registered public key that is on file in a trusted public key store maintained by the browser), then checks to see if the domain name is the same as what was typed into the browser, aka "is the website i'm visiting really the website i think i'm visiting". the issue, at the time, was concern about the integrity of the domain name infrastructure.

well, it turns out that a typical certification authority isn't actually the authority for the information being certified. in the case of domain name SSL certificates, the certification authority has to contact the domain name infrastructure to validate that the entity applying for an SSL domain name certificate actually is associated with that domain name (the same domain name infrastructure that has integrity issues that give rise to the need for SSL domain name certificates).

so, somewhat motivated by the certification authority industry, there is this proposal that when somebody applies for a domain name they also register a public key. this will not only improve the integrity of the domain name infrastructure but also reduce processing costs for the certification authority. Currently, a certification authority needs an applicant to supply a bunch of identity information ... which the CA then cross-checks, in an expensive and error-prone process, with the information on file at the domain name infrastructure. If there was an on-file public key, the SSL certificate applicant would just need to digitally sign the application. Then the CA retrieves the online, on-file public key from the domain name infrastructure and validates the digital signature (turning a complex, error-prone and costly identification process into a much simpler, more reliable, and cheaper authentication process).

a catch-22 for the certification authority industry is that if the CAs can retrieve online, on-file public keys in real time from the domain name infrastructure ... in theory there is nothing preventing everybody else from also retrieving online, on-file public keys. Registering public keys in an online, on-file repository goes a long way to eliminating the original justification for having SSL domain name certificates at all ... i.e. as per certification authority industry

1) registering public keys (and using them in various business processes) improves the integrity of the domain name infrastructure (so that the certification authority industry can rely on it for checking the owners of certificate applications). however, one of the original justifications for SSL domain name certificates was concern about the integrity level of the domain name infrastructure. improving that integrity reduces the justification for SSL domain name certificates.

2) everybody being able to retrieve online, on-file, registered, trusted public keys from the domain name infrastructure can eliminate the requirement for getting public keys out of stale, static, redundant and superfluous digital certificates.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be?
Newsgroups: alt.folklore.computers
Date: Mon, 21 Mar 2005 13:43:30 -0700
Brian Inglis writes:
How do you implement read, write, or fetch breaks on addresses using breakpoints, particularly on architectures that supports nested indirect addressing combined with indexing? See other posts on single instruction multidimensional array element fetch. You need to use the address and data bus monitors that were built into mainframes, rarely available on minis, and only recently added to some micros AFAICT. IBM 370s (perhaps 360s) had a similar feature called Program Event Recording that allowed conditions to be set up in control registers to be checked by the microcode and that included DMA I/O accesses AFAIR.

some number of the 370 PER events could be performed using the switches and dials on the front panel of 360s (address compare stop)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

PKI: the end

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI: the end
Newsgroups: sci.crypt
Date: Tue, 22 Mar 2005 07:50:28 -0700
Jean-Luc Cooke writes:
PKI infers only the 2nd of your factors above. PKI doesn't require any biometric or password unless you go out of your way to add it in.

I saw you went into this a bit later in your post with some references to your site. But you're extrapolating requirments from your view of how the technology should be deployed.

What's with the "business process" terminology? PKI isn't a "business process" it's a branch of mathematics. Examples of "business process" are:


asymmetric cryptography is technology. public/private key infrastructure is a business process application of asymmetric cryptography that specifies one of the keys of an asymmetric cryptography key-pair is to be kept consistently private. the convention of consistently maintaining the privacy of a specific key in an asymmetric cryptography key pair is a business process specification.

a relying-party relies on the belief that the business process specification is being followed when assuming that the verification of a digital signature with a public key implies something you have authentication (i.e. some entity is uniquely in possession of the corresponding private key).

the convention of consistently maintaining the privacy and confidentiality of a specific key of an asymmetric key pair is a business process, not a technology. the measures used to maintain the privacy and confidentiality of a private key may be technology. asymmetric cryptography is technology.

the convention of consistently maintaining the privacy and confidentiality of a specific key of a public/private key pair is a business process. the assumption by a relying-party that the verification of some encoded piece of data (called a digital signature) by a specific key of an asymmetric key pair (called a public key) implies something you have authentication is a business process. the business process defines the requirement of consistently maintaining the privacy of a specific key in an asymmetric key pair as part of the business infrastructure where a relying-party can assume that the verification of a "digital signature" with a "public key" implies something you have authentication (from the 3-factor authentication paradigm) dependent on some entity being uniquely in possession (access and use) of the corresponding "private key".

there might be other business process mechanisms that might also be specified as part of a 3-factor authentication paradigm (aka a specific authentication infrastructure may not include three unique factors for determining authentication, but a specific authentication infrastructure may be characterized using the 3-factor authentication paradigm).

as part of a basic public/private key authentication infrastructure, the relying parties are assuming that the business process requirements for consistently maintaining the privacy and confidentiality of a specific key (the "private key") are being met (just because they are told to assume it).

A relying party might also be told that they could assume that as part of a specific authentication infrastructure, a "private key" is uniquely housed in a specific kind of hardware token (say as opposed to an encrypted file). In such a case, the relying party might infer a higher level of integrity and confidence in the associated authentication events (and for example, the relying party might be willing to approve larger value transaction amounts than they would if they assumed the overall infrastructure had lower integrity characteristics).

The residence of a private key in a hardware token can be considered technology. The ability for a relying party to assume
"a private key is housed in a specific kind of hardware token with a specific level of hardware integrity and that the specific private key is, in fact, kept unique and private to a specific entity"

is a characteristic of the relying parties belief in the associated (authentication) business process operation.

Various kinds of authentication business process requirements that a relying party could reliably assume to exist might be:
• specific key of an asymmetric cryptography key-pair is consistently and reliably kept private and confidential.

• specific key is uniquely and reliably housed in a hardware token of specific integrity characteristics, that the key(s) were generated internally inside the hardware token and there are no provisions for a specified "private key" to be exported from the token

• a specific hardware token only operates in a specific way when a pin or password has been provided to the token

• a specific hardware token only operates in a specific way when a biometric value is matched to a template inside the token


the ability for a relying party to assume that "the verification of a digital signature with a specific public key" might imply any of the above conditions to be true is a characteristic of the business process ... not just the technology.

a business process can be any operation or sequence of steps that the parties have agreed it to be. keeping a specific key of an asymmetric cryptography key pair reliably and consistently private and confidential isn't an attribute of the asymmetric cryptography technology, it is an attribute of the public/private key infrastructure business process (which makes use of asymmetric cryptography technology).

in my original posting ... i may have created some confusion by sometimes referring to an authentication infrastructure within the context of a 3-factor authentication paradigm ... by just typing 3-factor authentication ... w/o intending to mean that all three factors were actually involved in any specific authentication infrastructure instance and/or deployment.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

PKI: the end

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI: the end
Newsgroups: sci.crypt
Date: Tue, 22 Mar 2005 08:18:32 -0700
Jean-Luc Cooke writes:
VeriSign owns Network Solutions. Details, I know.

at the time we were doing the stuff that was to become called e-commerce:
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

we were asked to work with this little client/server startup that wanted to have their server do payments and they had this technology that they called SSL.

as part of working out the business process for e-commerce relying on SSL technology, we did detailed walk-thrus and audits of the various business processes ... including the major entities that were supplying these things called SSL domain name digital certificates.
https://www.garlic.com/~lynn/subpubkey.html#sslcert

in general, none of the trusted 3rd party certification authorities (whether issuing SSL domain name digital certificates or certifying other kinds of information) were the actual authoritative agency for the information they were certifying.

much later, one of the major trusted 3rd party certification authorities that we had done a detailed audit on, did buy one of the authoritative agencies for domain name ownership information. It was also in that time-frame that you found a big push for having a domain name applicant register a public key at the same time they registered their domain name.

just because one of the 3rd party certification authority business operations purchased one of the authoritative agencies responsible for domain name ownership didn't negate the fact that their business operation involved in issuing SSL domain name certificates had to contact various authoritative agencies responsible for domain name ownership information (including the fact that not all domain name ownerships were handled thru a single domain name infrastructure business operation).

in any case, the business operations responsible for issuing SSL domain name certificates were still having to contact (one or more) authoritative business operations responsible for the information being certified. At the heart of it was that there was identification information included with the domain name registration. There was also identification information included with the SSL domain name certificate application. The problem facing the SSL domain name certificate (certification authority) industry was
• integrity issues with the operation of the domain name infrastructure that might affect the validity of any certification and therefore any SSL domain name certificate that was issued (issues that also contributed to the requirement for SSL domain name certificates)

• error-prone, complex, and expensive process of matching the identification information provided with the SSL domain name certificate application to the identification information provided with the domain name application.


A solution (in part backed by the certification authority business/industry) was that domain name applicants register a public key at the same time they registered the domain name. This would
• improve the overall integrity of the domain name infrastructure by requiring that various business processes be digitally signed by the domain name owner (and the domain name infrastructure could verify with the on-file public key)

• the certification authority industry then could require that SSL domain name certificate applications be digitally signed. The certification authority industry then could change an error-prone, complex and expensive identification business process into a much simpler and less expensive authentication business process (by retrieving the on-file public key from the domain name infrastructure and verifying the digital signature on the SSL domain name certificate application).


--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

PKI: the end

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI: the end
Newsgroups: sci.crypt
Date: Tue, 22 Mar 2005 10:12:04 -0700
Jean-Luc Cooke writes:
What's with the "business process" terminology? PKI isn't a "business process" it's a branch of mathematics. Examples of "business process" are: - "mass production", - "universal pricing", and - "third world labour producing goods to the first world"

ref:
https://www.garlic.com/~lynn/2005e.html#24 PKI: the end
https://www.garlic.com/~lynn/2005e.html#25 PKI: the end

aka ... "public" and "private" are business process constructs.

a digital certificate may be implemented using asymmetric cryptography technology ... but what an identity x.509 certificate represents is a business process definition. the certification authority operation and its certifying of information, as well as many of the processes surrounding identity, are business processes.

during one of the early audits we did on one of the certification authority businesses, they commented that they had started out thinking that certification authority was mostly mathematics but they quickly found out that it was a service business and that possibly 95% of the business operation involves administrative and business process procedures and that technology is only a very small portion.

and for a little topic drift ... another flavor of 3-factor authentication discussion:
https://www.garlic.com/~lynn/aadsm19.htm#1 Do You Need a Digital ID?
https://www.garlic.com/~lynn/aadsm19.htm#2 Do You Need a Digital ID?
https://www.garlic.com/~lynn/aadsm19.htm#3 Do You Need a Digital ID?
https://www.garlic.com/~lynn/aadsm19.htm#4 Do You Need a Digital ID?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

PKI: the end

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI: the end
Newsgroups: sci.crypt
Date: Tue, 22 Mar 2005 10:34:57 -0700
Anne & Lynn Wheeler writes:
during one of the early audits we did on one of the certification authority businesses, they commented that they had started out thinking that certification authority was mostly mathematics but they quickly found out that it was a service business and that possibly 95% of the business operation involves administrative and business process procedures and that technology is only a very small portion.

also, as a further aside, the major trusted 3rd party certification authority at the time asked if we knew of any organization (that was already in a serious, industrial-strength service business) that might be interested in taking on their complete operation on an outsourced basis.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Computerworld Article: Dress for Success?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computerworld Article: Dress for Success?
Newsgroups: alt.folklore.computers
Date: Tue, 22 Mar 2005 19:14:18 -0700
Brian Boutel writes:
When my son got married, he refused to wear a tie for the ceremony, choosing a beautiful suit with an open-necked shirt, as did my other son, who was best man. The bride decided to be traditional, so wore a lacy and bejewelled gown by Reem Acra, while her father insisted on formality and wore a morning suit. This left me in a bit of a dilemma, so I compromised on a suit with a tie. Black tie was specified for the big evening dinner, but most people removed, or at least untied, them when the dancing started.

did you know you can get a boot cut tux made?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Using the Cache to Change the Width of Memory

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Using the Cache to Change the Width of Memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 23 Mar 2005 15:04:01 -0700
Brian Inglis writes:
Although it wouldn't surprise me if someone, somewhere wasn't still running the software emulators on crufty old binaries, rather than pay for redevelopment. I've also known managers whose primary function seemed to be to block any possible replacement for some dusty old deck they created when they were a youngster.

when Amdahl was starting his mainframe clone company in the early 70s, he gave a talk at mit. he was asked about how he got venture funding. he said something about his business plan pointing out that over $100b had already been spent on 360 applications and that even if ibm walked away from 360 architecture at that moment (could be construed as a thinly veiled reference to the radically different FS architecture being worked on), customers would still be buying 360 processors 30 years later (i.e. 2000).

the issue was that a large number of applications were meeting requirements and the cost of a rewrite, as well as the possible opportunity cost (putting scarce resources to work on rewriting a working application rather than getting out something brand new), wasn't less than any expected cost savings.

in some cases, there was a significant risk issue also considered ... i.e. the current implementations are known to do the job ... there have been numerous mainframe "modernization" (rewrite) projects where hundreds of millions (and even billions) have been spent ... and the projects failed.

misc. past FS (future system) posts:
https://www.garlic.com/~lynn/submain.html#futuresys

in the late 60s, i saw a case of a university administrative payroll application that appeared to have a problem. It had started out as a 407 plug-board application ... which got translated by something (? I don't remember the details), which got translated to 709 cobol, which got translated to 360 cobol. at the end of the program it was still outputting emulated 407 sense settings on the printer. one day the operators noticed that the 407 values were different than they had been. everything was stopped and the whole datacenter was put on hold while they tried to contact somebody in administrative dataprocessing to see what might need to happen (did it fail and payroll would have to be run again?). They finally found somebody, but couldn't find anybody that knew what the 407 values were supposed to indicate ... so the decision was to run the program again ... and if the values came out the same, assume everything was ok.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Computerworld Article: Dress for Success?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computerworld Article: Dress for Success?
Newsgroups: alt.folklore.computers
Date: Wed, 23 Mar 2005 18:22:47 -0700
Greg Menke <gregm-news@toadmail.com> writes:
Yeah, those hats also include holders for 2 beer cans and a plastic tube that the wearer can use to siphon beer into their mouth. Texan formal wear- cool. :)

how far is it from austin to dallas?

the texans claimed it was three six-packs.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Public/Private key pair protection on Windows

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Public/Private key pair protection on Windows
Newsgroups: alt.computer.security
Date: Fri, 25 Mar 2005 08:29:50 -0700
"Edward A. Feustel" writes:
For best results, generate your private key in a token (2048 bits or more preferred) and NEVER export it to a Windows machine. Only insert the token in the USB port when you must and only as long as it must be there. Further, be certain that any buffering of the key by Windows is erased when you are done using the private key, i.e., close the session, or turn off the power for at least 20 seconds. This will minimize the chances that someone else will acquire access to your key or be able to tale end your sessions. Ed

besides the rsa key-pair ... there is also a specification for ecdsa; related thread reference with pointer to nist fips186-2 ecdsa:
https://www.garlic.com/~lynn/2005e.html#22 PKI: the end

because of various issues with pc vulnerabilities .... there is the EU FINREAD standard ... misc posts:
https://www.garlic.com/~lynn/subintegrity.html#finread

... where you have a separate unit connected to the pc, with a display and keypad, that directly talks to the token ... for accurately displaying the transaction and safely entering a token's pin/password.

use of a hardware token addresses direct copying of the private key. EU FINREAD attempts to address a couple of additional vulnerabilities: a compromised pc capturing the token's pin/password as it is keyed in, and a compromised pc displaying one transaction while actually sending a different one to the token for signing.

there is also a dual-use attack.

digital signature infrastructure is primarily one mode of 3-factor authentication ...
https://www.garlic.com/~lynn/subintegrity.html#3factor
something you know
something you have
something you are


where a relying party that successfully validates the digital signature can assume that the originating party is in possession of the corresponding private key (aka something you have authentication).

A digital signature authentication scheme may be a flavor of challenge/response (countermeasure for replay attacks) ... where the relying party transmits some random bits which the other end digitally signs and returns the digital signature. the relying party then validates the digital signature with the public key ... which is proof that the other end is in possession of the corresponding private key (aka something you have authentication).
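a minimal python sketch of that challenge/response flavor (assumes the third-party "cryptography" package; names are made up):

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# setup: the claimant's public key was previously registered with the
# relying party; the private key stays with the claimant/token
private_key = ec.generate_private_key(ec.SECP256R1())
registered_public_key = private_key.public_key()

# relying party: send an unpredictable challenge (replay countermeasure)
challenge = os.urandom(32)

# claimant/token: digitally sign the challenge with the private key
response = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# relying party: verify with the registered public key; success implies
# possession of the corresponding private key (something you have)
registered_public_key.verify(response, challenge, ec.ECDSA(hashes.SHA256()))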

Some infrastructures have also looked at use of public/private key digital signatures to imply more than simple authentication ... aka that verification of a digital signature is equivalent to a human signature ... which not only implies something you have authentication, but also implies something similar to a human signature, aka implication of reading, understanding, approving, agreement, and/or authorization.

A dual-use attack is when the same private key is used for both 1) authentication events where random bits (that are never viewed, read, or understood) are digitally signed and 2) human signature events where there isn't some additional proof that some human has actually read, understood, approved, agreed, and/or authorized the related bits being digitally signed.

So a dual-use attack is for some attacker, in a supposedly purely authentication operation, to transmit some bits for digital signing that purport to be random ... when the bits actually can be interpreted to represent some obligation, as in a human signing event. A possible analogy is in the MASH show where Radar is getting the col. to sign stuff where the col. isn't actually reading what he is signing.
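a toy python sketch of the dual-use issue (same third-party "cryptography" package; the obligation text is made up) ... the point being that the signing routine has no way to tell random bits from meaningful bits:

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

def sign_authentication_challenge(challenge_bytes):
    # signs whatever "random" bits the relying party sent, sight unseen
    return private_key.sign(challenge_bytes, ec.ECDSA(hashes.SHA256()))

# honest authentication: the challenge really is random
sig = sign_authentication_challenge(os.urandom(32))

# dual-use attack: the "challenge" is actually an obligation
obligation = b"i agree to pay the attacker $1,000,000"
sig = sign_authentication_challenge(obligation)

# the attacker can now present (obligation, sig) and it verifies,
# even though no human ever read or approved the obligation
public_key.verify(sig, obligation, ec.ECDSA(hashes.SHA256()))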

Part of the issue may be the semantic ambiguity of the term "digital signature" ... where the use of the word "signature" is automatically taken to imply some relation to "human signature" ... even tho "digital signature" is commonly used in situations where there is no implication at all of the equivalent conditions for a human signature (read, understood, agreed, approved, and/or authorized).

somewhat unrelated, hardware tokens can also be considered a phishing countermeasure. A lot of phishing is social engineering, convincing people to perform an electronic act that makes them vulnerable (divulging their userids and passwords and other information that enables things like account theft and/or id-theft ... where transactions and/or other obligations happen w/o the person's knowledge).

When a hardware token is also required, it is probably going to be somewhat more difficult to convince a victim to mail off their hardware token. It still doesn't eliminate the social engineering where the attacker convinces the victim to directly execute the transactions for the benefit of the crook (however, it does somewhat minimize the ability for the crook to do their own fraudulent transactions w/o the owner's knowledge).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Stop Me If You've Heard This One Before

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Stop Me If You've Heard This One Before
Newsgroups: alt.folklore.computers
Date: Fri, 25 Mar 2005 09:54:48 -0700
Peter Flass writes:
I recently got into this with someone else. I don't have the reference handy, though I can look it up, but IBM has a manual on line with lots of 3270 keyboard layouts, including, IIRC, the 3278. Of ouurse there isn't "a" 3278 keyboard layout, but you probably want something like the"US text keyboard." Let me know offline if you can't find what you want, and I should be able to track down a link to the manual or a hardcopy of it.

we got into a battle with kingston, when they were bringing out the 3278, about the PF keys moving to the top (vis-a-vis the 3277). their argument was that the 3278 was designed primarily for data entry (clerks) rather than interactive computing or programming.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Stop Me If You've Heard This One Before

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Stop Me If You've Heard This One Before
Newsgroups: alt.folklore.computers
Date: Fri, 25 Mar 2005 10:22:03 -0700
originally, there was a 3278 with no PFKEYS at all; they had taken the PFKEYS location on the 3277 and turned it into a numeric keypad for data entry applications.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Thou shalt have no other gods before the ANSI C standard

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Thou shalt have no other gods before the ANSI C standard
Newsgroups: sci.crypt,alt.folklore.computers
Date: Sun, 27 Mar 2005 10:16:24 -0700
Andrew Swallow writes:
I also found out about XML (Extended Markup Language) format. That may be a modern alternative to RTF format.

the original was GML (generalized markup language) invented in '69 at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

... the letters G, M, and L actually stand for the initials of the last names of three people at the science center.

Madnick had done the script command in the mid-60s for CMS ... which had "dot" commands for document formatting. GML tag support was then added to the script command. The difference between the dot commands and GML was that dot commands explicitly defined document formatting while GML tags could label the data independent of the specific formatting rules for that type of data. This enabled the tagging/labeling of data to be used for purposes other than pure formatting. There were loads of documents done in script and later script/gml in the 70s.

this was eventually standardized in ISO as sgml (standards work started in the late '70s; ISO 8879 was published in 1986)
https://www.garlic.com/~lynn/submain.html#sgml

there have been some postings on the web showing the evolution of early html from sgml origins ... highlighting various waterloo script/gml documents at cern.

cern and slac were somewhat sister organizations sharing applications and development of some of the applications. they both had large vm/cms installations. univ. of waterloo was also a large vm/cms shop and had done some number of enhanced and/or alternative implementations of various standard vendor vm/cms products (this was still from the period of open source ... before the heavier trend into object-code-only). misc. references:

early history of html
http://infomesh.net/html/history/early/
a history of scientific text processing at cern
http://ref.web.cern.ch/ref/CERN/CNL/2001/001/tp_history/

somewhat as an aside ... slac then had the first webserver in the US.

the early world wide web at slac
http://www.slac.stanford.edu/history/earlyweb/history.shtml

somewhat as a pure aside, the current w3c offices in cambridge are only a couple blocks from the old science center offices.

some subject drift ... some recent xml posts in comp.databases.theory
https://www.garlic.com/~lynn/2004l.html#72 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004m.html#3 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004n.html#11 XML: The good, the bad, and the ugly
https://www.garlic.com/~lynn/2004n.html#12 XML: The good, the bad, and the ugly
https://www.garlic.com/~lynn/2004p.html#38 funny article
https://www.garlic.com/~lynn/2004q.html#6 XML Data Model
https://www.garlic.com/~lynn/2005.html#25 Network databases
https://www.garlic.com/~lynn/2005.html#26 Network databases
https://www.garlic.com/~lynn/2005.html#27 Network databases
https://www.garlic.com/~lynn/2005.html#29 Network databases

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Thou shalt have no other gods before the ANSI C standard

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Thou shalt have no other gods before the ANSI C standard
Newsgroups: sci.crypt,alt.folklore.computers
Date: Sun, 27 Mar 2005 13:20:28 -0700
Anne & Lynn Wheeler writes:
cern and slac were somewhat sister organizations sharing applications and development of some of the applications. they both had large vm/cms installations. univ. of waterloo was also a large vm/cms shop and had done some number of enhanced and/or alternative implementations of various standard vendor vm/cms products (this was still from the period of open source ... before the heavier trend into object-code-only).

some open source topic drift.

in the '60s you found lots of software being distributed for free, in many cases along with the source.

the federal gov. was bearing down on the big mainframe vendor about charging for the computer but giving away the software ... as a form of bundling. so june 23rd, 1969 was the big "unbundling" announcement ... helping to satisfy the federal gov.'s wishes. application software started being charged for separately from the computer hardware. kernel software continued to be free (or bundled) on the theory that the kernel was part of being able to operate the hardware.

one of the projects i worked on as an undergraduate was reverse engineering the mainframe channel interface and building our own channel board and putting it in an interdata/3 that was programmed to emulate a mainframe telecommunication controller. this got written up someplace as the four of us spawning the pcm/oem/clone mainframe controller business.
https://www.garlic.com/~lynn/submain.html#360pcm

later the future system project was spawned to create a brand new mainframe product offering ... radically different from the existing one. one of the main driving factors behind the FS project was the pcm/oem/clone controllers ... the idea was that FS would provide a degree of integration between all the system components that would make it extremely difficult for individual pieces to be substituted by other vendors.
https://www.garlic.com/~lynn/submain.html#futuresys

in the early 70s, gene Amdahl gave a talk at the mit auditorium about starting his new mainframe clone computer company. when asked about the VC business justification, one of his points was that there had already been at least $100b spent on software applications by customers and even if ibm totally walked away from the existing mainframe business that day (can be construed as a veiled reference to the future system project), there would still be customer demand for buying 360/370 mainframes thru at least 2000. there are various other rumors that what prompted gene to leave and start his clone computer company was the FS project ... he wanted to build a better, faster 360 and disagreed with the FS direction.

So by the time gene started shipping his clone mainframe, there started to be some pressure to take a new look at whether to price kernel software. You were just starting to see the leading edge of hardware technology where it was becoming significantly cheaper to design and manufacture a new computer than it was to design and develop a new operating system (kernel). Up until that time, the majority of the operating systems had been mostly proprietary to the mainframe vendors. You were now just starting to see hardware vendors producing a new computer and not wanting the expense of also doing a whole operating system from scratch.

about this time they were deciding whether to make my operating system resource manager a product. I got to do it almost like a one-person startup; algorithms, architecture, design, development, coding, testing, validation, benchmarking, documentation, teaching classes, releases, maintenance, business cases, pricing, etc ... except i got to do it within a large corporate infrastructure and needed to interface with the established processes. So the resource manager got elected to be the first guinea pig for pricing kernel software. i got to spend some amount of time over a six-month period doing business, pricing and forecasting stuff for kernel priced software. This particular exercise resulted in the policy that kernel software could be priced (analogous to application software) as long as it wasn't directly required for hardware support (aka stuff like device drivers).

Over the next several years, more and more stuff fell into the "priced" category and less and less stuff into the "free" category ... until the policies had changed so that the complete kernel was priced. Part of the issue was that for some components, independent pricing resulted in billing costs (given the billing processes at the time) comparable to or larger than the actual revenue stream.

also, with pricing for all components, there started to be a big push for object-code-only ... no more shipping source and/or using source maintenance processes.

in this period there were studies that claimed things like there were as many lines of "kernel" code (enhancements) on the share/waterloo "tape" (distribution) as there were (lines of code) in the base kernel product shipped directly from the vendor.

so much of the '80s and 90s was object-code-only and priced software (as opposed to the earlier period of freely distributed software and source).

misc. posts related to resource manager:
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock

and benchmarking/validation ...
https://www.garlic.com/~lynn/submain.html#bench

some topic drift about pricing/forecasting .... there was a floor limit from the federal gov. that the price had to cover the costs ... people, design & development, ongoing support, etc. A major part was upfront costs, which were then amortized over per-unit sales. Given some experience and a lot of data, typically there was a "low", "medium" and "high" price selected and then a total number of unit sales forecast at each price (in part to see if there was any price elasticity in the market). Each price level times the forecasted market size had to at least cover all the (including significant upfront) costs.

there were a number of other pricing guidelines. in the mid-70s one of the reasons given for killing the VAMPS (5-way smp) project
https://www.garlic.com/~lynn/submain.html#bounce

was that we could only show something like $8b-$9b total revenue over five years ... and the supposed corporate requirement for any distinct mainframe offering was a minimum of $10b revenue over five years (if you couldn't show at least $10b revenue, it wasn't worth doing).

there was a totally different problem that showed up with the kernel software pricing policy. I was designing the VAMPS 5-way smp architecture and also including some of the design features in the resource manager code. The resource manager shipped as a standard kernel product and VAMPS was killed. However, it was later decided to do a more conventional, purely software SMP implementation. in VAMPS i got to have some latitude with implementing SMP constructs in the microcode of the machine. This required some remapping when it was decided to do a purely software-only kernel implementation supporting SMP.

Then it came time to ship the release with SMP support in it. It is fairly obvious that kernel multiprocessor support is directly supporting hardware features and therefore, according to the policy at the time, had to be "free". The problem was the design and implementation had been done assuming a lot of the code in the resource manager, which was already shipping to customers as "priced" software. Having "free" software with a prerequisite on "priced" software was a violation of the pricing policy for kernel software. The eventual result was that something like 80-90 percent of the code in the resource manager was repackaged as part of the "free" kernel.
https://www.garlic.com/~lynn/subtopic.html#smp

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be?
Newsgroups: comp.arch.arithmetic,comp.arch,alt.folklore.computers
Date: Mon, 28 Mar 2005 07:57:15 -0700
Peter Flass writes:
Sorry to disappoint you. I first started working with PL/I about 1966, and it wasn't my first language. Let the young folks have the new ideas.

i have vague memories of ibm coming by the university and demonstrating the as-yet-unreleased PLI product ... but i don't think it was until mid or late '67. they had loaded it from tape onto a 2314 and left it there for a week while they gave talks and demos. they then were careful to make sure it was all scratched before they left. later there was some issue raised about whether the pack had been backed up during the period that PLI had been loaded.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be?
Newsgroups: comp.arch.arithmetic,comp.arch,alt.folklore.computers
Date: Mon, 28 Mar 2005 10:35:11 -0700
iain-3 writes:
So here's a question for you: what sort of machine instruction would be useful to tell the CPU how to do a multidimensional interpolated table lookup? Since it's probably got too many inputs, and maybe too many outputs, and too many variations (dimensions, interpolation datatype, interpolation scheme, etc), it should probably be factored into a few ops. What are those ops?

not a table ... but for tree stuff, there is luther woodrum's tree support that got put in the mainframe

sorting instructions:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/A.7?SHELF=EZ2HW125&DT=19970613131822#HDRAA4H1

tree format
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/A.7?SHELF=EZ2HW125&DT=19970613131822#HDRAA4H1

example of use of sort instructions
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/A.7.2?SHELF=EZ2HW125&DT=19970613131822&CASE=

compare and form code word
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/7.5.21?SHELF=EZ2HW125&DT=19970613131822

update tree
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/7.5.99?SHELF=EZ2HW125&DT=19970613131822

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

xml-security vs. native security

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: xml-security vs. native security
Newsgroups: sci.crypt
Date: Mon, 28 Mar 2005 10:51:31 -0700
securenix writes:
I am involving in the development of an application framework focusing on web services. Messages excnahged among framework principals are SOAP messages.

For security issues like encrypting, signing, etc. I am planing to use native byte[]-oriented cryptographic techniques. But I see around xml-security solutions (e.g. WS-Security, etc.). I wonder what advantages the xml-security bring us over native-security solutions? Do native-solutions cause any bottleneck for web services?


an earlier issue with asn.1 vis-a-vis xml for digital signatures was in financial transactions.

there were a number of applications that asn.1 encoded the transaction for digitally signing, and then transmitted the transaction in its basic format along with the digital signature. the receiver would take the transmitted transaction, re-encode in asn.1 and then check the digital signature.

part of the issue was that many financial transactions were/are on the order of 60-80 bytes and that asn.1 encoding would significantly increase the size ... as well as the fact that a lot of intermediate legacy processes weren't prepared to handle a transaction in asn.1 encoded format.

the objective was to simply add message integrity and origin authentication to the existing financial infrastructure ... with a digital signature ... w/o having to totally scrap the existing legacy financial transaction infrastructures.

the problem with using xml encoding (at the time) rather than asn.1 was that the xml encoding rules weren't deterministic ... aka the origin took a basic financial transaction message and encoded it before digitally signing ... and the destination had to take the same financial transaction message and (re)encode it and come up with the same, exact bit-stream for the digital signature to verify (which couldn't be guaranteed with xml at the time).

FSTC created FSML to provide deterministic XML encoding rules for digitally signed financial transactions ... this was later donated to w3c and absorbed into the XML digital signature specification.
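a minimal python sketch of the deterministic-encoding requirement (third-party "cryptography" package again; sorted-key json is used here only as a stand-in for deterministic encoding rules like asn.1/der or fsml; the transaction fields are made up):

import json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def canonical_encode(txn):
    # stand-in for deterministic encoding rules: the same abstract
    # transaction must always produce the exact same byte stream
    return json.dumps(txn, sort_keys=True, separators=(",", ":")).encode()

private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

txn = {"acct": "12345678", "amount": "19.95", "currency": "USD"}

# origin: sign the canonical encoding but transmit the transaction in
# its existing legacy format (no payload bloat on the wire)
signature = private_key.sign(canonical_encode(txn), ec.ECDSA(hashes.SHA256()))

# destination: re-encode the received transaction and verify; this only
# works because re-encoding is guaranteed to reproduce the same bytes
# (which couldn't be guaranteed with the xml encoding rules at the time)
public_key.verify(signature, canonical_encode(txn), ec.ECDSA(hashes.SHA256()))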

a separate issue from the mid-90s was not only the xml encoding rules of a financial transaction (to avoid the payload bloat of transmitting the encoded format as well as replacing all the legacy financial transaction software) ... but also trying to use digital signatures for financial transactions ... where the origin also included a digital certificate with every transmission (even tho the destination/relying-party was already in possession of a registered public key for the origin). In the mid-90s digital signature financial transaction pilots, the typical size of the certificates used was in the range of 6kbytes to 12kbytes.

given a base financial transaction of 60-80 bytes, not only were the appended certificates redundant and superfluous (since the destination already had a registered public key for the originator) but their apparent primary purpose was to cause enormous payload bloat and increase the financial transaction message size by a factor of one hundred times.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

xml-security vs. native security

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: xml-security vs. native security
Newsgroups: sci.crypt
Date: Mon, 28 Mar 2005 12:36:59 -0700
Bruce Stephens <bruce+usenet@cenderis.demon.co.uk> writes:
My guess is that it's the same kind of difference as with OSI: rather than checking the signature of the bytes (in BER) you got over the wire, you can encode the abstract value in a particular way (DER) and check the signature of that.

which OSI is this ... open system interconnect? ... ISO (international standards organization) model for networking?

ISO has standards for certificates, including requirements for including ASN.1 encoded digital certificates with the transmission of digitally signed financial transactions ... previous reference:
https://www.garlic.com/~lynn/2005e.html#38 xml-security vs. native security

misc. other references:
https://www.garlic.com/~lynn/subpubkey.html#rpo

OSI (as in ISO's OSI model) evolved in the late 70s and early 80s concurrently with the internetworking protocol ... the arpanet/internet had the great switch-over from an early homogeneous network (much more OSI-model like) to internetworking on 1/1/83.

in the late '80s several govs. had mandates that the internet be eliminated and the whole thing switched to OSI (US federal government had various "GOSIP" mandates).

in the late '80s I was involved with trying to get HSP (high speed protocol) accepted as a work item in x3s3.3 (the ISO-chartered ansi standards body responsible for networking related standards). at the time, ISO had a mandate that networking related standards couldn't deviate from / violate the OSI model.

HSP would:

1) go directly from transport/level4 to the mac/lan interface
2) support internetworking (aka tcp/ip)
3) support the mac/lan interface

HSP was rejected based on the ISO mandates because

1) it violated the OSI model by skipping the transport/network, level 3/4 interface

2) it violated the OSI model by supporting tcp/ip ... aka OSI was a traditional private homogeneous networking model and didn't include provisions for internetworking, gateways, etc. ... and therefore HSP violated the OSI model by supporting internetworking

3) the mac/lan interface violates the OSI model, with the mac/lan interface corresponding to approx. the middle of layer 3. Anything supporting the mac/lan interface violates the OSI model. HSP supported the mac/lan interface, therefore HSP violated the OSI model.

misc. past comments:
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

for a little topic drift ... an unrelated recent post on xml
https://www.garlic.com/~lynn/2005e.html#34 Thou shalt have no other gods before the ANSI C standard

misc other xml, html, sgml, gml posts
https://www.garlic.com/~lynn/submain.html#sgml

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

xml-security vs. native security

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: xml-security vs. native security
Newsgroups: sci.crypt
Date: Mon, 28 Mar 2005 15:54:23 -0700
Bruce Stephens <bruce+usenet@cenderis.demon.co.uk> writes:
That may be so. I was thinking of the ITU standards (some of which may be reflected in ISO standards, too). In particular, those for X.500 strong authentication and signed operations (X.509, X.511).

They used to specify that signatures were of the DER encoding of the abstract values (i.e., it would be OK for a recipient to decode the received values, reencode them using DER, and then check the signature). But more recently it seems that the signature is supposed to be checked against the actual received encoding. (Presumably there's still some place for DER, but it seems a much smaller one, IMHO.)


i was at the acm sigmod conference in the very early 90s and somebody asked what was this x.500/x.509 stuff that was happening in ISO and somebody else explained that it was a bunch of networking engineers attempting to reinvent 1960s era database technology.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

xml-security vs. native security

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: xml-security vs. native security
Newsgroups: sci.crypt
Date: Tue, 29 Mar 2005 06:19:43 -0700
Bruce Stephens <bruce+usenet@cenderis.demon.co.uk> writes:
And yes, the whole thing (most definitely including the very relevant X.509) always seems way too complex to me. (I still have no idea what non-repudiation is supposed to mean in a certificate.)

well, in the mid-90s ... there was some push that if the consumer digitally signed a financial transaction ... and if the relying party (merchant) could find any certificate for the consumer's public key that contained the non-repudiation bit ... it would shift the burden of proof from the merchant to the consumer in any dispute. It appeared to be a ploy trying to get the merchants to underwrite the enormous cost of a PKI deployment for consumer certificates (since shifting the burden of proof in disputes represents a significant cost saving for merchants).

besides the whole issue that the verification of a digital signature simply implies some form of something you have authentication (i.e. the verification of a digital signature implies that the originator has access to and used the corresponding private key) ... and by itself can't carry with it the meaning of a human signature (observed, read, understood, agrees, approves, authorizes) .... there is the whole issue that the standard PKI related protocols have no provision for proving which certificate the originator of a digitally signed message actually included in a transaction.

assuming it did come to have any meaning, one attack is for a merchant to convince some certification authority to issue certificates with the non-repudiation bit turned on ... for all public keys that the merchant happened to encounter. Since the attached certificate is not normally part of the signed message in standard existing PKI protocols ... there is no proof as to which certificate a consumer might have actually appended to any digitally signed message.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

xml-security vs. native security

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: xml-security vs. native security
Newsgroups: sci.crypt
Date: Tue, 29 Mar 2005 07:45:18 -0700
the EU finread "reader" attempts to address some of the issues related to human signatures as part of a non-face-to-face authentication environment. misc. posts mentioning finread:
https://www.garlic.com/~lynn/subintegrity.html#finread

I guess it is somewhat modeled after the point-of-sale terminals you see at check-out counters. the issue of authentication (something you have, the card, and something you know, the pin) is orthogonal to whether you actually read and agreed with what is being authenticated (aka it displays the transaction amount and asks if you agree, then you push the "yes" button). these POS terminals supposedly have security modules and won't fraudulently display incorrect values and/or simulate a "yes" button press that didn't happen.

while the basic EU finread is supposedly required to perform similar operations ... if a relying-party receives a transaction that was digitally signed by a hardware token using a finread reader .... there is nothing that is part of the standard that proves that a finread reader was actually used (as opposed to any other kind of reader).

For instance, 1) there is no registry of finread readers that can be cross-checked and 2) the finread terminal isn't required to also digitally sign the transaction. If the finread reader co-signed the transaction using its own private key (with a public registry of the corresponding public keys), the relying party would have not only evidence that the consumer was authenticated ... but also evidence tying the transaction back to a specific finread reader. Given the certified operating characteristics of finread readers .. when a relying-party verifies a finread digital signature ... it implies that a specific finread reader was involved AND that the finread reader followed specific conventions.
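
a toy sketch of what such a co-signing check could look like (python using the pyca/cryptography package; the reader registry, the on-file consumer key, and all names/values are made up for illustration, and ed25519 is used only to keep the example short):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# hypothetical registry of certified reader public keys and on-file consumer keys
reader_key = Ed25519PrivateKey.generate()
consumer_key = Ed25519PrivateKey.generate()
READER_REGISTRY = {"reader-0001": reader_key.public_key()}
CONSUMER_KEYS = {"acct-42": consumer_key.public_key()}

txn = b"acct-42 pay merchant 19.95"
consumer_sig = consumer_key.sign(txn)   # consumer signs in the reader
reader_sig = reader_key.sign(txn)       # reader co-signs the same bytes

def relying_party_check(txn, consumer_sig, reader_sig, acct, reader_id):
    try:
        CONSUMER_KEYS[acct].verify(consumer_sig, txn)       # something-you-have authentication
        READER_REGISTRY[reader_id].verify(reader_sig, txn)  # evidence a certified reader was used
        return True
    except (KeyError, InvalidSignature):
        return False

print(relying_party_check(txn, consumer_sig, reader_sig, "acct-42", "reader-0001"))   # True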

some past posts mentioning non-repudiation services (as opposed to a non-repudiation "flag")
https://www.garlic.com/~lynn/aadsm11.htm#5 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#14 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm12.htm#59 e-Government uses "Authority-stamp-signatures"
https://www.garlic.com/~lynn/aadsm16.htm#18 Non-repudiation (was RE: The PAIN mnemonic)
https://www.garlic.com/~lynn/aadsm18.htm#0 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#1 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/2003h.html#38 entity authentication with non-repudiation

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Actual difference between RSA public and private keys?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Actual difference between RSA public and private keys?
Newsgroups: sci.crypt
Date: Sat, 02 Apr 2005 09:15:26 -0700
"Daniel Harms" writes:
Is there a difference between RSA public and private keys other than that one is dubbed (and treated as) private and the other public?

the technology is asymmetric key cryptography ....

the business process is to designate one of the asymmetric key-pair as private and the other of the asymmetric key-pair as public. The key designated "public" is made available to other parties. The key designated "private" should never be divulged.

The business process use of digital signatures is that a hash of some message/transaction is encoded with the "private" key and the message/transaction is transmitted along with the digital signature. The recipient can use the "public" key (that has been made available) to validate the digital signature ... which would 1) indicate whether the message has been altered since it was originally signed and 2) authenticate the origin.

in 3-factor authentication paradigm

• something you know
• something you have
• something you are

the validation of a digital signature with the public key implies something you have authentication .... aka it implies that the originator has access to and was able to make use of the corresponding "private" key.

the foundation of the business process is based on the ability to keep the "private" key secure, confidential and not have it generally divulged. the degree that a recipient and/or relying party has trust in such a business process is the degree that they believe the "private" key hasn't been divulged or compromised.

Another use of the business process is that somebody can encode a message with a published public key and only the entity with the corresponding private key is able to decode the message.
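
a minimal sketch of both uses (digital signature and public key encryption), assuming python and a reasonably current pyca/cryptography package; the message text is made up:

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()     # this half gets made available to other parties

message = b"pay account 42 the sum of 19.95"

# digital signature: a hash of the message is encoded with the "private" key
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256())

# anybody holding the "public" key can check origin and that the message wasn't altered;
# verify() raises InvalidSignature on any mismatch
public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256())

# the other direction: encode with the public key, only the private key can decode
ciphertext = public_key.encrypt(
    message,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))
assert private_key.decrypt(
    ciphertext,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None)) == message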

misc. related postings:
https://www.garlic.com/~lynn/aadsm19.htm#1 Do You Need a Digital ID?
https://www.garlic.com/~lynn/aadsm19.htm#2 Do You Need a Digital ID?
https://www.garlic.com/~lynn/aadsm19.htm#3 Do You Need a Digital ID?
https://www.garlic.com/~lynn/aadsm19.htm#4 Do You Need a Digital ID?
https://www.garlic.com/~lynn/aadsm19.htm#5 Do You Need a Digital ID?
https://www.garlic.com/~lynn/aadsm19.htm#6 Do You Need a Digital ID?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Using the Cache to Change the Width of Memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Using the Cache to Change the Width of Memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 02 Apr 2005 17:24:54 -0700
a little topic drift
https://www.garlic.com/~lynn/rfcidx13.htm#4042
4042 I UTF-9 and UTF-18 Efficient Transformation Formats of Unicode, Crispin M., 2005/04/01 (9pp) (.txt=19123) (Refs 959, 2130, 2152, 2276, 2781)

and/or
https://www.garlic.com/~lynn/rfcietff.htm

4042 is currently the top entry in the bottom frame in the above URL. as always, clicking on the ".txt=nnnn" field retrieves the actual RFC.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

TLS-certificates and interoperability-issues sendmail/Exchange/postfix

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TLS-certificates and interoperability-issues sendmail/Exchange/postfix ..
Newsgroups: comp.security.unix,comp.mail.misc,comp.mail.sendmail
Date: Sun, 03 Apr 2005 10:40:15 -0600
per@hedeland.org (Per Hedeland) writes:
No, and since this is specifically *not* what I'm talking about, while you keep insisting that it is, I'm clearly not able to make you understand what I'm saying - regardless of the reason for that, I thus see no point in continuing this discussion.

For anyone else that may have suffered through this thread, the point of my original post, now lost in the noise, was not primarily to assert that certificate validation doesn't happen, but to point out that it is in many cases quite feasible to make it happen even without certificates signed by "official" CAs.


note that the original basic public/private key business process use of asymmetric key technology was that public keys could be openly distributed (and used subsequently for validating digital signatures generated with the corresponding private key).

the original PKI model was from the days of offline email (aka dial-up the local electronic post office, exchange email, hang-up, process) involving email between two parties that previously never had any contact and where the recipient had no other recourse (like an online resource) for checking on the sender.

the basic mechanism is that the recipient has a trusted store of public keys and their associations. in the pgp (and ssh and other) model, this trusted public key store contains public keys that the recipient has previously "registered". in the pgp model, the recipient validates digital signatures directly using public keys from the trusted public key store.

this gets a little more complicated in the PKI model ... the recipient's trusted public key store is now one (or more) levels of indirection removed.

certification authorities set up business processes for registering public keys (analogous to the business process that ordinary individuals use for registering public keys in their trusted public key store) ... they create these things called certificates ... that represent some certification business process performed by the CA. the certificates typically "bind" a public key and some information about the public key owner in a string of bits, which is digitally signed by the CA. this digital certificate is returned to the "key owner". The key owner, at some point in the future, generates some sort of message, digitally signs the message, and packages the message, the digital signature, and the certificate for transmission.

the recipient gets this package triple, and validates the CA's digital signature on the certificate (using the CA's public key from the recipient's trusted public key store). the recipient then takes the sender's public key from the digital certificate and uses it to validate the digital signature on the actual message.
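
a toy sketch of that "package triple" check ... not real x.509 processing, the "certificate" here is just a signed blob binding an owner name to a public key, and all the names are invented (python, pyca/cryptography package):

import json
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey, Ed25519PublicKey

ca_key = Ed25519PrivateKey.generate()
TRUSTED_STORE = {"example-ca": ca_key.public_key()}   # recipient's trusted public key store

# the CA "certifies" the sender's public key by signing a blob that binds name and key
sender_key = Ed25519PrivateKey.generate()
sender_pub = sender_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)
cert_body = json.dumps({"owner": "alice", "pub": sender_pub.hex()}).encode()
cert = {"body": cert_body, "ca": "example-ca", "ca_sig": ca_key.sign(cert_body)}

# the key owner signs a message and ships message + signature + certificate
message = b"hello"
package = (message, sender_key.sign(message), cert)

def recipient_check(message, sig, cert):
    TRUSTED_STORE[cert["ca"]].verify(cert["ca_sig"], cert["body"])   # 1) CA signature on the cert
    fields = json.loads(cert["body"])
    sender = Ed25519PublicKey.from_public_bytes(bytes.fromhex(fields["pub"]))
    sender.verify(sig, message)                                      # 2) sender signature on the message
    return fields["owner"]

print(recipient_check(*package))   # "alice" if both checks pass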

basically these (offline paradigm) digital certificate things are analogous to letters of credit from the sailing ship days (when the relying party/recipient had no previous interaction and/or no other recourse to establish anything about the party they were dealing with).

For various processing reasons, many of the PKI implementations use something called a self-signed certificate as part of registering a CA's public key in the recipient's trusted public key store ... they look and smell and have a format like a "regular" digital certificate but are used in a totally different business process. The CA self-signed digital certificates are part of the business process of registering a public key in the recipient's trusted public key store (analogous to the PGP model that directly registers senders' public keys in the recipient's trusted public key store).

A big issue (in today's market) involving the recipient's trusted public key store, is a) whether the recipient's trusted public key store is part of a specific application and b) whether the application comes preloaded with some number of trusted (certification authority) public keys (on behalf of the recipient). Some number of certification authorities have paid big bucks to application vendors to have their public keys (whether they are packaged as a self-signed digital certificate or not) preloaded in the application trusted public key store shipped to consumers.

In the early '90s there was a big push for x.509 identity certificates. Part of the problem was that there was no prediction (at the time that the CA generated a digital certificate) ... what were all the uses a digital certificate was going to be put to (what kind of business process and what kind of identity information would be meaningful to the recipient or relying parties). They also wanted to charge $100/certificate/annum for each digital certificate issued ... so it wasn't going to be a common event. The result was a tendency to grossly overload these identity digital certificates with information ... in the anticipation that some relying party (recipient) might find it useful for something in the future.

By the mid-90s, some number of relying party institutions were starting to realize that such digital certificates, grossly overloaded with information, represented a significant liability and privacy exposure. You then started seeing some institutions (like financial) migrating to relying-party-only certificates ... certificates that basically only contained some sort of account number or other identifier that could be used in a real-time lookup (recipient's local database or other online operation).
https://www.garlic.com/~lynn/subpubkey.html#rpo

An issue with relying-party-only digital certificates was that it was trivial to show that they were redundant and superfluous. Normal operation was that the identifier was also available in the body of the basic, digitally signed message and the identifier would index some sort of real-time record lookup (local and/or online) that not only contained information about the sender ... but also the sender's public key (i.e. the information base became a real-time trusted public key store, in addition to all the other trusted information that might be available).

In the financial arena for these relying-party-only certificates, the consumer registers their public key with their financial institution and their financial institution returns a digital certificate containing an account number and the public key. at some point in the future, the consumer generates a financial transaction and digitally signs it. The consumer then packages the financial transaction, the digital signature, and the digital certificate and transmits it to the financial institution. The record index (account number) that is in the certificate is duplicated in the transaction. The financial institution receives the "package", discards the digital certificate, uses the account number from the transaction to retrieve the account record (including the consumer's public key) and validates the consumer's digital signature using the real-time retrieved public key.
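
a small sketch of the relying-party-only flow described above; the account table, account number, and transaction format are all made up (python, pyca/cryptography package):

import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

consumer_key = Ed25519PrivateKey.generate()
# the on-file account record already holds the registered public key
ACCOUNTS = {"1234567890": {"pubkey": consumer_key.public_key()}}

txn = json.dumps({"acct": "1234567890", "amount": "19.95"}).encode()
package = (txn, consumer_key.sign(txn), b"<the 4k-12k byte certificate, never used>")

def institution_verify(txn, sig, _cert):        # the attached certificate is simply discarded
    acct = json.loads(txn)["acct"]              # account number comes from the transaction itself
    ACCOUNTS[acct]["pubkey"].verify(sig, txn)   # real-time record lookup replaces the certificate
    return True

print(institution_verify(*package))   # True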

One of the issues from the mid-90s with these financial relying-party-only digital certificates was that they were on the order of 4k-12k bytes (just containing an account number and a public key) while the basic financial transaction is on the order of 60-80 bytes. Not only was it trivial to show that these relying-party-only certificates were redundant and superfluous, but their only apparent purpose was to cause extreme payload bloat and increase the standard financial message size by a factor of one hundred.

The original SSL (TLS precursor) was part of browsers for authenticating that the web server they thought they were talking to was actually the web server they were talking to. The concern was over integrity issues in the domain name infrastructure being subverted and a browser getting redirected to a fraudulent web site (basically the browser compared the domain name from the typed-in URL with the domain name in the digital certificate presented by the web site).
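
the same check sketched with python's current ssl module (example.com is just a placeholder host; this needs network access and uses today's preinstalled CA bundle rather than anything from the mid-90s browsers):

import socket
import ssl

ctx = ssl.create_default_context()   # loads the preinstalled certification authority public keys
ctx.check_hostname = True            # compare the name the user asked for vs. the name in the certificate

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.getpeercert()["subject"])   # the certificate the web site presented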

some early history comments about SSL for e-commerce and payment transactions:
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

basically browsers came preloaded with a large number of certification authority public keys.

somebody would apply to a SSL domain name certification authority providing a lot of identity information. The certification authority then would contact the domain name infrastructure in an attempt to cross-check the applicant's identity information with the identity information on-file with the domain name infrastructure as to the true domain name owner. This was an error-prone, complex, and costly undertaking.
https://www.garlic.com/~lynn/subpubkey.html#sslcert

Somewhat from within the certification authority industry there was a push to improve the integrity and reduce the cost of the SSL domain name certification process: a proposal that domain name owners register a public key when they registered the domain name. Future communication with the domain name infrastructure would be digitally signed and the domain name infrastructure would validate the digital signature with the on-file public key (note: no digital certificates involved).

So SSL domain name certificate applicants would now digitally sign their application. The certification authorities then can do a real-time retrieval of the on-file public key (from the domain name infrastructure) to validate the digital signature on the SSL domain name certificate application. This turns a costly, complex, and error-prone identification process into a much simpler, cheaper and less error-prone authentication process.

It does have sort of a catch-22 for the certification authority industry

1) improving the integrity of the domain name infrastructure for the certification authority industry, improves the integrity for everybody, mitigating the original justification for SSL domain name certificates in the first place.

2) if the certification authority industry can do real-time retrieval of on-file public keys, then supposedly so could everybody else ... eliminating the requirement for a public key infrastructure based on stale, static, digital certificates. It would be possible to eliminate all the SSL handshake digital certificate related protocol chatter at the start ... and substitute a request to the domain name infrastructure that returned the public key in the same transaction as the domain-name-to-ip-address response.

for some topic drift ... the very latest, fresh off the press domain name infrastructure RFCs

https://www.garlic.com/~lynn/rfcidx13.htm#4035
4035 PS
Protocol Modifications for the DNS Security Extensions, Arends R., Austein R., Larson M., Massey D., Rose S., 2005/03/25 (53pp) (.txt=130589) (Updates 1034, 1035, 2136, 2181, 2308, 3007, 3225, 3226, 3597) (See Also 4033, 4034) (Refs 1034, 1035, 1122, 2181, 2308, 2460, 2535, 3007, 3226, 3655)


https://www.garlic.com/~lynn/rfcidx13.htm#4034
4034 PS
Resource Records for the DNS Security Extensions, Arends R., Austein R., Larson M., Massey D., Rose S., 2005/03/25 (29pp) (.txt=63879) (Updates 1034, 1035, 2136, 2181, 2308, 3007, 3225, 3226, 3597) (See Also 4033, 4035) (Refs 1034, 1035, 1982, 2181, 2308, 2535, 2536, 2537, 2539, 2930, 2931, 3110, 3445, 3548, 3597, 3658, 3755, 3757, 3845)


https://www.garlic.com/~lynn/rfcidx13.htm#4033
4033 PS
DNS Security Introduction and Requirements, Arends R., Austein R., Larson M., Massey D., Rose S., 2005/03/25 (21pp) (.txt=52445) (Obsoletes 2535, 3008, 3090, 3445, 3655, 3658, 3755, 3757, 3845) (Updates 1034, 1035, 2136, 2181, 2308, 3007, 3225, 3226, 3597) (See Also 4034, 4035) (Refs 1034, 1035, 2136, 2181, 2308, 2535, 2538, 2845, 2931, 3007, 3008, 3090, 3226, 3445, 3597, 3655, 3658, 3755, 3757, 3833, 3845) (DNS-SECEXT) (DNSSEC)


from rfc index
https://www.garlic.com/~lynn/rfcietff.htm

as always, clicking on the ".txt=nnnn" field retrieves the actual RFC

An analogy for simple email authentication would be that when somebody signed up with an ISP, they registered a public key ... which then goes into the ISP's userid repository. All email is then digitally signed ... and the recipient just has to contact the sender's ISP in real time to get the sender's on-file public key. No digital certificates are required (which were originally created in the early 80s to address the offline email scenario in situations where the two parties had no prior contact and no other recourse to obtain information about the party that they were dealing with for the first time).

There is no costly, complex and error-prone identification and certification process needed to prove that the person owning the email account also owns a particular public/private key pair ... if the public key registration is integrated into the business process of creating a userid.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Using the Cache to Change the Width of Memory

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Using the Cache to Change the Width of Memory
Newsgroups: alt.folklore.computers
Date: Sun, 03 Apr 2005 10:54:55 -0600
jsavard writes:
I was *fairly* sure that many DEC computers were connected to the Internet; I remember the old map of USENET that spelled it |u|s|e|n|e|t| ... you know, just like the DEC trademark.

And, at the University I went to, there was apparently a room on one floor were they had a PDP-11 that was connected to the Internet. (The big computer that University used was an Amdahl 470 V-6, which replaced a System/360 Model 67, in case you're wondering.)

But many big computers weren't connected to anything; they did their work and had no need to connect to the world, so I was hesitant to make a categorical statement.


prior to the 1/1/83 conversion to internetworking protocol ... the arpanet was much more of a traditional, homogeneous networking setup ... that had IMPs as the interface to the arpanet ... and hosts using host protocol to talk to the IMPs (and the IMPs talked to the network).

one of my assertions about the internal network being larger than the arpanet/internet for just about the whole period until approx. mid-85 was that the internal network had effectively a gateway type function in most of the nodes from the beginning
https://www.garlic.com/~lynn/subnetwork.html#internalnet

... which the arpanet/internet didn't get until the 1/1/83 conversion.

misc. other archived posts
https://www.garlic.com/~lynn/internet.htm
https://www.garlic.com/~lynn/subnetwork.html#internet

there was also bitnet/csnet which used similar technology to that used in the internal network ... but was used for an academic network. this academic network (separate from the internal network using similar technology) was comparable in size to the internet in the early to mid 80s.
https://www.garlic.com/~lynn/subnetwork.html#bitnet

misc. postings related to interop '88
https://www.garlic.com/~lynn/subnetwork.html#interop88

and stuff on NSFNET-1 backbone from the late '80s ... which is much more the deployment and operational infrastructure for the foundation of what was to become the "internet" that emerged in the early 90s.
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Using the Cache to Change the Width of Memory

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Using the Cache to Change the Width of Memory
Newsgroups: alt.folklore.computers
Date: Sun, 03 Apr 2005 12:13:58 -0600
Brian Inglis writes:
Big machines have always been heavily connected to the real world, as you say, but not always interconnected to other big machines, as that was not a priority for most vendors in those days.

aka airline res systems driving several hundred thousand reservation terminals ... or financial transaction systems driving several hundred thousand ATM or POS (point-of-sale) machines ... or system that does configuration downloads for millions of cable settop boxes.

the airline res system would tend to have a connection to financial transaction system for things like credit/debit payments ... although the airline res system might be simulating a POS terminal to the financial transaction system.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mozilla v Firefox

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mozilla v Firefox
Newsgroups: netscape.public.mozilla.general
Date: Sun, 03 Apr 2005 14:12:41 -0600
Ron Hunter writes:
Probably, in the case of the US system, as an example of how NOT to do it. Sadly, the state of education in the elementary and secondary schools isn't up to world standards. I am amazed the universities manage to maintain good world standing.

they get so many foreign students.

there were various reports in the early 90s (after the 1990 census) that 1) half the 18 year olds in the US were functionally illiterate, 2) half the technical PHDs from cal. univs. went to foreign born students (i.e. foreign workers provided a lot of the expertise that made the internet boom/bubble possible), 3) one large mid-western land grant univ. noted that they had dumbed down entering freshman texts three times since the 1960s, and 4) when doing some recruiting at a cal. univ., all the 4.0 students were foreign born.

....

note that in mozilla, after i get 250 or so open tabs ... things can get rather sluggish. i find that i like scanning news stories w/o a lot of the internet delay of one click at a time ... so i have a bookmark folder with 120 news sites that i get started and then go off and do something else. when it's done, i can quickly scan a page at a time ... deleting tabs as i go along and also clicking on interesting story details (which get opened in new background tabs). after i've scanned the first 100 or so news sites, sometimes i have 200 news stories waiting in background tabs ... and clicking on new URLs (for opening in a new background tab) can get really sluggish (i'm not waiting on mozilla to fetch the new tab, i'm waiting for mozilla to respond at all after i've clicked).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mozilla v Firefox

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mozilla v Firefox
Newsgroups: netscape.public.mozilla.general
Date: Sun, 03 Apr 2005 22:31:51 -0600
Ron Hunter writes:
Try dividing that into 3 bookmarks of 40 news sites. YOu can't read but ONE at a time, can you?

it isn't the 120 news site URLs that are the problem ... it is that while scanning the 120 ... i also click on another 200 or so URLs. the issue is that while i can only read one at a time ... i would like to batch the latencies involved in obtaining the pages ... and then be able to quickly transition from one tab to the next with no latency.

it is nice to have the 120 done in one batch ... i can go off and do something else while they are downloading (breaking it into three sets just means i have to do more manual scheduling).

if i wanted to partition ... i could process 40 or so news sites at a time ... and then go to the rightmost new news story tab and read them in reverse order ... until i'd read all the new news stories and was back at the last news site ... and then scan the next 40. the issue even then is that if i start at the most recently clicked news story ... it might not yet be downloaded ... and i experience a latency. it is fairly straightforward to jump to the first or last tab ... but finding a tab someplace in the middle takes manual effort.

of course it isn't a problem unless i get something like 250 tabs total ... on slow news days ... that isn't likely to happen.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mozilla v Firefox

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mozilla v Firefox
Newsgroups: netscape.public.mozilla.general
Date: Mon, 04 Apr 2005 00:20:33 -0600
CBFalconer writes:
If I understand you correctly, that is exactly what Netscape 4.7x does when told to sychronize newsgroups. ALT-FLSY does it.

i have a bookmark folder that has 120 URLs of news sites (like google news site, ms news site, etc).

i click on the folder and bring up the 120 URLs in 120 different tabs. this takes a while, so i go do something else.

i then can quickly browse the news sites tabs (deleting tabs as i finish them) ... clicking on interesting looking detailed news stories ... which are brought up in background tabs. when i've finished all 120 news site tabs ... i now have some number of detailed news stories in (new) tabs ... which i can read.

periodically when i'm down to 20-40 remaining news site tabs (from the original bookmark folder) ... i might have over 200 (new) detailed news stories in new tabs. i find that mozilla gets sluggish sometimes when i have 250 or so tabs (just clicking on a URL may take a couple seconds before mozilla is ready to accept any new controls ... like deleting the current tab).

part of the issue is that i can be reading other news tabs while new news stories are being fetched in background tabs (fetch latency is masked since i can be reading something else).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

TLS-certificates and interoperability-issues sendmail/Exchange/postfix

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TLS-certificates and interoperability-issues sendmail/Exchange/postfix ..
Newsgroups: comp.security.unix,comp.mail.misc,comp.mail.sendmail
Date: Mon, 04 Apr 2005 09:27:40 -0600
Mike Scott writes:
However, I must be missing something in your closing paragraph. Further up your posting, aiui you talk about detecting possible subversion of the domain name mechanism. Yet here you are surely suggesting relying on just that mechanism to obtain the security information required to check the same. In other words, *if* my isp's domain has been subverted, a false public key could be supplied by the subverter which a recipient would then use to validate a corresponding falsely signed message purporting to be from someone registered at that isp.

the issue was that portions of the domain name infrastructure information might become subverted. this gave rise to one of the requirements for the SSL domain name certificate.

however, the process of getting the SSL domain name certificate is that the certification authority has to check with the domain name infrastructure as to the true owner of the domain name and whether it is the same entity that is applying for the SSL domain name certificate.

in essence the SSL domain name certificate certification is also dependent on the very same domain name infrastructure that everybody else is dependent on ... it is just significantly obfuscated behind a lot of other business processes related to SSL domain name PKIs and certification, etc. NOTE: the base trust anchor (the authoritative agency for the information that the certification authority is certifying) is the same regardless of whether you are dealing directly with the trust anchor or the dependency on the trust anchor is obfuscated behind a whole lot of PKI mumbo jumbo; a certification authority, aka CA, has to rely on the authoritative agency for the information being certified in the process of certifying that information.

that was the reference to the catch-22 for the SSL domain name certification authority industry with improving the integrity of the authoritative agency for domain names (i.e. the domain name infrastructure). Improving the integrity of the authoritative agency for domain names could eliminate much of the requirement for needing SSL domain name certificates.

The issue is that if the authoritative agency for the information has integrity problems ... then all the organizations and entities that are dependent on that agency for the integrity of the information are at risk. In the case of domain name infrastructure and integrity of domain name ownership information, that includes the SSL domain name certification authority industry. It is just that dependency is obfuscated behind a lot of other certification authority (CA) business processes (i.e. if the authoritative agency has information integrity problems, then it is also possible to corrupt the information and then apply for a certificate ... which is only the business process of certifying the specific information with the authoritative agency responsible for the information).

long ago and far away, we were asked to work with this small client/server company that was interested in doing payments on the server. in the year that we worked with them, they moved to mountain view and changed their name from mosaic to netscape. the result is now commonly referred to as e-commerce
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

they wanted to use this stuff called SSL. as part of working with them, we eventually had to do audits and detailed business process walk thrus of the main organizations that were calling themselves certification authorities (including looking at the trust anchors from which they derived their information for certification).

one of the side-points was that several of these organizations commented that they thought it was originally going to be technically oriented but they were quickly finding out that it was a service business where 95-plus percent of the activity was administrative and business process related (and technology was almost a side issue).

we had over 20 years of industrial strength data processing experience at the time ... and had recently come off running a project/product called HA/CMP
https://www.garlic.com/~lynn/subtopic.html#hacmp

where we dealt extensively with all possible failure modes. Also as noted, two of the people that we found responsible for the commerce server at this small client/server startup ... were people we had worked with a couple years earlier on ha/cmp ... and as previously mentioned, they were also in this ha/cmp meeting
https://www.garlic.com/~lynn/95.html#13

another side-point was that many of the people in these things called certification authorities had come from the technical arena and didn't necessarily have a lot of experience with industrial strength dataprocessing related to service oriented business. We pointed out a number of issues that were common in industrial strength dataprocessing in service oriented businesses.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be?
Newsgroups: comp.arch.arithmetic,comp.arch,alt.folklore.computers
Date: Mon, 04 Apr 2005 08:59:23 -0600
"Tom Linden" writes:
Argumentum de verecundiam doesn't work here. I recall whan this fashion arose and there were an abundance of hucksters selling their wares (or smoke) to the corporate world. Use of GOTO's is neither good nor bad style and I can assure you that it does not

in the early 70s, i wrote a pli program that analysed (360) assembler listings ... creating abstract representation of instructions and doing detailed code flow analysis (as well as attempting to recognize register use before set scenarios ... aka are there code paths that failed to initialize a particular register needed later ... analogous to uninitialized variables).

it also attempted to represent the program in a pli-like pseudo syntax. branches translated into GOTOs ... but it also attempted to capture conditional branching ... and translate it into higher level conditional constructs (aka while, if/then/else, do/until, etc).

one of the issues was that some of the branching logic ... which seemed moderately straightforward ... would translate into quite obtuse nested if/then/else structures 10 or more levels deep.

as referenced previously in the C-language thread ... while C-language programs tend to have a high proportion of buffer overflow failures (related to buffer constructs not carrying explicit length semantics), 360 assembler code tended to have a high proportion of uninitialized (and/or incorrectly initialized) register failures ... aka register content management
https://www.garlic.com/~lynn/2005b.html#16 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005b.html#17 [Lit.] Buffer overruns

most of these (register content) failures tended to be associated with anomalous and/or low frequency code paths. the issue with GOTOs, when attempting a "forensic" code path reconstruction, is working out the actual code path that was followed. explicit conditional constructs tend to make reconstructing the specific code path instance easier in specific failure scenarios.
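
a toy sketch of the register use-before-set idea mentioned above ... nothing like the original pli program, just a single straight-line pass over a made-up instruction list with made-up toy semantics:

# flag registers that are read before anything has set them
def use_before_set(instructions):
    set_regs, flagged = set(), []
    for op, *regs in instructions:
        if op == "L":                      # load: sets regs[0]
            set_regs.add(regs[0])
        elif op == "AR":                   # add register: reads both, sets regs[0]
            flagged += [r for r in regs if r not in set_regs]
            set_regs.add(regs[0])
        elif op == "ST":                   # store: reads regs[0]
            if regs[0] not in set_regs:
                flagged.append(regs[0])
    return flagged

# R3 is read by the AR before anything sets it
print(use_before_set([("L", "R1"), ("AR", "R1", "R3"), ("ST", "R1")]))   # ['R3']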

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

System/360; Hardwired vs. Microcoded

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: System/360; Hardwired vs. Microcoded
Newsgroups: alt.folklore.computers,comp.arch
Date: Mon, 04 Apr 2005 09:49:03 -0600
sarr@news.itd.umich.edu (Sarr J. Blumson) writes:
My memory on this subject is particularly unreliable, but I believe the 165 had virtual memory hardware but the 85 did not.

165-ii was a field hardware retrofit of virtual memory hardware to 165s currently in the field ... and it was a significant effort.

system/370 "red book" was the 370 architecture superset of the 370 principles of operation. it was done in cms script (document formating) with conditionals .... misc. references to cms done at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech
and script and gml
https://www.garlic.com/~lynn/submain.html#sgml

when the conditionals were set for printing the 370 principles of operation ... it left out lots of unannounced stuff ... as well as all kinds of engineering and other details. there were a number of features in the original 370 virtual memory architecture that were never announced.

there was an escalation meeting in POK where the 165 engineers said that they could do a subset of the (virtual memory) architecture a lot faster than doing everything in the architecture (which would take an additional six months). It was eventually decided to do the subset implementation that could be done six months faster by the 165 engineers. Among the things that got left out were the new memory (segment, page) r/o protection and some of the selective invalidate commands (aka in addition to PTLB, there were ISTO, ISTE, IPTE).

Bits and pieces of some of the unannounced 370 virtual memory support did leak out in later years. however, at the time, the change resulted in the other 370 products (that had already implemented the full 370 virtual memory support) having to go back and remove the extra stuff that the 165 wasn't going to do.

in the transition from cp67/cms to vm370/cms ... there was an implementation that would use the new 370 segment protection feature with the cms shared segment support. with the dropping of the r/o 370 virtual memory protection support, cms had to revert to a kludge for maintaining the integrity of shared segments across multiple different virtual address spaces.

the original cms shared segment support was a single segment that was part of the cms kernel and had this really hacked kludge for protection. i had converted a bunch of virtual memory management stuff from cp67 to vm370 (that had never been released in cp67, including page mapped file system and a lot more extensive shared segment capability). The vm370 group picked up a small subset of these shared segment changes (and none of the page mapped file system) for vm370 release 3 ... and called it DCSS. misc. posts related to cms shared segments, page mapped filesystem, etc
https://www.garlic.com/~lynn/submain.html#mmap
https://www.garlic.com/~lynn/submain.html#adcon

past posts mentioning 370 architecture red book
https://www.garlic.com/~lynn/2000f.html#35 Why IBM use 31 bit addressing not 32 bit?
https://www.garlic.com/~lynn/2001m.html#39 serialization from the 370 architecture "red-book"
https://www.garlic.com/~lynn/2001n.html#43 IBM 1800
https://www.garlic.com/~lynn/2002g.html#52 Spotting BAH Claims to Fame
https://www.garlic.com/~lynn/2002h.html#69 history of CMS
https://www.garlic.com/~lynn/2002m.html#2 Handling variable page sizes?
https://www.garlic.com/~lynn/2003d.html#76 reviving Multics
https://www.garlic.com/~lynn/2003f.html#52 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003k.html#45 text character based diagrams in technical documentation
https://www.garlic.com/~lynn/2004b.html#57 PLO instruction
https://www.garlic.com/~lynn/2004c.html#1 Oldest running code
https://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
https://www.garlic.com/~lynn/2004c.html#51 [OT] Lockheed puts F-16 manuals online
https://www.garlic.com/~lynn/2004h.html#27 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004k.html#45 August 23, 1957
https://www.garlic.com/~lynn/2004p.html#50 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2005.html#5 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005b.html#25 360POO

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mozilla v Firefox

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mozilla v Firefox
Newsgroups: netscape.public.mozilla.general
Date: Mon, 04 Apr 2005 10:19:50 -0600
Ron Hunter writes:
Nope. Load the first 40, read half of them, then start the second 40, ditch the first 40, then when you have read half of them, start downloading the last 40. BTW, with all that news reading, HOW do you find time to eat and sleep?

if you eliminate the network latencies ... the whole process ... except for reading the detailed stories takes around 30 minutes. reading the detailed stories varies all over the place from quick skim to more detailed examination ... but rarely more than another 30 minutes.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mozilla v Firefox

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mozilla v Firefox
Newsgroups: netscape.public.mozilla.general
Date: Mon, 04 Apr 2005 10:33:19 -0600
... there is still the slight problem that i haven't found a good tab indexing scheme.

the problem is that you are just skimming the news sites looking for news stories to click on. the new news stories get loaded into new tabs as you go along.

i have tabs 1-40 with news sites ... by the time i've skimmed (and deleted) the first 20 news sites ... i have tabs

• 1-20 as the remainder of the first 40
• 21-M as the detailed news stories
• M+1-M+40 become the next batch of 40 news sites

when i've finished the remaining 20 news sites i have

• 1-M detailed news stories
• M+1-M+40 the 2nd batch of 40 news sites
• M+41-M+41+N the most recent detailed news stories clicked on

the problem is that while i don't experience the mozilla sluggishness from having 250 tabs (where clicking on one additional tab slows down mozilla's response while it is updating stuff) ... mozilla is somewhat sluggish with possibly 40 active tabs and batch processing an additional 40 tabs in the background. However, the sluggish behavior is batched on the one click that loads the next 40 ... instead of happening on every click.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be?
Newsgroups: comp.arch.arithmetic,comp.arch,alt.folklore.computers
Date: Mon, 04 Apr 2005 11:16:33 -0600
Andrew Swallow writes:
Did it approximate to something like?

IF condition1 THEN Report Error1; BREAK;
OR IF condition2 THEN Report Error2; BREAK;
OR IF condition3 THEN Report Error3; BREAK;
OR IF condition4 THEN Report Error4; BREAK;
OR IF condition5 THEN Report Error5; BREAK;
ELSE

The work

ENDIF


more like a complex tree structure ... say a half dozen or more conditions with arbitrary processing related to any specific condition and/or combination of conditions. earlier processing might jump to an arbitrary place in the tree (making arbitrary mesh connections between various points in the tree).

some of this was extremely high use kernel code that had been highly tuned to eliminate every possible superfluous cycle.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

System/360; Hardwired vs. Microcoded

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: System/360; Hardwired vs. Microcoded
Newsgroups: alt.folklore.computers,comp.arch
Date: Mon, 04 Apr 2005 11:39:55 -0600
Anne & Lynn Wheeler writes:
165-ii was a field hardware retrofit of virtual memory hardware to 165s currently in the field ... and it was a significant effort.

aka the initial 370 announce and ship didn't have virtual memory support ... virtual memory was announced later and there had to be a hardware retrofit for 155s and 165s

the only 360 with virtual memory support was the 360/67 (except for the specially modified cambridge 360/40) which had both 24-bit and 32-bit virtual memory address options.

when 370 virtual memory was announced it only had 24-bit addressing. it wasn't until 370-xa on the 3081 that you saw more than 24-bit addressing (and it was 31-bit, not the 32-bit that had been available on the 360/67).

cambridge had wanted to add special virtual memory hardware to a 360/50 ... but there weren't any spare 50s (the spare 50s were all going to the faa air traffic control system effort) and so they had to settle on modifying a 360/40. they built cp/40 for this machine and later, when the 360/67 became available, it morphed into cp/67.

some of this is also in melinda's history
http://www.leeandmelindavarian.com/Melinda#VMHist

random past posts mentioning cp/40:
https://www.garlic.com/~lynn/93.html#0 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/93.html#23 MTS & LLMPS?
https://www.garlic.com/~lynn/93.html#25 MTS & LLMPS?
https://www.garlic.com/~lynn/94.html#37 SIE instruction (S/390)
https://www.garlic.com/~lynn/94.html#46 Rethinking Virtual Memory
https://www.garlic.com/~lynn/94.html#53 How Do the Old Mainframes
https://www.garlic.com/~lynn/94.html#54 How Do the Old Mainframes
https://www.garlic.com/~lynn/97.html#22 Pre S/360 IBM Operating Systems?
https://www.garlic.com/~lynn/98.html#28 Drive letters
https://www.garlic.com/~lynn/98.html#33 ... cics ... from posting from another list
https://www.garlic.com/~lynn/98.html#45 Why can't more CPUs virtualize themselves?
https://www.garlic.com/~lynn/99.html#139 OS/360 (and descendants) VM system?
https://www.garlic.com/~lynn/99.html#142 OS/360 (and descendants) VM system?
https://www.garlic.com/~lynn/99.html#174 S/360 history
https://www.garlic.com/~lynn/99.html#237 I can't believe this newsgroup still exists
https://www.garlic.com/~lynn/2000c.html#42 Domainatrix - the final word
https://www.garlic.com/~lynn/2000c.html#79 Unisys vs IBM mainframe comparisons
https://www.garlic.com/~lynn/2000e.html#16 First OS with 'User' concept?
https://www.garlic.com/~lynn/2000f.html#30 OT?
https://www.garlic.com/~lynn/2000f.html#63 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#66 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000.html#52 Correct usage of "Image" ???
https://www.garlic.com/~lynn/2000.html#81 Ux's good points.
https://www.garlic.com/~lynn/2000.html#82 Ux's good points.
https://www.garlic.com/~lynn/2001b.html#29 z900 and Virtual Machine Theory
https://www.garlic.com/~lynn/2001h.html#9 VM: checking some myths.
https://www.garlic.com/~lynn/2001h.html#10 VM: checking some myths.
https://www.garlic.com/~lynn/2001h.html#46 Whom Do Programmers Admire Now???
https://www.garlic.com/~lynn/2001i.html#34 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#39 IBM OS Timeline?
https://www.garlic.com/~lynn/2001m.html#47 TSS/360
https://www.garlic.com/~lynn/2001m.html#49 TSS/360
https://www.garlic.com/~lynn/2002b.html#6 Microcode?
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002b.html#64 ... the need for a Museum of Computer Software
https://www.garlic.com/~lynn/2002c.html#8 TOPS-10 logins (Was Re: HP-2000F - want to know more about it)
https://www.garlic.com/~lynn/2002c.html#39 VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
https://www.garlic.com/~lynn/2002e.html#47 Multics_Security
https://www.garlic.com/~lynn/2002f.html#30 Computers in Science Fiction
https://www.garlic.com/~lynn/2002f.html#36 Blade architectures
https://www.garlic.com/~lynn/2002g.html#13 Secure Device Drivers
https://www.garlic.com/~lynn/2002h.html#59 history of CMS
https://www.garlic.com/~lynn/2002h.html#62 history of CMS
https://www.garlic.com/~lynn/2002h.html#70 history of CMS
https://www.garlic.com/~lynn/2002j.html#64 vm marketing (cross post)
https://www.garlic.com/~lynn/2002l.html#22 Computer Architectures
https://www.garlic.com/~lynn/2002l.html#56 10 choices that were critical to the Net's success
https://www.garlic.com/~lynn/2002l.html#65 The problem with installable operating systems
https://www.garlic.com/~lynn/2002m.html#3 The problem with installable operating systems
https://www.garlic.com/~lynn/2002n.html#28 why does wait state exist?
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
https://www.garlic.com/~lynn/2003f.html#2 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003g.html#31 Lisp Machines
https://www.garlic.com/~lynn/2003g.html#33 price ov IBM virtual address box??
https://www.garlic.com/~lynn/2003k.html#5 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003k.html#9 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003k.html#24 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2003k.html#48 Who said DAT?
https://www.garlic.com/~lynn/2003m.html#4 IBM Manuals from the 1940's and 1950's
https://www.garlic.com/~lynn/2003m.html#16 OSI not quite dead yet
https://www.garlic.com/~lynn/2003m.html#31 SR 15,15 was: IEFBR14 Problems
https://www.garlic.com/~lynn/2003m.html#34 SR 15,15 was: IEFBR14 Problems
https://www.garlic.com/~lynn/2003m.html#36 S/360 undocumented instructions?
https://www.garlic.com/~lynn/2003o.html#32 who invented the "popup" ?
https://www.garlic.com/~lynn/2003o.html#47 Funny Micro$oft patent
https://www.garlic.com/~lynn/2004b.html#0 Is DOS unix?
https://www.garlic.com/~lynn/2004c.html#11 40yrs, science center, feb. 1964
https://www.garlic.com/~lynn/2004c.html#25 More complex operations now a better choice?
https://www.garlic.com/~lynn/2004f.html#17 IBM 7094 Emulator - An historic moment?
https://www.garlic.com/~lynn/2004f.html#63 before execution does it require whole program 2 b loaded in
https://www.garlic.com/~lynn/2004g.html#4 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#48 Hercules
https://www.garlic.com/~lynn/2004h.html#29 BLKSIZE question
https://www.garlic.com/~lynn/2004h.html#34 Which Monitor Would You Pick??????
https://www.garlic.com/~lynn/2004.html#45 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004m.html#7 Whatever happened to IBM's VM PC software?
https://www.garlic.com/~lynn/2004n.html#3 Shipwrecks
https://www.garlic.com/~lynn/2004n.html#4 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#25 Shipwrecks
https://www.garlic.com/~lynn/2005c.html#56 intel's Vanderpool and virtualization in general

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mozilla v Firefox

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mozilla v Firefox
Newsgroups: netscape.public.mozilla.general,alt.folklore.computers
Date: Mon, 04 Apr 2005 13:05:57 -0600
Jay Garcia writes:
Almost as bad as users posting with no Caps ... gawdawful grammar. :-(

in the early 80s, there was a researcher that sat in the back of my office for something like nine months (and went to a lot of meetings) taking notes on how i communicated. they also had access to all my incoming and outgoing email as well as logs of all my instant messages. this was published as a research report and also served as the basis for a stanford phd thesis (joint between the language and computer ai depts), as well as follow-on papers and books. there were various statistics, like i avg'ed email communication with 275-plus different people per week for the 9 month period of the study (i've mellowed significantly in the past 20+ years). besides the caps issue there were various other issues with my computer-based communication style.

the caps issue recently came up in a mailing list ... and i now have something more of an excuse ... a couple weeks ago, i spilled liquid on my laptop keyboard and now my caps lock and left-shift key aren't working.

misc. past posts related to the stanford phd thesis research and computer mediated communication in general
https://www.garlic.com/~lynn/subnetwork.html#cmc

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

System/360; Hardwired vs. Microcoded

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: System/360; Hardwired vs. Microcoded
Newsgroups: alt.folklore.computers,comp.arch
Date: Mon, 04 Apr 2005 14:30:12 -0600
Eric Smith writes:
The 165 did not have dynamic address translation, thus no virtual memory. The 168 was essentially a 165 with DAT. The 165 could be upgraded (at huge expense) to add the DAT, becoming a 165-3.

the 155 and 165 had cache and 2mic(?) memory ... and you could get a field retro-fit of virtual memory hardware.

the 158 & 168 had cache and something like 500ns (480?) memory ... so cache misses did a lot better. they also came with virtual memory support as standard. i worked with one of the 165 engineers on VAMPS
https://www.garlic.com/~lynn/submain.html#bounce

i remember him saying that the 165 avg. something like 2.1 machine cycles per 370 instruction and they improved that for 168 to an avg. of 1.6 machine cycles per 370 instruction.

the 168-1 to 168-3 transition involved doubling the cache size from 32k to 64k. it turned out that they were using the 2k bit to index the extra cache lines (trying to use indexing bits that were the same whether it was a virtual or real address). this caused a performance degradation running any of the 2k-page operating systems (vs1, dos/vs) under vm on a 168-3: in virtual 2k page mode the 168-3 ran with half the cache (essentially reverting to a 168-1). The problem with running under VM was that every time you entered the vm kernel, it would switch to 4k page mode ... which flushed and reset the complete cache ... and then switching back to 2k page mode with shadow page tables again flushed and reset the complete cache. as a result, running a 168-3 in these environments was actually much worse than running a 168-1 ... since the cache flush and reset (with the constant switching between 2k & 4k page modes) was causing a lot of additional overhead.

158 was microcoded machine with integrated channels ... aka the native engine was shared between executing the microcode for channel operation and the microcode for 370 processor operation.

for the 303x follow-on they introduced a "channel director". the channel director was basically a 158 native engine running only the 158 integrated channel microcode (and no 370 microcode). A 3031 was a 158 native engine running only the 370 microcode (and no integrated channel microcode) and reconfigured to work in conjunction with a channel director (in effect two processor shared memory .... but the engines were running completely different m'code). The 3032 was a 168 reconfigured to work with channel director. A 3033 started out effectively being the 168 wiring diagram remapped to faster technology (and configured to work with channel directors).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mozilla v Firefox

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mozilla v Firefox
Newsgroups: netscape.public.mozilla.general,alt.folklore.computers
Date: Mon, 04 Apr 2005 14:43:45 -0600
Ron Hunter writes:
Perhaps you get into 'paging'. How much ram do you have in the computer?

i have a 3.4ghz intel multithreaded processor with 4gbytes of memory running fc3 (although fc3 claims it is only 3.5gbytes). the problem occurs when there is only about 800mbytes of total allocated memory (over a 3rd of it, sometimes half of it, belonging to mozilla). no paging is occurring. i also have a 3.4ghz system with 1gbyte of memory that exhibits a similar problem (and also doesn't page). However, i have a dual 400mhz processor machine with only 512mbytes of memory and this can create a paging problem for it.

i have all scripting (java and javascript) and popups disabled. when mozilla starts getting sluggish ... a popup warning frequently also appears about some script not responding, asking whether it should be canceled or continued. it acts more like there are some non-linear code paths managing a large number of tabs (the cpu meter pegs) which it gets into when starting a new tab. it is also URL/site dependent ... there are quite a few URLs where clicking on them for a new background tab won't exhibit the characteristic.

for some total topic drift ... i originated clock-like virtual memory page replacement algorithms as an undergraduate nearly 40 years ago ... random past posts on paging and page replacement algorithms:
https://www.garlic.com/~lynn/subtopic.html#wsclock
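
as a rough sketch of the basic clock idea (illustrative only, not the cp67/vm370 code or the later variations): page frames sit on a circular list, the hand sweeps around clearing reference bits, and the first frame found with its reference bit already clear is the one stolen:

    /* minimal clock page-replacement sketch -- illustrative only, not the
     * cp67/vm370 implementation.  the hand gives referenced frames a
     * second chance by clearing their reference bit, and steals the first
     * frame whose bit is already clear. */
    #include <stdio.h>

    #define NFRAMES 8

    static int referenced[NFRAMES];   /* hardware reference bits (simulated) */
    static int hand;                  /* current clock hand position         */

    static int clock_select(void)
    {
        for (;;) {
            if (!referenced[hand]) {          /* not touched since last sweep */
                int victim = hand;
                hand = (hand + 1) % NFRAMES;
                return victim;                /* steal this frame             */
            }
            referenced[hand] = 0;             /* second chance: clear, move on */
            hand = (hand + 1) % NFRAMES;
        }
    }

    int main(void)
    {
        /* pretend frames 0,1,2,5 were referenced since the last sweep */
        referenced[0] = referenced[1] = referenced[2] = referenced[5] = 1;
        printf("victim frame: %d\n", clock_select());   /* prints 3 */
        return 0;
    }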

and did a lot of work on paging pathlengths and other resource scheduling algorithms
https://www.garlic.com/~lynn/subtopic.html#fairshare

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mozilla v Firefox

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mozilla v Firefox
Newsgroups: netscape.public.mozilla.general
Date: Mon, 04 Apr 2005 14:48:28 -0600
Ron Hunter writes:
Humm. Two hundred stories in 30 minutes? You must read a bit faster than I do. If each story is only 200 words then to do that in 30 minutes, you would be reading 1333Wpm. You Evelyn Woods or something?

i'm scanning 120 news sites in 30 minutes and clicking on interesting new stories ... doing this once a day ... many news sites will show URL history in a different color and/or date their story titles.

so i then have 200 or so new URLs with stories. sometimes i can decide to kill a tab after reading the first couple of sentences (a variation on something already read, or really not interesting). that gets me the other 30 minutes. then it may take more than that to examine the really interesting stories. it is a filtering/skimming process.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

TLS-certificates and interoperability-issues sendmail/Exchange/postfix

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TLS-certificates and interoperability-issues sendmail/Exchange/postfix ..
Newsgroups: comp.security.unix,comp.mail.misc,comp.mail.sendmail
Date: Mon, 04 Apr 2005 15:31:07 -0600
ref:
https://www.garlic.com/~lynn/2005e.html#45
https://www.garlic.com/~lynn/2005e.html#51

one abstraction for digital certificates is that they represent a high integrity stale, static container of some distributed information that is a duplicate of some authoritative information (at the time the certificate was created) .... aka basically akin to distributed cached database records ... but with added armoring to survive in a possibly unpredictable and hostile environment over an extended period of time.

frequently the quality of the information contained in a digital certificate is obfuscated with a lot of details about the armoring that went into the digital certificate container. the armoring of the digital certificate container can be totally unrelated to the quality of the information that went into the armored container. furthermore, the quality of the information that went into the armored container may degrade over time ... effectively becoming worse than useless ... regardless of the quality of the armoring involved in constructing the digital certificate container.

furthermore, these digital certificate containers were originally fabricated to address a need in an offline world ... where there was no direct access to the actual information ... where relying parties required substitutes for actually accessing the original information in real time, i.e. stale, static digital certificate copies of the information being marginally better than having no information at all.
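
a toy sketch of that abstraction (hypothetical types and names, not any real PKI library or protocol): the relying party prefers a near-time query of the authoritative source, and only falls back to the stale, signed copy -- checking that it hasn't degraded past its validity window -- when it has no connectivity:

    /* toy sketch of the "armored cached copy" abstraction -- hypothetical
     * types and names, not any real PKI library or protocol.  the relying
     * party prefers a near-time query of the authoritative source and
     * only falls back to the stale copy when it is offline. */
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    struct cert {                       /* the stale, static container      */
        char   subject[32];
        char   attribute[32];           /* the certified information        */
        time_t issued, expires;         /* the armoring's validity window   */
        /* real armoring would carry the issuer's digital signature here    */
    };

    /* stand-in for a near-time query of the authoritative database;
     * returns 0 on success, -1 if we are offline (the original target case) */
    static int authoritative_lookup(const char *subject, char *out, int online)
    {
        if (!online)
            return -1;
        snprintf(out, 32, "current-data-for-%s", subject);
        return 0;
    }

    static int get_attribute(const struct cert *c, char *out, int online)
    {
        if (authoritative_lookup(c->subject, out, online) == 0)
            return 0;                   /* fresh information wins            */
        if (time(NULL) > c->expires)
            return -1;                  /* stale copy has become useless     */
        strncpy(out, c->attribute, 32); /* make do with the cached copy      */
        return 0;
    }

    int main(void)
    {
        struct cert c = { "alice", "attribute-as-of-issuance", 0, 0 };
        c.issued  = time(NULL) - 3600;
        c.expires = time(NULL) + 3600;

        char buf[32];
        if (get_attribute(&c, buf, 1) == 0)  /* online: ignores the cert copy */
            printf("online : %s\n", buf);
        if (get_attribute(&c, buf, 0) == 0)  /* offline: falls back to cert   */
            printf("offline: %s\n", buf);
        return 0;
    }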

to a large extent the original target market segment for armored, stale, static digital certificates (an offline world) has somewhat disappeared with the ubiquitous penetration of the internet, aided by various wireless technologies.

given a choice between having near-time access to the original information ... vis-a-vis having to make do with stale, static digital certificate copies ... all other things being equal ... most businesses would find near-time access to the original information of much better value than stale, static digital certificate copies. the possible exceptions are 1) the dwindling situation involving the original target market for digital certificates (an offline environment that has no recourse to the actual information) or 2) no-value operations that can't afford the rapidly decreasing costs of having near-time access to the real information.

restricting digital certificates to the no-value market segment can make it difficult to justify the cost of a high-integrity infrastructure supporting the certification and operation of a PKI (people using digital certificates for no-value operations aren't likely to be willing to pay a lot for the certificates supporting those operations).

furthermore, once there is any significant value involved ... near-time access to the actual information is easily justified (as opposed to relying on stale, static digital certificate copies of the information).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mozilla v Firefox

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mozilla v Firefox
Newsgroups: netscape.public.mozilla.general,alt.folklore.computers
Date: Mon, 04 Apr 2005 21:54:05 -0600
Ron Hunter writes:
Sounds more like an excuse for being too lazy to press the shift key to me. Now I have a lot of trouble with the shift key on computers because I learned to type before electric typewriters, and pressing the shift key was a 'ballistics' exercise. You stabbed the key, hard, and then pressed the character key so that when it reached the platen, the carriage had been lifted to the correct height to make the capital letter appear. So, I still expect the shift key to be 'ballistic', which is no longer is.

in my youth, i taught myself how to type on an old resurrected 1930s-era typewriter that had seen better days. it didn't have a particularly hard shift. the hardest time i had with a shift key was on a tty33.

some people from cambridge brought a copy of cp67/cms out to the university to install the last week of jan68.
https://www.garlic.com/~lynn/subtopic.html#545tech

cp67 had 2741 and 1052 terminal support. the university was getting tty33s and needed ascii/tty support. the 2741/1052 support was sort of interesting ... it attempted to dynamically determine the type of terminal on each line/port and use the 2702 sad command to associate the appropriate line-scanner with that port.

i thought i was going to be smart and extend the code to also dynamically determine tty (as part of providing generalized tty/ascii terminal support). i actually wanted to have a single dial-up number for all terminals with a pool of ports. in early testing it all seemed to work, but then the local ibm engineer pointed out that some short-cuts had been made in the 2702 ... and while any line-scanner could be associated with any port, the 2702 had specific oscillators hardwired to specific ports (fixing each port's baud rate). it wasn't an issue for 2741 and 1052 support since they operated at the same baud rate ... but it was a problem for tty, which used a different baud rate.

this sort of prompted the university to kick off a project to build its own terminal control unit: basically reverse engineer the ibm channel interface and build our own channel board, which was installed in an interdata/3 minicomputer ... programmed to emulate a mainframe controller. one of the things done in the interdata was a high-rate strobe of each port to allow dynamic determination of terminal baud rate. later somebody wrote this up, making the claim that four of us spawned the mainframe plug-compatible controller business.
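
the general idea of that baud-rate determination, as a rough sketch (the strobe rate and candidate rates here are assumptions, not what the interdata/3 firmware actually did): sample the receive line at a fixed high rate, time the width of the start bit, and snap to the nearest candidate rate:

    /* strobe the line at a fixed high rate, time the start bit, snap to
     * the nearest candidate baud rate -- purely illustrative             */
    #include <stdio.h>
    #include <math.h>

    #define STROBE_HZ 38400.0            /* assumed sampling rate         */

    static const double candidates[] = { 110.0, 134.5, 300.0, 1200.0 };

    /* samples: 1 = mark/idle, 0 = space; returns a guessed baud rate     */
    static double guess_baud(const int *samples, int n)
    {
        int i = 0, width = 0;

        while (i < n && samples[i] == 1)   /* wait for the start bit      */
            i++;
        while (i < n && samples[i] == 0) { /* measure its width in ticks  */
            width++;
            i++;
        }
        if (width == 0)
            return 0.0;                    /* no start bit seen           */

        double rate = STROBE_HZ / width;   /* 1 / (bit time in seconds)   */

        double best = candidates[0];       /* snap to nearest candidate   */
        for (unsigned k = 1; k < sizeof candidates / sizeof candidates[0]; k++)
            if (fabs(candidates[k] - rate) < fabs(best - rate))
                best = candidates[k];
        return best;
    }

    int main(void)
    {
        /* fake a line trace: idle, then a ~110-baud start bit, which at
         * 38400 samples/sec is about 349 ticks wide                      */
        int line[1000], i;
        for (i = 0; i < 1000; i++)
            line[i] = (i >= 100 && i < 100 + 349) ? 0 : 1;
        printf("guessed %.1f baud\n", guess_baud(line, 1000));
        return 0;
    }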

minor past posts
https://www.garlic.com/~lynn/submain.html#360pcm

the pcm controller business is given as the justification for the (major) future system (FS) project
https://www.garlic.com/~lynn/submain.html#futuresys

which was eventually killed w/o ever being announced (most people didn't even know it existed).

there is also some assertion that the future system project, in turn, motivated the mainframe plug-compatible processor business (mainframe clones) ... minor reference on the subject:
https://www.garlic.com/~lynn/2005e.html#35 Thou shalt have no other gods before the ANSI C standard

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Graphics on the IBM 2260?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Graphics on the IBM 2260?
Newsgroups: alt.folklore.computers
Date: Mon, 04 Apr 2005 21:34:17 -0600
Eric Smith writes:
In addition to the 2250 Model 4 which attaches to an 1130, there were at least three different models for use on the System/360, and recently I saw a reference to one being built into the 360/9x.

As I understand it (from only a very cursory review of the documentation), the 2250 Model 1 had an integral control unit, which does not appear to be an 1130. The Model 2 and Model 3 used the 2840 Display Control Model 1 and 2, respectively. The 2840 could support two 2250s (optionally expandable to four).

The 2250 Model 4 was used with the 1130. The manual primarily describes operation of the 2250 on the 1130 locally. Brief mention is given to the idea of using the combination as a remote terminal for a 360, but no particular details of such an arrangement are provided.


the university i was at had a 2250m1 (direct channel attach) ... lincoln labs had done a cms fortran graphics library for the 2250m1. i took the lincoln labs library and integrated it into cms edit ... to create an early full-screen editor.

the science center had a 2250m4 (i.e. with an 1130). somebody at the science center had ported space wars to the 1130 and you could play two-person space war on the thing ... basically the keyboard was divided in half, with each player getting the various controls on their half of the keyboard.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mozilla v Firefox

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mozilla v Firefox
Newsgroups: netscape.public.mozilla.general,alt.folklore.computers
Date: Mon, 04 Apr 2005 22:00:13 -0600
or maybe it was a case of arrested development ... when i was first doing computers at the university it was with punch cards, first an 026 keypunch and later an 029 keypunch, and there wasn't any use of (lower) case.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mozilla v Firefox

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mozilla v Firefox
Newsgroups: netscape.public.mozilla.general,alt.folklore.computers
Date: Mon, 04 Apr 2005 22:16:00 -0600
Ron Hunter writes:
It may be that you are exceeding some internal limit requiring some error handling. I am sure that you can get the same effect you have with those 250 tabs some other way. You didn't mention your internet connection speed.

i started doing this when i had a 56kbit link ... and wanted to really mask url download latencies. the (then) 80 url/tabs could be batched while i was doing something else ... then i had 80 tabs that were already on the local machine ... i would click on interesting news stories, but they would be spawned into background tabs and i could continue processing the first 100 tabs (which had been downloaded as a batch while i was out getting coffee). when i had finished the first 100, some number of the background tab loads had completed ... and i didn't have to wait for them either. it was when the number of initial tabs got to 120 (and the process might peak at 250-260 tabs) that it seemed to be hitting some internal mozilla processing issue.

or this is like the doctor joke:
patient: it hurts when i do this
doctor: then stop doing it

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/


