List of Archived Posts

2005 Newsgroup Postings (07/05 - 07/27)

simple question about certificate chains
Creating certs for others (without their private keys)
IBM 5100 luggable computer with APL
IBM 5100 luggable computer with APL
[newbie] Ancient version of Unix under vm/370
Globus/GSI versus Kerberos
Creating certs for others (without their private keys)
[newbie] Ancient version of Unix under vm/370
IBM's mini computers--lack thereof
IBM's mini computers--lack thereof
Cost: Teletype 33 vs. IBM Selectric Terminal (2741?)
Question about authentication protocols
IBM's mini computers--lack thereof
IBM's mini computers--lack thereof
IBM's mini computers--lack thereof
Course 2821; how this will help for CISSP exam ?
CPU time and system load
Another - Another One Bites the Dust
S/MIME Certificates from External CA
Implementation of boolean types
simd for 390(or z990)?
Old Computers and Moisture don't mix - fairly OT
Old Computers and Moisture don't mix - fairly OT
Old Computers and Moisture don't mix - fairly OT
Old Computers and Moisture don't mix - fairly OT
IBM's mini computers--lack thereof
Code density and performance?
how do i encrypt outgoing email
IBM's mini computers--lack thereof
using ssl news servers
Massive i/o
using ssl news servers
IBM's mini computers--lack thereof
Massive i/o
IBM's mini computers--lack thereof
IBM's mini computers--lack thereof
Massive i/o
public key authentication
Massive i/o
Massive i/o
capacity of largest drive
IBM's mini computers--lack thereof
public key authentication
Code density and performance?
TLAs - was summit else entirely
Digital ID
IBM's mini computers--lack thereof
Code density and performance?
Code density and performance?
IBM's mini computers--lack thereof
Cluster computing drawbacks
Cluster computing drawbacks
Cluster computing drawbacks
Barcode Email
Barcode Email
54 Processors?

simple question about certificate chains

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: simple question about certificate chains
Newsgroups: alt.computer.security,comp.security.ssh,comp.lang.java.security
Date: 05 Jul 2005 11:28:49 -0600
"Richard E. Silverman" writes:
No, browsers generally do *not* do this, for several reasons. The most obvious is that, since the DNS is insecure, it would be easy to get a client to incorrectly accept a certificate by simply spoofing its DNS traffic. Browsers should (and generally do) match the certificate against what the user types, nothing else -- that's the point, to verify that you're connecting to the site you intended. It's also why aliases don't work -- for example, suppose a site's certificate says www.foo.com, but the server is reachable at foo.com as well. If you connect to https://foo.com/ the browser will give a warning; it's up to you to decide that a certificate with the name "www.foo.com" is just as acceptable in this case.

the foo.com vis-a-vis www.foo.com was an early problem ... and then they went to wildcard certificates and wildcard browser processing ... so that the browser would get a *.foo.com ssl domain name certificate and accept it for all URLs ending in foo.com.
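
as a rough sketch (python ... not how any particular browser actually implements it), the kind of host name matching involved, including the wildcard case, looks something like:

# match the host name the user typed against the name in an ssl domain
# name certificate, including a "*.foo.com" style wildcard
def name_matches(typed_host, cert_name):
    typed = typed_host.lower().rstrip(".")
    cert = cert_name.lower().rstrip(".")
    if cert == typed:
        return True                      # exact match, e.g. www.foo.com
    if cert.startswith("*."):
        suffix = cert[1:]                # "*.foo.com" -> ".foo.com"
        return typed.endswith(suffix)    # www.foo.com, shop.foo.com, ...
    return False

print(name_matches("foo.com", "www.foo.com"))     # False ... browser warning
print(name_matches("shop.foo.com", "*.foo.com"))  # True with a wildcard cert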

slightly related posting in sci.crypt
https://www.garlic.com/~lynn/2005l.html#19

what you typed in is matched against the site you are dealing with.

the original major application was e-commerce
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

the problem was that a lot of the e-commerce sites found that SSL reduced their capacity by 80-90 percent. as a result, most sites went to not invoking https/ssl until you hit the checkout/pay button.

the vulnerability is that one of the SSL objectives was a countermeasure against man-in-the-middle &/or impersonation attacks. if you happen to be dealing with a bogus site w/o SSL (because nobody uses SSL until the checkout/pay phase) ... and then you get to the checkout/pay button ... it is highly probable that any bogus site will supply a URL as part of the pay button (which you haven't typed in) that corresponds to some valid SSL domain name certificate that they actually possess.

there is actually a funny catch-22.

one of the SSL justifications was as a countermeasure to perceived integrity problems in the domain name infrastructure.
https://www.garlic.com/~lynn/subpubkey.html#sslcert

however, when somebody applies for an SSL domain name server certificate, the certification authority must check with the authoritative agency for domain name ownership. this turns out to be the same domain name infrastructure that has the integrity issues giving rise to the requirement for SSL domain name certificates.

basically the certification authority asks for a lot of identification information so that it can go through the complex, expensive and error-prone process of matching the applicant's supplied identification information with the identification information on file for the domain name owner at the domain name infrastructure.

so somewhat with the backing of the certification authority industry, there has been a proposal to improve the integrity of the domain name infrastructure by having domain name owners register a public key with the domain name infrastructure. An objective is improving the integrity of the domain name infrastructure by having all communication from the domain name owner be digitally signed ... which the domain name infrastructure can authenticate with the on-file public key (having all communication authenticated improves the integrity of the domain name infrastructure, which in turn improves the integrity of the checking done by the certification authorities).

As an aside observation ... this on-file public key results in a certificate-less digital signature operation.
https://www.garlic.com/~lynn/subpubkey.html#certless

The other issue for the certification authority industry is that they can now require that SSL domain name certificate applications also be digitally signed. Then the certification authority can retrieve the on-file public key from the domain name infrastructure to authenticate the digital signature on the application. This turns an expensive, complex, and error-prone identification process into a much less expensive, straightforward and more reliable authentication process.
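
a minimal sketch (python, assuming the pyca/cryptography package; the on-file registry dict here is purely hypothetical) of that authentication-only check:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# domain name infrastructure: public key on file for the domain name owner
owner_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
onfile = {"example.com": owner_key.public_key()}

# domain name owner digitally signs the SSL certificate application
application = b"SSL domain name certificate application for example.com"
signature = owner_key.sign(application, padding.PKCS1v15(), hashes.SHA256())

# certification authority: authentication, not identification ... just
# verify the digital signature against the on-file public key
try:
    onfile["example.com"].verify(signature, application,
                                 padding.PKCS1v15(), hashes.SHA256())
    print("application authenticated with on-file public key")
except InvalidSignature:
    print("reject application")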

The problem (for the CA industry) is that the real trust root for the SSL domain name certificates is the integrity of the ownership information on file with the domain name infrastructure. Improving this trust root, in support of certifying SSL domain name certificates ... also improves the overall integrity of the domain name infrastructure. This, in turn, minimizes the original integrity concerns which gave rise to needing SSL domain name certificates.

The other problem (for the CA industry) is that if they can retrieve on-file trusted public keys from the domain name infrastructure, it is possible that everybody in the world could also retrieve on-file public keys from the domain name infrastructure. One could then imagine a variation on the SSL protocol ... where rather than using SSL domain name certificates (for obtaining a website's public key), the digital certificate was eliminated and the website's public key was retrieved directly.

In fact, a highly optimized transaction might obtain the website ip-address piggybacked with the website public key in a single message exchange (eliminating much of the SSL protocol chatter gorp that goes on).
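
a purely hypothetical sketch (python) of such a single-lookup exchange ... the resolve function and its record store are made up for illustration, not any real DNS API:

# one lookup returns both the website ip-address and its on-file public
# key, so no digital certificate ever has to be exchanged or validated
RECORDS = {
    "www.example.com": {"ip": "192.0.2.10", "public_key": "<on-file public key>"},
}

def resolve(host):
    rec = RECORDS[host]
    return rec["ip"], rec["public_key"]

ip, key = resolve("www.example.com")
# the client can now key an encrypted session off "key" directly,
# with no certificate exchange and none of the extra protocol chatter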

In that sense, such an SSL implementation using on-file public keys starts to look a lot more like an SSH implementation that uses on-file public keys (eliminating the need for digital certificates, PKIs and certification authorities altogether).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Creating certs for others (without their private keys)

From: <lynn@garlic.com>
Newsgroups: mailing.openssl.users
Subject: Re: Creating certs for others (without their private keys)
Date: Tue, 05 Jul 2005 15:26:37 -0700
Uri wrote:
Darn, I thought I explained the problem: openssl "req" seems to require private key of the cert requestor, which defeats the whole idea of PKI. Here's the excerpt of the HOWTO you're referring me to. It is not helpful, sorry - for the above reason (private key necessary). The certificate request is created like this:

openssl req -new -key privkey.pem -out cert.csr


typically the business practices of a certification authority (CA) for issuing a digital certificate (some characteristics bound to a public key and digitally signed) requires that the CA establish that the requesting party has possession of the private key that corresponds to the public key in the application.

basically a digital signature is the private key encoding of a hash of some message or data. the recipient rehashes the same message/data, decodes the digital signature with the indicated public key (giving the original hash) and compares the two hashes. if they are equal, the recipient can assume

1) the message hasn't been modified since signing
2) something you have authentication

aka, in 3-factor authentication
https://www.garlic.com/~lynn/subintegrity.html#3factor
something you have
something you know
something you are


... where the digital signature verification implies that the originator had access and use of the corresponding private key (w/o having to supply the private key).
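
a minimal sketch (python, assuming the pyca/cryptography package) of that hash/sign/verify sequence:

import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa, utils

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# originator: encode a hash of the message with the private key
message = b"some message or data"
digest = hashlib.sha256(message).digest()
signature = private_key.sign(digest, padding.PKCS1v15(),
                             utils.Prehashed(hashes.SHA256()))

# recipient: rehash the message and verify the digital signature
redigest = hashlib.sha256(message).digest()
try:
    public_key.verify(signature, redigest, padding.PKCS1v15(),
                      utils.Prehashed(hashes.SHA256()))
    print("1) message unmodified, 2) something you have authentication")
except InvalidSignature:
    print("digital signature does not verify")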

the technology is asymmetrical cryptography where there is a pair of keys, and what one key encodes, the other key decodes.

there is a business process called public key where one of the key pair is labeled public and made freely available. the other of the key pair is labeled private, kept confidential and never divulged.

acquiring a certificate frequently involves filling out a form that looks similar to a real certificate, digitally signing it (with your private key) and sending it off. the certification authority then verifies the digital signature with the public key included in the application (this should at least indicate that the applicant has the corresponding private key). the certification authority then verifies (certifies) the provided information ... generates a digital certificate (possibly in an identical format to the application) but digitally signs it with their private key.
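
a minimal sketch (python, assuming the pyca/cryptography package) of the same idea as the quoted openssl req command ... an application digitally signed with the requestor's private key, which the certification authority can check before certifying anything:

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

# requestor: generate a key pair and sign the application (CSR) with
# the private key ... the private key itself is never sent anywhere
requestor_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (x509.CertificateSigningRequestBuilder()
       .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME,
                                                    "www.example.com")]))
       .sign(requestor_key, hashes.SHA256()))

# certification authority: the self-signature on the application shows
# the applicant has the private key matching the included public key
print(csr.is_signature_valid)   # True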

now once the key owner has the digital certificate, the owner (and/or others) may be able to distribute the digital certificate all over the world.

one of the typical problems with the PKI & certification authority business model .... is that the party benefiting is the relying party who uses the digital certificate to obtain a public key to verify somebody's digital signature. Typically the person purchasing/buying the digital certificate from a certification authority is the key owner ... not the benefiting/relying party.

in typical business process operation, the benefitting/relying party is buying/paying for the certified information ... which tends to form a contractual relationship between the party responsible for the certification and the party benefiting from the certification. This has led to situations like the federal GSA PKI project .... where GSA has direct contracts with the certification authorities ... creating some sort of legal obligation between the federal gov. (as a relying/benefiting party) and the certification authorities (even when the certification authorities are selling their digital certificates to the key owners ... not to the benefiting/relying parties).

note that there is no actual requirement that a certification authority have evidence of the key owner's possession of the private key (as implied by verifying a digital signature) .... it is just part of some certification authorities' business practices statements.

There was a possible opening here in the mid-90s. Certification authorities in the early 90s had been looking at issuing x.509 identity certificates grossly overloaded with personal information. One of the things defined for these digital certificates was a bit called a non-repudiation bit. In theory, this magically turned the digital signature on any document (when it could be verified with a public key from a non-repudiation digital certificate) into a human signature.

This is possibly because of some semantic ambiguity, since human signature and digital signature both contain the word signature. The definition of a digital signature is that the associated verification can imply

• message hasn't been modified
• something you have authentication

while a human signature typically implies read, understood, agrees, approves, and/or authorizes. The supposed logic was if a relying party could produce a message that had a digital signature and a digital certificate w/o the non-repudiation bit ... then it was a pure authentication operation. However, if the relying party could find and produce a digital certificate for the same public key that happened to have the non-repudiation bit turned on, then the digital signature took on the additional magical properties of read, understood, agrees, approves, and/or authorizes the contents of the message.

this logic somewhat gave rise to my observation about the dual-use attack on PKI infrastructures. A lot of public key authentication operations involve the server sending some random data ... and the recipient digitally signing the random data (w/o ever looking at the contents) and returning the digital signature. The server can then authenticate the entity by verifying the digital signature. However, there is no implication that the entity has read, understood, agrees, approves, and/or authorizes the random data.

An attacker just sends some valid document in lieu of random data and is also able to produce any valid digital certificate for the associated public key that happens to have the non-repudiation bit set (whether or not the signing entity happened to include such a certificate for that particular operation or not). The victim digitally signs the supposed random data (w/o ever looking at it) and returns the digital signature (along with a digital certificate w/o the non-repudiation bit set). The attacker however, now has a valid document, a valid digital signature and a valid digital certificate with the non-repudiation bit set (obtained from any source).
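
a minimal sketch (python, reusing the pyca/cryptography calls from the earlier sketches) of that dual-use attack ... the point being that the victim's authentication code signs whatever arrives w/o looking at it:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

victim_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def blind_authenticate(challenge):
    # victim's authentication code ... never inspects the challenge
    return victim_key.sign(challenge, padding.PKCS1v15(), hashes.SHA256())

# attacker substitutes a valid document for the "random" challenge data
document = b"I agree to pay the attacker $1,000,000"
signature = blind_authenticate(document)

# the attacker now holds (document, signature); paired with ANY digital
# certificate for the victim's public key that has the non-repudiation
# bit set, it gets claimed as read/understood/agreed/approved
victim_key.public_key().verify(signature, document,
                               padding.PKCS1v15(), hashes.SHA256())
print("signature verifies ... but the victim never read the document")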

Somewhat because of the pure fantasy that being able to produce any valid digital certificate (for a valid public key that correctly validates the associated digital signature), with the non-repudiation bit set, magically guarantees read, understood, agrees, approves, and/or authorizes .... the standards definition for the non-repudiation bit has since been significantly deprecated.

slightly related recent posts about SSL domain name certificates
https://www.garlic.com/~lynn/2005m.html#0

misc. past posts on dual-use attack
https://www.garlic.com/~lynn/aadsm17.htm#57 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm17.htm#59 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#1 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#2 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#3 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#56 two-factor authentication problems
https://www.garlic.com/~lynn/aadsm19.htm#41 massive data theft at MasterCard processor
https://www.garlic.com/~lynn/aadsm19.htm#43 massive data theft at MasterCard processor
https://www.garlic.com/~lynn/2004i.html#17 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004i.html#21 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2005.html#14 Using smart cards for signing and authorization in applets
https://www.garlic.com/~lynn/2005b.html#56 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005e.html#31 Public/Private key pair protection on Windows
https://www.garlic.com/~lynn/2005g.html#46 Maximum RAM and ROM for smartcards

IBM 5100 luggable computer with APL

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 5100 luggable computer with APL
Newsgroups: alt.folklore.computers
Date: 05 Jul 2005 17:46:32 -0600
"Phil Weldon" writes:
Anyone remember, use, or program an IBM 5100 luggable computer?

palo alto science center ...

i was at cambridge (science center)
https://www.garlic.com/~lynn/subtopic.html#545tech

and then sjr, but did periodically get to work with some palo alto people

here is specific reference:
http://www.cedmagic.com/history/ibm-pc-5100.html

i did do some work with apl &/or hone ... hone was an internal cp/cms time-sharing service that provided world-wide support to all the field, marketing and sales people ... primarily apl applications on cms. starting sometime in the early to mid 70s, salesmen couldn't submit a mainframe-related order w/o it first having been run thru a HONE application. for a time, hone had a datacenter across the back parking lot from pasc. misc. apl and/or hone posts:
https://www.garlic.com/~lynn/subtopic.html#hone

misc past 5100 posts
https://www.garlic.com/~lynn/2000d.html#15 APL version in IBM 5100 (Was: Resurrecting the IBM 1130)
https://www.garlic.com/~lynn/2000g.html#24 A question for you old guys -- IBM 1130 information
https://www.garlic.com/~lynn/2000g.html#46 A new "Remember when?" period happening right now
https://www.garlic.com/~lynn/2000.html#69 APL on PalmOS ???
https://www.garlic.com/~lynn/2000.html#70 APL on PalmOS ???
https://www.garlic.com/~lynn/2001b.html#45 First OS?
https://www.garlic.com/~lynn/2001b.html#56 Why SMP at all anymore?
https://www.garlic.com/~lynn/2001b.html#71 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2002b.html#39 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002b.html#43 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002b.html#45 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002b.html#47 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2003b.html#42 VMFPLC2 tape format
https://www.garlic.com/~lynn/2003i.html#79 IBM 5100
https://www.garlic.com/~lynn/2003i.html#82 IBM 5100
https://www.garlic.com/~lynn/2003i.html#84 IBM 5100
https://www.garlic.com/~lynn/2003j.html#0 IBM 5100
https://www.garlic.com/~lynn/2003n.html#6 The IBM 5100 and John Titor
https://www.garlic.com/~lynn/2003n.html#8 The IBM 5100 and John Titor
https://www.garlic.com/~lynn/2004c.html#8 IBM operating systems and APL
https://www.garlic.com/~lynn/2004l.html#32 Shipwrecks
https://www.garlic.com/~lynn/2005g.html#12 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005.html#44 John Titor was right? IBM 5100

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 5100 luggable computer with APL

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 5100 luggable computer with APL
Newsgroups: alt.folklore.computers
Date: 06 Jul 2005 11:48:30 -0600
CBFalconer writes:
The real introduction of luggability awaited the Osborne, improved in the Kaypro. The other thing that Osborne pioneered was the bundled software package.

i thought that bundled software came much earlier ... the big change was, at gov. & legal prompting, the great unbundling announcement on 6/23/69 (and starting to charge separately for software)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[newbie] Ancient version of Unix under vm/370

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [newbie] Ancient version of Unix under vm/370
Newsgroups: alt.folklore.computers
Date: 07 Jul 2005 11:40:04 -0600
Renaissance writes:

http://cm.bell-labs.com/cm/cs/who/dmr/otherports/ibm.pdf

Is an ancient version of unix available (licensed for free hobbyist use) that ran as an operating system on top of a vm/370 virtual machine partition (obviously using the hercules emulator)?


since vm370 just provided a virtual machine .... there should be little difference between a real 370 machine, a vm370-based 370 virtual machine or a hercules (or other) 370 virtual machine.

there was a port of unix done to a stripped down tss/370 kernel (it wasn't a virtual machine interface; the interface was to higher level tss/370 kernel functions) done specifically for at&t.

some of the other 370 ports (gold/au, aix/370, etc.) ... tended to be deployed under vm370 ... not so much because of any lack in the straight-line 370 hardware support but because most shops were expecting normal vendor RAS support for their mainframes. the vendor RAS support was in large part based on extensive error recording and logging support. It was available w/vm370 and so guest operating systems could get by w/o having to also implement all the extensive hardware error recording and logging (aka a typical unix port could be mapped to the 370 hardware facilities ... but it would have been a much larger undertaking to add in all the RAS stuff ... than the straight-forward port had been).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Globus/GSI versus Kerberos

From: <lynn@garlic.com>
Newsgroups: comp.protocols.kerberos
Subject: Re: Globus/GSI versus Kerberos
Date: Thu, 07 Jul 2005 11:19:07 -0700
Ken Hornstein wrote:
When I cornered one of the Globus guys and asked him point-blank the same question, he told me that in his opinion the decision to do PKI was really driven politically from the top, and he thought Kerberos made a LOT more sense.

the original pk-init draft for kerberos specified certificate-less operation
https://www.garlic.com/~lynn/subpubkey.html#certless

you basically registered a public key with kerberos in lieu of a password and then used digital signature authentication with the onfile public key (no PKI and/or digital certificates required).
https://www.garlic.com/~lynn/subpubkey.html#kerberos

this was basically an authentication technology upgrade w/o having to introduce any new business processes and extraneous infrastructure operations.
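
a minimal sketch (python, assuming the pyca/cryptography package; not actual pk-init code) of registering a public key in lieu of a password and authenticating against the on-file key:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
principals = {"lynn": user_key.public_key()}   # public key where the password was

def authenticate(name, request, signature):
    # no PKI, no digital certificate ... just the on-file public key
    try:
        principals[name].verify(signature, request,
                                padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

request = b"login request for lynn"            # stand-in for the real exchange
print(authenticate("lynn", request,
                   user_key.sign(request, padding.PKCS1v15(), hashes.SHA256())))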

it was later that certificate-based operation was added to the kerberos pk-init draft.

i gave a talk on this at the global grid forum #11
https://www.garlic.com/~lynn/index.html#presentation

at the meeting there was some debate on kerberos vis-a-vis radius as an authentication & authorization business process infrastructure.

note that in addition to there having been a non-PKI, certificate-less authentication upgrade for kerberos (using onfile public keys), there has been a similar proposal for RADIUS; basically registering public keys in lieu of passwords and performing digital signature authentication with the onfile public keys.
https://www.garlic.com/~lynn/subpubkey.html#radius

A straightforward upgrade of the authentication technology w/o having to layer on a separate cumbersome PKI business process.

Creating certs for others (without their private keys)

From: lynn@garlic.com
Newsgroups: mailing.openssl.users
Subject: Re: Creating certs for others (without their private keys)
Date: Thu, 07 Jul 2005 11:35:27 -0700
lynn@garlic.com wrote:
An attacker just sends some valid document in lieu of random data and is also able to produce any valid digital certificate for the associated public key that happens to have the non-repudiation bit set (whether or not the signing entity happened to include such a certificate for that particular operation or not). The attacker now has a valid document, a valid digital signature and a valid digital certificate with the non-repudiation bit set.

Somewhat because of the pure fantasy that being able to produce any valid digital certificate (for a valid public key that correctly validates the associated digital signature) with the non-repudiation bit set magically guarantees read, understood, agrees, approves, and/or authorizes .... the standards definition for the non-repudiation bit has since been significantly deprecated.


i.e.
https://www.garlic.com/~lynn/2005m.html#1

somewhat at issue is that the standard PKI protocol involves the originator digitally signing a message (with their private key) and then packaging the three pieces:
• message
• digital signature
• digital certificate


in the basic authentication scenarios ... the originator never even examines the contents of the message that is being signed (precluding any sense of human signature, i.e. read, understood, agrees, approves, and/or authorizes).

the other part of the problem (before the non-repudiation bit was severely deprecated in the PKI certificate definition) is that there is no validation of which certificate the originator actually appended.

even if the originator had appended a digital certificate w/o the non-repudiation bit set ... they had no later proof as to what certificate they had appended. all the attacker needs to do is obtain, from anyplace in the world, a digital certificate for the same public key that happens to have the non-repudiation bit set.

in some of the pki-oriented payment protocols from the mid-90s ... there were even suggestions that if the relying party (or attacker, or say a merchant in an e-commerce scenario) could produce any digital certificate for the associated public key (effectively from any source) with the non-repudiation bit set ... then the burden of proof (in any dispute) would be shifted from the merchant to the consumer.

[newbie] Ancient version of Unix under vm/370

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [newbie] Ancient version of Unix under vm/370
Newsgroups: alt.folklore.computers
Date: 07 Jul 2005 18:33:26 -0600
Rich Alderson writes:
I wonder if the OP is looking for the Amdahl UTS port, which IIRC did run on top of VM. (Vague memory from a presentation to the systems staff at Chicago in 1982-1984 timeframe.)

why was it called "gold" before announcement ... hint, what is the chemical symbol for gold?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM's mini computers--lack thereof

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's mini computers--lack thereof
Newsgroups: alt.folklore.computers
Date: 08 Jul 2005 10:13:59 -0600
hancock4 writes:
In the telecom group we were talking about IBM's weak offerings in the field of mini-computers compared to PDP/DEC machines and others of the late 1960s and early 1970s.

It seems all IBM had to offer in that range was the System/3 and 1130, both of which were primarily batch/punch card machines, and the S/3 was business oriented. I believe the PDP/DEC, HP, Data General, Wang, etc. machines of that era were more real-time and terminal oriented which made them easier to use and more popular.

Could anyone share some observations as to how IBM got left out of that market? My only guess is that it was swamped getting S/360+S/370 finished and working and then to make enough of them to meet high demand. I also sense it saw itself as a business machines maker and not that interested in the science market, esp with small machines that would have a small markup.


there was the 1800 and system/7 in the instrumentation world.

prior to the ibm/pc, the instrumentation division did turn out a 68k-based machine.

in the early/mid-70s, peachtree became the s/1 and found wide deployment in instrumentation, control systems, as well as the telecom world.

there was an effort from some sectors to try and get "peachtree" to be the core of the mainframe 3705 telecommunication unit (rather than some flavor of a UC ... universal controller microprocessor).

there was the joke about the (os/360) mft people from kingston moving to boca and trying to re-invent mft for the (16bit) s/1 (called rps) .... supposedly some of them went on to work on os/2.

the rps alternative was edx that had been done by some physicists at sjr (for lab instrumentation).

i don't have any ship numbers for these ... but as i've noted in the past with regard to time-sharing systems
https://www.garlic.com/~lynn/submain.html#timeshare

cp67 and vm370 saw much wider deployment numbers than many other time-sharing systems that show up widely in the academic literature. the possible conjecture is that while cp67 & vm370 had much wider deployment than better known systems from the academic literature ... the cp67 & vm370 deployments tended to be dwarfed by the mainframe batch system deployment numbers. however, the claim is that vm370 on 4341 saw wider deployment than equivalent vax machines (it was just that the ibm press was dominated by the batch system activity).

misc. past s/1, peachtree, edx, etc posts
https://www.garlic.com/~lynn/99.html#63 System/1 ?
https://www.garlic.com/~lynn/99.html#64 Old naked woman ASCII art
https://www.garlic.com/~lynn/99.html#67 System/1 ?
https://www.garlic.com/~lynn/99.html#106 IBM Mainframe Model Numbers--then and now?
https://www.garlic.com/~lynn/99.html#239 IBM UC info
https://www.garlic.com/~lynn/2000b.html#66 oddly portable machines
https://www.garlic.com/~lynn/2000b.html#79 "Database" term ok for plain files?
https://www.garlic.com/~lynn/2000b.html#87 Motorola/Intel Wars
https://www.garlic.com/~lynn/2000c.html#43 Any Series/1 fans?
https://www.garlic.com/~lynn/2000c.html#51 WHAT IS A MAINFRAME???
https://www.garlic.com/~lynn/2000.html#64 distributed locking patents
https://www.garlic.com/~lynn/2000.html#71 Mainframe operating systems
https://www.garlic.com/~lynn/2001b.html#75 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001f.html#30 IBM's "VM for the PC" c.1984??
https://www.garlic.com/~lynn/2001.html#4 Sv: First video terminal?
https://www.garlic.com/~lynn/2001.html#62 California DMV
https://www.garlic.com/~lynn/2001i.html#31 3745 and SNI
https://www.garlic.com/~lynn/2001n.html#9 NCP
https://www.garlic.com/~lynn/2001n.html#52 9-track tapes (by the armful)
https://www.garlic.com/~lynn/2002c.html#42 Beginning of the end for SNA?
https://www.garlic.com/~lynn/2002g.html#2 Computers in Science Fiction
https://www.garlic.com/~lynn/2002h.html#54 Bettman Archive in Trouble
https://www.garlic.com/~lynn/2002h.html#65 Bettman Archive in Trouble
https://www.garlic.com/~lynn/2002.html#45 VM and/or Linux under OS/390?????
https://www.garlic.com/~lynn/2002j.html#0 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002k.html#20 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002l.html#16 Large Banking is the only chance for Mainframe
https://www.garlic.com/~lynn/2002n.html#32 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#67 Mainframe Spreadsheets - 1980's History
https://www.garlic.com/~lynn/2002q.html#53 MVS History
https://www.garlic.com/~lynn/2003b.html#5 Card Columns
https://www.garlic.com/~lynn/2003b.html#11 Card Columns
https://www.garlic.com/~lynn/2003b.html#16 3745 & NCP Withdrawl?
https://www.garlic.com/~lynn/2003c.html#23 difference between itanium and alpha
https://www.garlic.com/~lynn/2003c.html#28 difference between itanium and alpha
https://www.garlic.com/~lynn/2003c.html#76 COMTEN- IBM networking boxes
https://www.garlic.com/~lynn/2003d.html#13 COMTEN- IBM networking boxes
https://www.garlic.com/~lynn/2003d.html#54 Filesystems
https://www.garlic.com/~lynn/2003d.html#64 IBM was: VAX again: unix
https://www.garlic.com/~lynn/2003e.html#4 cp/67 35th anniversary
https://www.garlic.com/~lynn/2003h.html#52 Question about Unix "heritage"
https://www.garlic.com/~lynn/2003.html#67 3745 & NCP Withdrawl?
https://www.garlic.com/~lynn/2003j.html#2 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2003k.html#9 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003m.html#28 SR 15,15
https://www.garlic.com/~lynn/2003o.html#16 When nerds were nerds
https://www.garlic.com/~lynn/2004g.html#37 network history
https://www.garlic.com/~lynn/2004p.html#27 IBM 3705 and UC.5
https://www.garlic.com/~lynn/2005f.html#56 1401-S, 1470 "last gasp" computers?
https://www.garlic.com/~lynn/2005.html#17 Amusing acronym

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM's mini computers--lack thereof

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's mini computers--lack thereof
Newsgroups: alt.folklore.computers
Date: 08 Jul 2005 10:30:46 -0600
Mike Ross writes:
I could be wrong, but I thought the main CPU was a larger (360/65? Larger still?) box, with multiple 360/50s as front-ends...

from melinda's source ...
http://www.leeandmelindavarian.com/Melinda/
http://www.leeandmelindavarian.com/Melinda#VMHist

there was the cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech

the story of trying to get a spare 360/50 to modify for virtual memory ... but all the 360/50s were going to the FAA and cambridge had to settle for a 360/40 to modify for virtual memory.

quote from melinda's paper
https://www.garlic.com/~lynn/2002b.html#7 Microcode?

random past 9020 postings:
https://www.garlic.com/~lynn/99.html#102 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
https://www.garlic.com/~lynn/99.html#103 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
https://www.garlic.com/~lynn/99.html#108 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
https://www.garlic.com/~lynn/2001e.html#13 High Level Language Systems was Re: computer books/authors (Re: FA:
https://www.garlic.com/~lynn/2001g.html#29 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001g.html#45 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001h.html#15 IBM 9020 FAA/ATC Systems from 1960's
https://www.garlic.com/~lynn/2001h.html#17 IBM 9020 FAA/ATC Systems from 1960's
https://www.garlic.com/~lynn/2001h.html#71 IBM 9020 FAA/ATC Systems from 1960's
https://www.garlic.com/~lynn/2001i.html#2 Most complex instructions (was Re: IBM 9020 FAA/ATC Systems from 1960's)
https://www.garlic.com/~lynn/2001i.html#3 Most complex instructions (was Re: IBM 9020 FAA/ATC Systems from 1960's)
https://www.garlic.com/~lynn/2001i.html#13 GETMAIN R/RU (was: An IEABRC Adventure)
https://www.garlic.com/~lynn/2001i.html#14 IBM 9020 FAA/ATC Systems from 1960's
https://www.garlic.com/~lynn/2001i.html#15 IBM 9020 FAA/ATC Systems from 1960's
https://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
https://www.garlic.com/~lynn/2001j.html#3 YKYGOW...
https://www.garlic.com/~lynn/2001j.html#13 Parity - why even or odd (was Re: Load Locked (was: IA64 running out of steam))
https://www.garlic.com/~lynn/2001j.html#48 Pentium 4 SMT "Hyperthreading"
https://www.garlic.com/~lynn/2001k.html#8 Minimalist design (was Re: Parity - why even or odd)
https://www.garlic.com/~lynn/2001k.html#65 SMP idea for the future
https://www.garlic.com/~lynn/2001l.html#5 mainframe question
https://www.garlic.com/~lynn/2001m.html#15 departmental servers
https://www.garlic.com/~lynn/2001m.html#17 3270 protocol
https://www.garlic.com/~lynn/2001m.html#55 TSS/360
https://www.garlic.com/~lynn/2001n.html#39 195 was: Computer Typesetting Was: Movies with source code
https://www.garlic.com/~lynn/2001n.html#53 A request for historical information for a computer education project
https://www.garlic.com/~lynn/2001n.html#62 The demise of compaq
https://www.garlic.com/~lynn/2002b.html#32 First DESKTOP Unix Box?
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002c.html#9 IBM Doesn't Make Small MP's Anymore
https://www.garlic.com/~lynn/2002d.html#7 IBM Mainframe at home
https://www.garlic.com/~lynn/2002d.html#8 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2002e.html#5 What goes into a 3090?
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#75 Computers in Science Fiction
https://www.garlic.com/~lynn/2002f.html#13 Hardware glitches, designed in and otherwise
https://www.garlic.com/~lynn/2002f.html#29 Computers in Science Fiction
https://www.garlic.com/~lynn/2002f.html#42 Blade architectures
https://www.garlic.com/~lynn/2002f.html#54 WATFOR's Silver Anniversary
https://www.garlic.com/~lynn/2002g.html#2 Computers in Science Fiction
https://www.garlic.com/~lynn/2002h.html#19 PowerPC Mainframe?
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002.html#14 index searching
https://www.garlic.com/~lynn/2002.html#52 Microcode?
https://www.garlic.com/~lynn/2002i.html#9 More about SUN and CICS
https://www.garlic.com/~lynn/2002i.html#79 Fw: HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002l.html#10 What is microcode?
https://www.garlic.com/~lynn/2002l.html#39 Moore law
https://www.garlic.com/~lynn/2002o.html#15 Home mainframes
https://www.garlic.com/~lynn/2002o.html#16 Home mainframes
https://www.garlic.com/~lynn/2002o.html#25 Early computer games
https://www.garlic.com/~lynn/2002o.html#28 TPF
https://www.garlic.com/~lynn/2002p.html#44 Linux paging
https://www.garlic.com/~lynn/2002p.html#59 AMP vs SMP
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003c.html#30 difference between itanium and alpha
https://www.garlic.com/~lynn/2003d.html#7 Low-end processors (again)
https://www.garlic.com/~lynn/2003d.html#67 unix
https://www.garlic.com/~lynn/2003f.html#20 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#50 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#52 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003f.html#56 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003.html#14 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#48 InfiniBand Group Sharply, Evenly Divided
https://www.garlic.com/~lynn/2003i.html#5 Name for this early transistor package?
https://www.garlic.com/~lynn/2003j.html#58 atomic memory-operation question
https://www.garlic.com/~lynn/2003k.html#48 Who said DAT?
https://www.garlic.com/~lynn/2003m.html#4 IBM Manuals from the 1940's and 1950's
https://www.garlic.com/~lynn/2003m.html#37 S/360 undocumented instructions?
https://www.garlic.com/~lynn/2003m.html#42 S/360 undocumented instructions?
https://www.garlic.com/~lynn/2003n.html#47 What makes a mainframe a mainframe?
https://www.garlic.com/~lynn/2003o.html#52 Virtual Machine Concept
https://www.garlic.com/~lynn/2003p.html#3 Hyperthreading vs. SMP
https://www.garlic.com/~lynn/2003p.html#40 virtual-machine theory
https://www.garlic.com/~lynn/2004c.html#9 TSS/370 binary distribution now available
https://www.garlic.com/~lynn/2004c.html#35 Computer-oriented license plates
https://www.garlic.com/~lynn/2004c.html#61 IBM 360 memory
https://www.garlic.com/~lynn/2004d.html#3 IBM 360 memory
https://www.garlic.com/~lynn/2004d.html#12 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004e.html#44 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004.html#1 Saturation Design Point
https://www.garlic.com/~lynn/2004.html#7 Dyadic
https://www.garlic.com/~lynn/2004l.html#6 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004l.html#10 Complex Instructions
https://www.garlic.com/~lynn/2004l.html#42 Acient FAA computers???
https://www.garlic.com/~lynn/2004n.html#45 Shipwrecks
https://www.garlic.com/~lynn/2004o.html#20 RISCs too close to hardware?
https://www.garlic.com/~lynn/2005c.html#35 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#51 [Lit.] Buffer overruns

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Cost: Teletype 33 vs. IBM Selectric Terminal (2741?)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Cost: Teletype 33 vs. IBM Selectric Terminal (2741?)
Newsgroups: alt.folklore.computers
Date: 08 Jul 2005 10:39:52 -0600
hancock4 writes:
The other was the IBM Selectric Terminal, IIRC the 2741. This ran faster at 15 chars/sec, produced a much nicer image, could be used on specialty machines like APL, and was a lot quieter. However, AFAIK it had no paper tape or other offline storage capability. I believe it could only be used on IBM systems. I believe it was more used as a specific business terminal--that is, for a specific application (ie a reservations desk) as opposed to general purpose work. It could also be used offline as a standard office typewriter, a nice function. I have no idea what it cost to rent or buy.

possibly the 2740 had a paper tape option? ... the 1052 did have a papertape option. 1052, 2740, and 2741 all used the selectric typeball mechanism. i have vague memories of being told the 1052 was designed for a heavier duty cycle than the 2741.

quicky web search for 1052, 2740, 2741, ...
https://web.archive.org/web/20060325095540/http://www.yelavich.com/history/ev197001.htm
http://www.beagle-ears.com/lars/engineer/comphist/ibm_nos.htm
http://portal.acm.org/citation.cfm?id=356563

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Question about authentication protocols

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Question about authentication protocols
Newsgroups: sci.crypt
Date: 08 Jul 2005 12:25:30 -0600
Peter Seibel writes:
On page 54 of Schneier's _Applied Cryptography_ he presents a naive authentication protocol based on public key crypto. It goes like this:

(1) Host sends Alice a random string.

(2) Alice encrypts the string with her private key and sends it back to the host, along with her name.

(3) Host looks up Alice's public key in its database and decrypts the message using that public key.

(4) If the decrypted string matches what the host sent Alice in the first place, the host allows Alice access to the system.

He then points out that this protocol is problematic because Alice shouldn't be encrypting arbitrary strings with her private key lest she open herself up to various attacks which he describes in Section 19.3. He then gives an outline of more sophisticated protocols for proving identity that involve Alice performing various computations based on random numbers that she generates and her private key. (The actual protocols are described in Section 21.1.)

However in 19.3 when he discusses the attacks that are possible if Alice encrypts arbitrary strings with her private key he closes with this Moral: "Never use RSA to sign a random document presented to you by a stranger. Always use a one-way hash function first."

So why can't the naive protocol be fixed by simply following that advice and having Alice hash the random string and encrypt the hash? In other words, the patched protocol (with changes in ALL CAPS) goes like this:

(1) Host sends Alice a random string.

(2) Alice HASHES THE STRING and encrypts the HASH with her private key and sends it back to the host, along with her name.

(3) Host looks up Alice's public key in its database and decrypts the message using that public key.

(4) If the decrypted string matches THE HASH OF what the host sent Alice in the first place, the host allows Alice access to the system.


i think the examples were to take the student through the various thought processes.

so the standard digital signature definition is using the private key to encode a hash of the message. the recipient then calculates the hash of the string, decodes the digital signature with the public key and compares the two hashes. if they are equal, the recipient assumes

1) message hasn't been changed in transit
2) something you have authentication (aka originator has access and use of the corresponding "private" key).
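
a minimal sketch (python, assuming the pyca/cryptography package) of the patched protocol quoted above ... note that the sign() call here hashes the challenge with sha-256 before the private key operation, so the raw random string is never directly encoded:

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

alice_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
host_db = {"alice": alice_key.public_key()}       # host's public key database

challenge = os.urandom(32)                        # (1) host sends a random string
signature = alice_key.sign(challenge,             # (2) alice signs a hash of it
                           padding.PKCS1v15(), hashes.SHA256())
try:
    host_db["alice"].verify(signature, challenge, # (3)/(4) host verifies against
                            padding.PKCS1v15(),   #     alice's on-file public key
                            hashes.SHA256())
    print("access granted")
except InvalidSignature:
    print("access denied")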

discussion of the digital signature standard:
http://csrc.nist.gov/cryptval/dss.htm

lots of posts on the 3-factor authentication model
https://www.garlic.com/~lynn/subintegrity.html#3factor

and other posts on certificate-less public key operations
https://www.garlic.com/~lynn/subpubkey.html#certless

there can be an issue of the dual-use attack ... there has sometimes been some possible semantic confusion between the terms digital signature and human signature because they both contain the word signature.

human signature usually includes the connotation of read, understands, agrees, approves, and/or authorizes.

as in your description, the party, being authenticated, assumes that they are getting a random string and rarely, if ever, examines what is being digitally signed.

in various scenarios, there have been efforts to promote digital signatures to the status of human signatures. a dual-use attack on a private key used both for authentication infrastructures and for such human signature operations ... is for the attacker to substitute a valid document in lieu of the random bit string.

this was exacerbated in the pki/certificate standards world by the introduction of the non-repudiation bit as part of the certification standard. if a relying party could find any certificate, anyplace in the world (for the signer's public key) containing the non-repudiation bit ... then they could claim that the existence of that digital certificate (for the originator's public key containing the non-repudiation bit) was proof that the originator had read, understood, agrees, approves, and/or authorizes what had been digitally signed. in some of the PKI-related payment protocols from the 90s, this implied that if a relying-party could produce a digital certificate containing the signer's public key & the non-repudiation bit ... then in any dispute, it would shift the burden of proof from the relying party to the digitally signing party.

some recent posts on the subject
https://www.garlic.com/~lynn/2005l.html#18 The Worth of Verisign's Brand
https://www.garlic.com/~lynn/2005l.html#29 Importing CA certificate to smartcard
https://www.garlic.com/~lynn/2005l.html#35 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005l.html#36 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005m.html#1 Creating certs for others (without their private keys)
https://www.garlic.com/~lynn/2005m.html#5 Globus/GSI versus Kerberos
https://www.garlic.com/~lynn/2005m.html#6 Creating certs for others (without their private keys)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM's mini computers--lack thereof

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's mini computers--lack thereof
Newsgroups: alt.folklore.computers
Date: 08 Jul 2005 16:54:02 -0600
Peter Flass writes:
IBM didn't (I believe) try to compete with the VAX, although it had several machines that could have. The RS-6000 machines seemed to be aimed at the engineering/scientific market, and the AS/400's at commercial. Now of course, they're the same CPU. Who now competes with RS-6000? Sun, HP, anyone else?

4341 w/vm was in the same market segment and time-frame as vax ... and i believe 4341/vm had a bigger install base ... although as posted before ... there were several corporate issues that could have been interpreted as resulting in lost sales to vax.

part of this was that endicott's 4341 was also a very strong competitor to pok's 3031 ... and as such there was some internal corporate political maneuvering.

rs6000 was much later.

risc/801/romp was going to be a display writer follow-on in the early 80s by the office products division. it was going to be CPr based with lots of implementation in pl.8. when that was cancelled, it was decided to quickly retarget the platform to the unix workstation market. somewhat to conserve skills .... a pl.8-based project was put together called the virtual resource manager .... that sort of provided a high-level abstract virtual machine interface (and was implemented in pl.8). Then the vendor that had done the at&t port to the ibm/pc for pc/ix was hired to do a similar port to the vrm interface. this became the pc/rt and aix.

the follow-on to pcrt/romp was rs6000/rios/power; the vrm was mostly eliminated and aixv3 was built to the rios chip interface. there is a paperweight on my desk that has six chips with the legend: POWER architecture, 150 million OPS, 60 million FLOPS, 7 million transistors. misc. 801/romp/rios postings
https://www.garlic.com/~lynn/subtopic.html#801

the executive that we reported to while we were doing ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

sample, specific post
https://www.garlic.com/~lynn/95.html#13

left to head-up somerset ... the joint ibm, motorola, apple, et al effort for power/pc. differences between rios/power and power/pc .... rios/power was designed for flat out single processor thruput ... tending to multi-chip implementation and no provisions for cache coherency and/or multiprocessor support. the power/pc was going to a single-chip implementation with support for multiprocessor cache coherency. it was the power/pc line of chips that showed up in apple, as/400, and some flavors of rs/6000 ... while other flavors of rs/6000 were pure rios/power implementations (including the original rs/6000).

i have some old memories of arguments going on between austin and rochester over doing a power/pc 65th bit implementation. the unix/apple world just needed 64bit addressing. rochester/as400 was looking for a 65th bit tag line ... to help support their memory architecture.

past posting with some vax (us & worldwide) ship numbers
https://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction

part of the issue was that the price/performance during the vax & 4341 time-frame seemed to have broken some threshold. You started to see customer orders for 4341s that were in the hundreds ... quite a few were single orders for many hundreds of 4341s. this really didn't carry over to the 4381 (the 4341 follow-on), since by that time you started to see that market being taken over by large PCs and workstations.

specific post:
https://www.garlic.com/~lynn/2001m.html#15 departmental servers

other past posts regarding the departmental computing/server market
https://www.garlic.com/~lynn/94.html#6 link indexes first
https://www.garlic.com/~lynn/96.html#16 middle layer
https://www.garlic.com/~lynn/2001m.html#43 FA: Early IBM Software and Reference Manuals
https://www.garlic.com/~lynn/2001m.html#44 Call for folklore - was Re: So it's cyclical.
https://www.garlic.com/~lynn/2001m.html#56 Contiguous file system
https://www.garlic.com/~lynn/2001n.html#15 Replace SNA communication to host with something else
https://www.garlic.com/~lynn/2001n.html#23 Alpha vs. Itanic: facts vs. FUD
https://www.garlic.com/~lynn/2001n.html#34 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2001n.html#39 195 was: Computer Typesetting Was: Movies with source code
https://www.garlic.com/~lynn/2002b.html#0 Microcode?
https://www.garlic.com/~lynn/2002b.html#4 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002b.html#37 Poor Man's clustering idea
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002c.html#0 Did Intel Bite Off More Than It Can Chew?
https://www.garlic.com/~lynn/2002c.html#27 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002d.html#4 IBM Mainframe at home
https://www.garlic.com/~lynn/2002d.html#7 IBM Mainframe at home
https://www.garlic.com/~lynn/2002d.html#14 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002e.html#2 IBM's "old" boss speaks (was "new")
https://www.garlic.com/~lynn/2002e.html#47 Multics_Security
https://www.garlic.com/~lynn/2002e.html#61 Computers in Science Fiction
https://www.garlic.com/~lynn/2002e.html#74 Computers in Science Fiction
https://www.garlic.com/~lynn/2002e.html#75 Computers in Science Fiction
https://www.garlic.com/~lynn/2002f.html#7 Blade architectures
https://www.garlic.com/~lynn/2002f.html#48 How Long have you worked with MF's ? (poll)
https://www.garlic.com/~lynn/2002f.html#60 Mainframes and "mini-computers"
https://www.garlic.com/~lynn/2002g.html#19 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#52 Bettman Archive in Trouble
https://www.garlic.com/~lynn/2002.html#2 The demise of compaq
https://www.garlic.com/~lynn/2002.html#7 The demise of compaq
https://www.garlic.com/~lynn/2002i.html#29 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#30 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#57 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002j.html#4 HONE, ****, misc
https://www.garlic.com/~lynn/2002j.html#7 HONE, ****, misc
https://www.garlic.com/~lynn/2002j.html#34 ...killer PC's
https://www.garlic.com/~lynn/2002j.html#66 vm marketing (cross post)
https://www.garlic.com/~lynn/2002k.html#18 Unbelievable
https://www.garlic.com/~lynn/2002l.html#44 Thirty Years Later: Lessons from the Multics Security Evaluation
https://www.garlic.com/~lynn/2002m.html#9 DOS history question
https://www.garlic.com/~lynn/2002o.html#14 Home mainframes
https://www.garlic.com/~lynn/2002o.html#78 Newsgroup cliques?
https://www.garlic.com/~lynn/2002p.html#6 unix permissions
https://www.garlic.com/~lynn/2002p.html#59 AMP vs SMP
https://www.garlic.com/~lynn/2003c.html#14 difference between itanium and alpha
https://www.garlic.com/~lynn/2003c.html#17 difference between itanium and alpha
https://www.garlic.com/~lynn/2003c.html#71 Tubes in IBM 1620?
https://www.garlic.com/~lynn/2003d.html#37 Why only 24 bits on S/360?
https://www.garlic.com/~lynn/2003d.html#61 Another light on the map going out
https://www.garlic.com/~lynn/2003d.html#64 IBM was: VAX again: unix
https://www.garlic.com/~lynn/2003e.html#56 Reviving Multics
https://www.garlic.com/~lynn/2003f.html#46 Any DEC 340 Display System Doco ?
https://www.garlic.com/~lynn/2003f.html#48 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#50 Alpha performance, why?
https://www.garlic.com/~lynn/2003.html#10 Mainframe System Programmer/Administrator market demand?
https://www.garlic.com/~lynn/2003.html#14 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003i.html#5 Name for this early transistor package?
https://www.garlic.com/~lynn/2003j.html#60 Big Ideas, where are they now?
https://www.garlic.com/~lynn/2003l.html#19 Secure OS Thoughts
https://www.garlic.com/~lynn/2003n.html#46 What makes a mainframe a mainframe?
https://www.garlic.com/~lynn/2003o.html#13 When nerds were nerds
https://www.garlic.com/~lynn/2003o.html#24 Tools -vs- Utility
https://www.garlic.com/~lynn/2003p.html#38 Mainframe Emulation Solutions
https://www.garlic.com/~lynn/2004g.html#23 |d|i|g|i|t|a|l| questions
https://www.garlic.com/~lynn/2004.html#1 Saturation Design Point
https://www.garlic.com/~lynn/2004.html#7 Dyadic
https://www.garlic.com/~lynn/2004.html#46 DE-skilling was Re: ServerPak Install via QuickLoad Product
https://www.garlic.com/~lynn/2004j.html#57 Monster(ous) sig (was Re: Vintage computers are better
https://www.garlic.com/~lynn/2004k.html#23 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of
https://www.garlic.com/~lynn/2004l.html#10 Complex Instructions
https://www.garlic.com/~lynn/2004m.html#59 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004q.html#71 will there every be another commerically signficant new ISA?
https://www.garlic.com/~lynn/2005c.html#18 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#49 Secure design
https://www.garlic.com/~lynn/2005f.html#30 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005f.html#35 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005f.html#58 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005.html#43 increasing addressable memory via paged memory?
https://www.garlic.com/~lynn/2005m.html#9 IBM's mini computers--lack thereof

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM's mini computers--lack thereof

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's mini computers--lack thereof
Newsgroups: alt.folklore.computers
Date: 09 Jul 2005 09:46:13 -0600
Anne & Lynn Wheeler writes:
left to head-up somerset ... the joint ibm, motorola, apple, et al effort for power/pc. differences between rios/power and power/pc .... rios/power was designed for flat out single processor thruput ... tending to multi-chip implementation and no provisions for cache coherency and/or multiprocessor support. the power/pc was going to single-chip implementation with support for multiprocessor cache coherency. it was the power/pc line of chips that show up in apple, as/400, and some flavors of rs/6000 ... while other flavors of rs/6000 were pure rios/power implementation (including the original rs/6000).

oh, almost forgot, and in this time frame, wang and others were convinced to convert to the processor also. bull also rebranded the 6000.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM's mini computers--lack thereof

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's mini computers--lack thereof
Newsgroups: alt.folklore.computers
Date: 10 Jul 2005 14:36:19 -0600
"David Wade" writes:
No one has mentioned the Series/1, wasn't this IBM's "mini" after the 1130 was dropped. Trouble is the Series/1 was more a cut down mainframe than a fully fledged mini...

various recent posts mentioning series/1 &/or peachtree
https://www.garlic.com/~lynn/2005d.html#1 Self restarting property of RTOS-How it works?
https://www.garlic.com/~lynn/2005f.html#34 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005f.html#56 1401-S, 1470 "last gasp" computers?
https://www.garlic.com/~lynn/2005h.html#5 Single System Image questions
https://www.garlic.com/~lynn/2005.html#17 Amusing acronym
https://www.garlic.com/~lynn/2005j.html#59 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005m.html#8 IBM's mini computers--lack thereof

there were some people that attempted to get peachtree for the core that became the 3705 mainframe telecommunication controller.

i had some interest in this area ... having worked on a clone mainframe telecommunication controller as an undergraduate (and some write-up blaming the project for starting clone controller business)
https://www.garlic.com/~lynn/submain.html#360pcm

and then later tried to expand and productize a s/1-based implementation
https://www.garlic.com/~lynn/99.html#63 System/1 ?
https://www.garlic.com/~lynn/99.html#64 Old naked woman ASCII art
https://www.garlic.com/~lynn/99.html#66 System/1 ?
https://www.garlic.com/~lynn/99.html#67 System/1 ?
https://www.garlic.com/~lynn/99.html#69 System/1 ?
https://www.garlic.com/~lynn/99.html#70 Series/1 as NCP (was: Re: System/1 ?)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Course 2821; how this will help for CISSP exam ?

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: <lynn@garlic.com>
Newsgroups: microsoft.public.certification.networking
Subject: Re: Course 2821; how this will help for CISSP exam ?
Date: Tue, 12 Jul 2005 19:28:39 -0700
davidxq wrote:
i think this course can teach you the basic theory of PKI, but it cannot overlap the CISSP related topic.

the technology is asymmetric key cryptography ... basically what one key of a keypair encodes, the other key decodes (in contrast to symmetric key cryptography where the same key is used for encoding and decoding).

there is a business process called public key .... where one key of a keypair is identified as public and freely distributed; the other of the keypair is identified as private, kept confidential and never divulged.

there is a business process called digital signature .... where the originator calculates the hash of a message/document and then encodes the hash with the private key ... and transmits the message/document with the appended digital signature. the recipient recalculates the hash of the message/document, decodes the digital signature with the public key and compares the two hashes. if they are equal, then it is assumed:

1) the message/document hasn't changed since the digital signature was applied
2) something you have authentication, aka the originator has access to and use of the corresponding private key.

This can have tremendous advantages over shared-secrets like pins/passwords and/or other something you know static data authentication.
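as a rough illustration of the flow just described ... a minimal sketch, assuming RSA keys and the open-source python "cryptography" package (my choice of example library here, not anything called out above); the library's sign/verify calls collapse the hash-then-encode steps into single operations:

# minimal sketch of the generic digital signature flow: hash the message,
# encode the hash with the private key, then have the recipient check it
# with the matching public key
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# keypair: private key kept confidential, public key freely distributed
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"message/document to be signed"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# originator: sign() hashes the message and encodes the hash with the private key
signature = private_key.sign(message, pss, hashes.SHA256())

# recipient: verify() recomputes the hash and checks it against the signature;
# it raises InvalidSignature if the message changed or a different key signed it
public_key.verify(signature, message, pss, hashes.SHA256())

if the verify succeeds, all it demonstrates is the two points above (message unchanged, signer had access to the private key) ... by itself it says nothing about who the signer is.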

In the 3-factor authentication model
https://www.garlic.com/~lynn/subintegrity.html#3factor
something you have
something you know
something you are


the typical shared-secret
https://www.garlic.com/~lynn/subintegrity.html#secret

has the shortcoming that it can be used both to originate and to authenticate an operation ... i.e. entities with access to the authentication information can also use it to impersonate. partially as a result, the standard security recommendation is that a unique shared-secret is required for every security domain (so individuals in one security domain can't take the information and impersonate you in a different security domain). There is also the threat/vulnerability of eavesdropping on the entry of the shared-secret information for impersonation and fraud.

It is possible to substitute the registration of public keys in lieu of shared-secrets. Public keys have the advantage that they can only be used to authenticate, they can't be used to impersonate. Also, eavesdropping on digital signatures doesn't provide much benefit since it is the private key (that is never divulged) that is used to originate the authentication information.
https://www.garlic.com/~lynn/subpubkey.html#certless
https://www.garlic.com/~lynn/subpubkey.html#radius
https://www.garlic.com/~lynn/subpubkey.html#kerberos

Basically, you build up a repository of trusted public keys of entities that you have dealings with for authentication purposes.

There is something called PKI, certification authorities, and digital certificates designed to meet the offline email paradigm of the early 80s (somewhat analogous to the "letters of credit" from the sailing ship days). The recipient dials their local (electronic) post office, exchanges email and then hangs up. They now may be faced with first-time communication with a total stranger and they have no local and/or online capability of determining any information regarding the stranger.

In this first time communication with a total stranger, the trusted public key repository has been extended. There are certification authorities that certify information and create digitally signed digital certificates containing an entity's public key and some information. The recipient now gets an email, the digital signature of the email, and a digital certificate. They have preloaded their trusted public key repository with some number of public keys belonging to certification authorities. Rather than directly validating a sender's digital signature, they validate the certification authority's digital signature (using a public key from their local trusted public key repository) on the digital certificate. If that digital signature validates, then they use the public key from the digital certificate to validate the actual digital signature on the message.

In the early 90s, there were these things, x.509 identity certificates, that were starting to be overloaded with personal information (the idea being that a recipient would find at least one piece of personal information useful when first time communication with a total stranger is involved, and the certificate would therefore serve a useful purpose). The business model was sort of to do away with all established business relationships and substitute spontaneous interaction with total strangers. For instance, rather than depositing large sums of money in a financial institution with which you have an established account ... you pick out a total stranger to give large sums of money to. The exchange of x.509 identity certificates would be sufficient to provide safety and security for your money. This also had the characteristic that all transactions (even the simplest of authentication operations) were being turned into heavy duty identification operations.

In the mid-90s, some institutions were coming to the realization that x.509 identity certificates, overloaded with excessive personal information, represented significant liability and privacy issues. As a result, you saw some financial institutions retrenching to relying-party-only certificates
https://www.garlic.com/~lynn/subpubkey.html#rpo

which basically contained a public key and some form of database index where the actual information was stored. However, it is trivial to demonstrate that such relying-party-only certificates are redundant and superfluous:

1) first, they violate the design point for certificates ... providing information that can be used in first time communication with a total stranger

2) if the relying party already has all the information about an entity, then they have no need for a stale, static digital certificate that contains even less information.

This was exacerbated in the mid-90s by trying to apply stale, static, redundant and superfluous relying-party-only digital certificates to payment protocols. The typical iso8583 payment message is on the order of 60-80 bytes. The PKI overhead of even relying-party-only stale, static, redundant and superfluous digital certificates was on the order of 4k-12k bytes. The stale, static, redundant and superfluous digital certificate attached to every payment message would have represented a payload bloat of one hundred times.

CPU time and system load

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CPU time and system load
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: 13 Jul 2005 10:06:09 -0600
mike.meiring@ibm-main.lst (Mike Meiring) writes:
Should be about the same , but in some cases PR/SM (Lpar) overhead can cause increase in TCB times. On a lightly loaded system the capture ratio would typically drop, meaning that operating system is incurring relatively more 'overhead' ('Low utilization effects' (LUE) coming into play).

in the pr/sm, lpar, etc. case ... it tends to be the ratio of instructions that involve overhead ... to all instructions. in a low utilization environment, a large part of the execution tends to be in the kernel, which has a higher ratio of instructions w/overhead. A BR15 loop could be considered the notorious example of an instruction with no hypervisor overhead.

the univ. that i was at ... got cp67 installed the last week in jan. 68 ... and i got to attend the march '68 share meeting in houston for the cp67 announcement. then at the fall '68 share meeting in Atlantic City ... i got to present a talk on mft14 enhancements as well as cp67 enhancements.

the workload at the univ. was lots of short jobs (before watfor) and was primarily job scheduler. i had done a lot of i/o tuning on mft14 to make thruput of this job mix nearly three times faster (12.7secs per 3-step job vis-a-vis over 30 seconds elapsed time for out-of-the-box mft14); essentially the same number of instructions executing in close to 1/3rd the time ... because of drastically optimized disk and i/o performance.

I had also rewritten a lot of the cp67 kernel between jan. and the Atlantic City share meeting to drastically reduce hypervisor overhead for high overhead instructions/operations.

In the Share talk, i have the ratio of elapsed time w/hypervisor to elapsed time w/o hypervisor for the mft14 job stream. Using these statistics, a normal, out-of-the-box mft14 looked much better running in a hypervisor environment. improving basic elapsed time by a factor of nearly 3 times by optimizing i/o made the hypervisor ratio much worse. Basically there was no increase in hypervisor overhead time for i/o wait. Drastically cutting i/o wait made the hypervisor overhead time ratio much worse (the amount of overhead stayed the same but occurred in much shorter elapsed time).

The optimized MFT14 jobstream ran in 322sec elapsed time on the bare machine and in 856sec elapsed time under unmodified cp67 (534secs of cp67 hypervisor cpu overhead). With a little bit of work part time (I was still an undergraduate and also responsible for the MFT14 production system), I got this reduced to 435secs elapsed time (113secs of cp67 hypervisor cpu overhead vis-a-vis the original 534 seconds of cp67 cpu overhead).

part of talk from Atlantic City '68 share presentation
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

a couple of overheads from above:


OS Performance Studies With CP/67

OS           MFT 14, OS nucleus with 100 entry trace table, 105 record
             in-core job queue, default IBM in-core modules, nucleus total
             size 82k, job scheduler 100k.

HASP         118k HASP with 1/3 2314 track buffering

Job Stream   25 FORTG compiles

Bare machine Time to run: 322 sec. (12.9 sec/job)
times        Time to run just JCL for above: 292 sec. (11.7 sec/job)

Orig. CP/67  Time to run: 856 sec. (34.2 sec/job)
times        Time to run just JCL for above: 787 sec. (31.5 sec/job)

Ratio        CP/67 to bare machine

2.65         Run FORTG compiles
2.7          to run just JCL
2.2          Total time less JCL time

.... footnote for above overhead
1 user, OS on with all of core available less CP/67 program.

Note: No jobs run with the original CP/67 had ratio times higher than the job scheduler. For example, the same 25 jobs were run under WATFOR, where they were compiled and executed. Bare machine time was 20 secs., CP/67 time was 44 sec. or a ratio of 2.2. Subtracting 11.7 sec. for bare machine time and 31.5 for CP/67 time, a ratio for WATFOR less job scheduler time was 1.5.

I hand built the OS MFT system with careful ordering of cards in the stage-two sysgen to optimize placement of data sets, and members in SYS1.LINKLIB and SYS1.SVCLIB.


.... summary of some of the CP/67 optimization work during the spring and summer of '68

MODIFIED CP/67

OS run with one other user. The other user was not active, was just
available to control amount of core used by OS. The following table
gives core available to OS, execution time and execution time ratio
for the 25 FORTG compiles.

CORE (pages)    OS with HASP            OS w/o HASP

104             1.35 (435 sec)
 94             1.37 (445 sec)
 74             1.38 (450 sec)          1.49 (480 sec)
 64             1.89 (610 sec)          1.49 (480 sec)
 54             2.32 (750 sec)          1.81 (585 sec)
 44             4.53 (1450 sec)         1.96 (630 sec)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Another - Another One Bites the Dust

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Another - Another One Bites the Dust
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: 13 Jul 2005 10:18:40 -0600
wball@ibm-main.lst (William Ball) writes:
You're making a leap there that -I'm- certainly not ready to make. Speed isn't everything, never has been.

If you want to play games or have an application that you don't really care about when it gets done, and a lot of times inaccurate results, and only has 1 or 2 users, you -might- be able to live with putting it on a Unix platform.

However, IMHO, the RAS -really- stinks.


long ago and far away .... the dasd engineering lab (bldg 14) and dasd product test lab (bldg 15) had these "testcells" where they tested stuff under development. they had some number of processes that were scheduled for stand-alone time with a testcell (they had several 2914 channel switches for connecting a specific testcell to a specific processor).
https://www.garlic.com/~lynn/subtopic.html#disk

they had tried running a processor with MVS and a single testcell ... but at the time, MVS had a 15min mean-time-between-failure trying to run a single testcell.

I undertook to rewrite IOS (making it bullet proof) so that 6-12 testcells could be operated concurrently in an operating system environment. I then wrote an internal corporate only report about the effort ... unfortunately I happened to mention the base MVS case of 15min MTBF ... and the POK RAS guys attempted to really bust my chops over the mention.

That was not too long after my wife served her stint in POK in charge of loosely-coupled architecture ... while there she had come up with Peer-Coupled Shared Data architecture
https://www.garlic.com/~lynn/submain.html#shareddata

but was meeting with little success in seeing it adopted (until much later in sysplex time-frame ... except for some work by the IMS hot-standby people).

somewhat based on experience ... we started the ha/cmp project in the late '80s
https://www.garlic.com/~lynn/subtopic.html#hacmp
one specific mention
https://www.garlic.com/~lynn/95.html#13

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

S/MIME Certificates from External CA

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: <lynn@garlic.com>
Newsgroups: microsoft.public.biztalk.general,microsoft.public.windows.server.security
Subject: Re: S/MIME Certificates from External CA
Date: Wed, 13 Jul 2005 18:18:57 -0700
Jeff Lynch wrote:
Forgive my ignorance of digital certificates. So far I've always used HTTPS/SSL server-side certificates but not (S/MIME - PKI) in BTS2004 running on Win2K3. I know that BizTalk Server 2004 can use a digital certificate for signing and encryption (S/MIME) of outbound documents from a Windows Server acting as a CA but can it use a certificate from an external CA such as VeriSign? If so, how is the certificate "requested" from the external CA and what type of certificate is required?

the technology is asymmetric cryptography ... what one key (of a keypair) encodes, the other key (of the keypair) decodes. This is in contrast to symmetric key cryptography where the same key is used to both encode and decode.

there is a business process called public key ... where one key (of a keypair) is labeled public and is freely distributed. the other key (of the keypair) is labeled private and is kept confidential and never divulged.

there is a business process called digital signatures for doing something you have authentication; basically the hash of a message/document is computed and encoded with the private key. the message/document and the digital signature are transmitted. the recipient recalculates the hash on the message/document, decodes the digital signature with the public key and compares the two hashes. if the two hashes are the same, then the recipient assumes

1) the message/document has not changed since being signed
2) something you have authentication; the originator has access to and use of the corresponding private key.

public keys can be registered in lieu of pins, passwords, shared-secrets, etc as part of authentication protocols ... aka
https://www.garlic.com/~lynn/subpubkey.html#certless
https://www.garlic.com/~lynn/subpubkey.html#radius
https://www.garlic.com/~lynn/subpubkey.html#kerberos

PKIs, certification authorities, and digital certificates were created to address the offline email type scenario from the early 80s (and somewhat analogous to the "letters of credit" paradigm from the sailing ship days). The recipient dials their local (electronic) post office, exchanges email, and hangs up. at this point they may be faced with first time communication from a total stranger and they have no local &/or other means of establishing any information about the email sender.

The infrastructure adds the public keys of trusted certification authorities to the recipient's repository of trusted public keys. Individuals supply their public key and other information to a certification authority and get back a digital certificate that includes the public key and the supplied information and is digitally signed by the certification authority. Now, when sending first time email to a total stranger, the originator digitally signs the email and transmits 1) the email, 2) the digital signature, and 3) the digital certificate. The recipient validates the certification authority's digital signature (on the digital certificate) using the corresponding public key from their repository of trusted public keys. If the digital certificate appears to be valid, then they validate the digital signature on the mail using the public key from the digital certificate. They now can interpret the email using whatever useful certified information is in the digital certificate.
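purely as an illustration of the two-step check just described ... a minimal sketch, assuming an RSA/PKCS#1 v1.5-signed certificate and the python "cryptography" package (real S/MIME processing also checks validity dates, revocation, key usage, etc., none of which is shown here):

# step 1: validate the certification authority's signature on the certificate,
#         using a CA public key already in the local trusted repository
# step 2: validate the message signature with the public key carried in the
#         now-validated certificate
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def check_signed_mail(ca_public_key, cert_pem, message, signature):
    cert = x509.load_pem_x509_certificate(cert_pem)

    # the CA's digital signature covers the "to be signed" portion of the cert
    ca_public_key.verify(cert.signature,
                         cert.tbs_certificate_bytes,
                         padding.PKCS1v15(),
                         cert.signature_hash_algorithm)

    # the sender's digital signature on the mail itself
    cert.public_key().verify(signature,
                             message,
                             padding.PKCS1v15(),
                             hashes.SHA256())

both verify() calls raise an exception on failure ... which is the "doesn't validate" branch of the description above.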

Basically, one might consider an "external certification authority" ... as an authority whose public key you haven't yet loaded into your repository of trusted public keys.

SSL/HTTPS domain name server certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcert

were designed for

1) an encrypted channel as a countermeasure to eavesdropping at any point in the communication
2) is the webserver you are talking to really the webserver you think you are talking to

Part of this was because of perceived integrity weaknesses in the domain name infrastructure. A webserver address is typed into the browser. the webserver sends back a digital certificate .... that has been signed by a certification authority whose public key has been preloaded into the browser's repository of trusted public keys. The browser validates the digital signature on the digital certificate. It then compares the domain name in the digital certificate with the typed in domain name. Supposedly, if they are the same ... you may be talking to the webserver you think you are talking to.

The browser now can generate a random session key and encode it with the server's public key (from the digital certificate) and send it back to the server. If this is the correct server, then the server will have the corresponding private key and can decode the encrypted random session key from the browser. From then on, the server and the browser can exchange encrypted information using the random session key (if it isn't the correct server, the server won't have access to the correct private key and won't be able to decode the random session key sent by the browser).
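a minimal sketch of that key exchange idea, assuming RSA/OAEP and the python "cryptography" package (an illustration of the concept only, not the actual SSL/TLS handshake messages):

import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
server_public = server_key.public_key()      # the key carried in the certificate

# browser side: random session key, encoded with the server's public key
session_key = os.urandom(32)
wrapped = server_public.encrypt(session_key, oaep)

# server side: only the matching private key recovers the session key;
# a spoofed server without that private key just gets a decryption failure
assert server_key.decrypt(wrapped, oaep) == session_key

from then on, both sides would use the recovered session key for ordinary symmetric encryption of the traffic.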

when this was originally being worked out
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

the objective was that the URL supplied by the end-user started out using HTTPS. In the e-commerce world ... some number of servers found that using HTTPS cut thruput by something like 80-90 percent compared to plain HTTP. So you started seeing ecommerce sites using simple HTTP for all of the shopping experience and saving HTTPS for when the user went to hit the pay/checkout button. The problem is that if the user was at a spoofed site during the HTTP portion ... then any fraudulent site would likely supply a URL with the pay/checkout button that corresponded to a URL in some valid digital certificate that they had (defeating the objective of making sure the server you thot you were talking to was actually the server you were talking to).

there is something of a catch-22 in this. A lot of certification authorities are purely in the business of checking on the validity of the information they are certifying ... they aren't actually the authoritative agency for the actual information. In the SSL domain name certificate scenario, the certification authorities ask for some amount of identification information from the certificate applicant. They then contact the authoritative agency for domain name ownership and cross-check the applicant's supplied identification information with the identification information on file with the domain name infrastructure as to the domain name ownership. Note however, this domain name infrastructure, which is the trust-root for things related to domain names ... is the same domain name infrastructure which is believed to have integrity issues that give rise to the requirement for SSL domain name certificates.

So a proposal, somewhat supported by the SSL domain name certification authority industry ... is that domain name owners register their public key with the domain name infrastructure. Then all future communication with the domain name infrastructure is digitally signed ... which the domain name infrastructure can validate with the on-file public key (note: a certificate-less operation). This communication validation is thought to help eliminate some integrity issues.

For the certification authority industry, they now can also request that SSL domain name certificate applications be digitally signed. They now can change from an expensive, error-prone, and complex identification process to a much simpler and cheaper authentication process (by retrieving the on-file public key from the domain name infrastructure and validating the digital signature).

The catch-22s are 1) improving the integrity of the trust-root for domain name ownership also lowers the requirement for SSL domain name certificates (which exist in part because of concerns about domain name infrastructure integrity) and 2) if the certification authority industry can retrieve on-file public keys from the domain name infrastructure to validate who they are communicating with ... it is possible that the rest of the world could also ... eliminating any need for having SSL domain name server certificates.

One could imagine a simplified and optimized SSL protocol, where the client retrieves the ip-address and the associated public key from the domain name infrastructure in a single, existing exchange. They could then piggyback the randomly generated session key encoded with the servers public key on the initial contact with the server.

Another issue was some trend in the early 90s to overload the x.509 identity certificates with large amounts of personal information ... in hopes that future "strangers" (relying parties) would find something useful/meaningful when receiving first time communication (since CAs couldn't necessarily predict the context that future relying parties might be interested in).

In the mid-90s, there started to be some realization that x.509 identity certificates, grossly overloaded with personal information represented significant privacy and liability issues. As a result, you found some institutions retrenching to relying-party-only certificates ... basically a public key and some sort of database lookup index (where all the real information about an individual was stored). However, it was trivial to show that such relying-party-only certificates were redundant and superfluous ... aka 1) they violated the premise of supplying information for first-time communication between strangers and 2) if the relying party (recipient) already had a superset of the information found in a digital certificate (including the originator's public key) ... then it was redundant and superfluous for the originator to be constantly sending a copy of the certificate back to the relying party on every communication.

The other issue was that there were attempts to try and have x.509 identity certificates attached to all digitally signed documents and messages. This basically resulted in causing a large amount of confusion about the differences between authentication and identification ... and would have effectively turned all electronic operations ... even the most trivial authentication operations .... into heavyweight identification operations.

Implementation of boolean types

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Implementation of boolean types.
Newsgroups: comp.databases.theory
Date: 14 Jul 2005 07:38:41 -0600
Jonathan Leffler writes:
SQL-1999 and SQL-2003 both have a BOOLEAN type, with recognized values TRUE, FALSE and UNKNOWN. I'm not clear whether NULL is equivalent to UNKNOWN or not; I've not scrutinized the weasel-words carefully enough.

old 3-value logic posting
https://www.garlic.com/~lynn/2003g.html#40 How to cope with missing values - NULLS?
https://www.garlic.com/~lynn/2004l.html#75 NULL

reference to Dec. '92, article by CJ Date

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

simd for 390(or z990)?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: simd for 390(or z990)?
Newsgroups: comp.arch
Date: 15 Jul 2005 12:56:50 -0600
"Michel Hack" writes:
Well, that's the point: *was*. The VF died with the bipolar boxes (3090 was the last one, in 1992). It is now officially dead, in that the opcode space has been re-used for new 64-bit instructions in z/Series machines.

i heard somebody once trying to make the case that 3090 VF was purely a marketing offering ... that 3090 scalar had been so optimized that it could drive memory at nearly saturation ... and therefore there was little additional thruput gained from using VF.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Old Computers and Moisture don't mix - fairly OT

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Old Computers and Moisture don't mix - fairly  OT
Newsgroups: alt.folklore.computers
Date: 16 Jul 2005 09:20:34 -0600
forbin@dev.nul (Colonel Forbin) writes:
What I meant by the raised floor was a raised computer room floor, which is installed over the subfloor. If the problem isn't too bad, the airflow might be sufficient to keep things liveable, but it's a kludge in a situation like this, and corrosion under the raised floor, including its superstructure, will remain a problem. Hence, this would be a temporary solution if it was intended to relocate to a more suitable building.

santa teresa labs ... coyote valley ... south san jose ... was built in a large meadow at the base of some hills. the datacenter is sunk in the middle of the complex of towers. it turns out that one of the things that made the meadow was run-off from the hills, especially during the rainy season. for the first year or so ... the datacenter had flooding problems.

... topic drift ... santa teresa labs was originally going to be called coyote labs ... using a convention of naming after the nearest post office. the week before coyote labs was to open (I think the Smithsonian air&space museum and coyote labs were opening the same week), i happened to be in DC. That week, there were some demonstrations on the capitol steps (that made the national press) by an organisation of working ladies from san francisco ... which is believed to have led to the decision to quickly change the name of the lab from coyote to santa teresa (the closest cross street).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Old Computers and Moisture don't mix - fairly OT

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Old Computers and Moisture don't mix - fairly  OT
Newsgroups: alt.folklore.computers
Date: 16 Jul 2005 09:47:59 -0600
Anne & Lynn Wheeler writes:
santa teresa labs ... coyote valley ... south san jose ... was built in a large meadow at the base of some hills. the datacenter is sunk in the middle of the complex of towers. it turns out that one of the things that made the meadow was run-off from the hills, especially during the rainy season. for the first year or so ... the datacenter had flooding problems.

two of the biggest datacenters that i've been in ... boeing renton in the late 60s ... when there was a nearly constant flow of 360/65s staged in the halls waiting for installation ... and POK in the early 70s. however, there have been rumors that the datacenter described in chapter nineteen of "Boyd, The fighter pilot who changed the art of war" (by Robert Coram; Little, Brown) might have been larger (it mentions a $2.5b windfall for IBM).

misc. past boyd refs:
https://www.garlic.com/~lynn/subboyd.html#boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Old Computers and Moisture don't mix - fairly OT

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Old Computers and Moisture don't mix - fairly  OT
Newsgroups: alt.folklore.computers
Date: 16 Jul 2005 10:03:09 -0600
Anne & Lynn Wheeler writes:
two of the biggest datacenters that i've been in ... boeing renton in the late 60s ... when there was a nearly constant flow of 360/65s staged in the halls waiting for installation ... and POK in the early 70s. however, there have been rumors that the datacenter described in chapter nineteen of "Boyd, The fighter pilot who changed the art of war" (by Robert Coram; Little, Brown) might have been larger (it mentions a $2.5b windfall for IBM).

misc. past boyd refs:
https://www.garlic.com/~lynn/subboyd.html#boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2


maybe the $2.5b was used to help offset the amount spent on FS ... which was canceled w/o ever being announced
https://www.garlic.com/~lynn/submain.html#futuresys

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Old Computers and Moisture don't mix - fairly OT

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Old Computers and Moisture don't mix - fairly  OT
Newsgroups: alt.folklore.computers
Date: 16 Jul 2005 11:20:09 -0600
Anne & Lynn Wheeler writes:
two of the biggest datacenters that i've been in ... boeing renton in the late 60s ... when there was a nearly constant flow of 360/65s staged in the halls waiting for installation ... and POK in the early 70s. however, there have been rumors that the datacenter described in chapter nineteen of "Boyd, The fighter pilot who changed the art of war" (by Robert Coram; Little, Brown) might have been larger (it mentions a $2.5b windfall for IBM).

misc. past boyd refs:
https://www.garlic.com/~lynn/subboyd.html#boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2


I can imagine the place also having water problems ... large underground bunker, extremely high humidity environment and rainy season that could be pretty extreme.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM's mini computers--lack thereof

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's mini computers--lack thereof
Newsgroups: alt.folklore.computers
Date: 17 Jul 2005 08:05:10 -0600
"Rupert Pigott" writes:
I saw a few VAXen running 30+ spindles fairly well. They seemed to get the job done OK. Worked on a couple of Alpha servers that ran 100+ spindles, they were waiting on the drives, the network or the users most of the time. DEC got there in the end - only to be HPAQ'd :S

circa 1980 ... i would periodically visit an installation in the bay area that had a pair of loosely-coupled 370/158s with 300+ 3330 drives ... not a particularly large operation from the processor standpoint (the 158 was originally announced 8/72 and first shipped 4/73).

some linpack numbers
https://www.garlic.com/~lynn/2002i.html#12 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002k.html#4 misc. old benchmarks (4331 & 11/750)
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033

some past threads mentioning 158, 4341, 3031 rain/rain4 comparison
https://www.garlic.com/~lynn/2000d.html#0 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001e.html#9 MIP rating on old S/370s
https://www.garlic.com/~lynn/2001l.html#32 mainframe question
https://www.garlic.com/~lynn/2002b.html#0 Microcode?
https://www.garlic.com/~lynn/2002i.html#7 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#19 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002k.html#4 misc. old benchmarks (4331 & 11/750)

3031 was announced 10/77 and first shipped 3/78.

basically 3031 and 158 used the same processor engine. in the 158 ... the engine was shared between running the 370 microcode and the integrated channel microcode. for the 303x-line of computers, a "channel director" was added ... basically a dedicated 158 processor engine with just the integrated channel microcode.

a single processor 3031 configuration was then really two 158 processor engines sharing the same memory ... one processor engine dedicated to running the 370 microcode and one processor engine dedicated to running the integrated channel microcode.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Code density and performance?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Code density and performance?
Newsgroups: comp.arch
Date: Mon, 18 Jul 2005 07:46:12 -0600
Nick Maclaren wrote:
Exactly where the optimum point lay (or lies) between extreme RISC and (say) the VAX has varied in time, but has never been clear. However, I have never known a time when the VAX level of complexity was close to optimal (rather than too complex). The 68000 or even the 68020, yes, but no further.

FS in the early 70s went over to being super complex (but it was canceled w/o ever being announced ... although possibly billions were spent on it before it was canceled)
https://www.garlic.com/~lynn/submain.html#futuresys

I've periodically commented that 801/RISC was in large part a reaction to the future system failure ... swinging in the exact opposite direction. There were periodic comments in the mid-70s about 801/RISC consistently trading off software (& compiler) complexity for hardware simplicity.
https://www.garlic.com/~lynn/subtopic.html#801

how do i encrypt outgoing email

Refed: **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: microsoft.public.outlook.installation
Subject: Re: how do i encrypt outgoing email
Date: Mon, 18 Jul 2005 10:26:16 -0700
Perry wrote:
I am looking for a way to encrypt out going email messages in Outlook. We are not using exchange server

the standard technology is asymmetrical key cryptography ... what one key encodes, the other key decodes (as opposed to symmetrical key cryptography where the same key both encrypts and decrypts)

there is a business process, public key ... where one of the keypair is designated "public" and freely distributed; the other of the keypair is designated private, kept confidential and is *never* divulged.

the standard method of sending encrypted email is to obtain the recipient's public key .... this can be done in a number of ways; most infrastructures provide ways of either dynamically obtaining the recipient's key ... or having it already stored in your local trusted public key repository.

the simple mechanism is to encode the data with the recipient's public key and then only the recipient's private key is able to decode it.

because of asymmetrical cryptography performance issues ... many implementations will generate a random symmetric key, encrypt the data with the symmetric key and then encode the symmetric key with the recipient's public key ... and transmit both the encrypted data and the encoded key. only the recipient's private key can decode and recover the symmetric key ... and only by recovering the symmetric key can the body of the message be decrypted.
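a minimal sketch of that hybrid arrangement, assuming RSA/OAEP plus the Fernet symmetric recipe from the python "cryptography" package (my choice of illustration ... Outlook/S/MIME and PGP use their own wire formats, which this does not reproduce):

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_key.public_key()    # obtained from the recipient

# sender: a random symmetric key encrypts the bulk of the message,
# and the symmetric key itself is encoded with the recipient's public key
sym_key = Fernet.generate_key()
ciphertext = Fernet(sym_key).encrypt(b"the actual email body")
wrapped_key = recipient_public.encrypt(sym_key, oaep)
# transmit (ciphertext, wrapped_key)

# recipient: only their private key recovers the symmetric key,
# and only the symmetric key recovers the message body
recovered_key = recipient_key.decrypt(wrapped_key, oaep)
plaintext = Fernet(recovered_key).decrypt(ciphertext)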

for somebody to send you encrypted mail ... you will need to have generated a public/private key pair and transmitted your public key to the other party. for you to send another party encrypted mail ... they will need to have generated a public/private key pair ... and you will need to have obtained their public key.

PGP/GPG have individuals exchanging keys directly and storing them in their local trusted public key storage. The PGP/GPG infrastructure also supports real-time, online public key registries.

there is a business process, digital signatures. here the hash of the message is computed and encoded with the private key ... the message and the digital signature are transmitted. the recipient recomputes the hash of the message, decodes the digital signature (resulting in the original hash) and compares the two hash values. if they are the same, then the recipient can assume:

1) the message hasn't been altered since signing
2) something you have authentication ... aka the signer has access to and use of the corresponding private key

There is also a PKI, certificate-based infrastructure that is targeted at the offline email environment from the early 80s. Somebody dials their local (electronic) post office, exchanges email, hangs up and is now possibly faced with first time communication. This is somewhat the letters of credit environment from the old offline sailing ship days, where the recipient had no provisions for authenticating first time communication with complete strangers.

An infrastructure is defined where people load up their trusted public key repositories with public keys belonging to *certification authorities*. When somebody has generated a public/private key pair ... they go to a certification authority and register the public key and other information. The certification authority generates a digital certificate containing the applicant's public key and other information, which is digitally signed by the certification authority's private key (the public can verify the digital signature using the certification authority's public key from their trusted public key repository). This provides a recipient a way of determining some information about a stranger in first time communication ... aka the stranger has digitally signed a message and transmitted the combination of the message, their digital signature and their digital certificate. The recipient 1) verifies the certification authority's digital signature on the digital certificate, 2) takes the public key from the digital certificate and verifies the digital signature on the message, 3) uses the other information in the digital certificate in determining basic information about the total stranger in the first time communication.

You can push a message and your digital signature to a stranger (possibly along with your digital certificate) ... but you can't actually encrypt the message for the stranger ... w/o first obtaining their public key.

IBM's mini computers--lack thereof

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's mini computers--lack thereof
Newsgroups: alt.folklore.computers
Date: Tue, 19 Jul 2005 09:17:22 -0600
Charles Richmond writes:
Yeah, when you have an 8 megabyte buffer inside on the drive electronics, it *can* speed things along a bit. And most mainframes in the late 1970's did *not* even have 8 meg of main memory. Of course, if the 8 meg buffer had to be made of *cores*, then the drive would need to be a little bit bigger. ;-)

in the early 80s, you got 8mbyte buffers in the 3880 controller; the 3880-11/ironwood was a 4k record cache and the 3880-13/sheriff was a full track cache. an ironwood/sheriff controller might front 4-8 3380 drives ... at 630mbytes/drive ... that works out to 8mbyte cache for 2-5gbytes of data.

sheriff had some early marketing material that claimed a 90 percent hit rate reading records. the case was sequentially reading a file that was formatted ten 4k records to a 3380 track; a read to the first record on the track was a miss ... but it brought the full track into the cache ... so the next nine reads were "hits". you could achieve similar efficiency by changing the DD statement to do full-track buffering ... in which case the controller cache dropped to a zero percent hit rate (the full track read would miss ... and then it would all be in the processor memory).
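the arithmetic behind that marketing number is just the ratio of records per track (a toy calculation with assumed numbers, not actual sheriff measurements):

# sequential 4k reads against tracks formatted ten records/track,
# where each miss stages the whole track into the controller cache
records_per_track = 10
total_reads = 1000                          # assumed length of the sequential scan

misses = total_reads // records_per_track   # first record of each track
hits = total_reads - misses
print(f"controller cache hit rate: {hits / total_reads:.0%}")   # -> 90%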

ironwood was oriented towards paging cache ... but the typical processor had 16mbytes to 32mbytes of real storage. a 4k page read brought the page first into the ironwood cache and then into processor memory. since a paging environment is effectively a form of caching and both ironwood and the processor were using LRU to manage replacement ... they tended to follow similar replacement patterns. since real storage tended to be a larger effective cache than the controller's ... pages tended to age out of the controller cache before they aged out of the processor's memory.

in the mid-70s ... i had done a dup/no-dup algorithm for fixed-head paging devices (which tended to have relatively small, limited size). in the "dup" case ... when there was relatively low pressure on the fixed-head paging device ... a page that was brought into processor memory also remained allocated on the paging device (aka "duplicate"; if the page was later replaced in real memory and hadn't been modified, then a page-write operation could be avoided since the copy on the paging device was still good). As contention for the fixed-head paging device went up, the algorithm would change to "no-dup" ... i.e. when a page was brought into real storage ... it was de-allocated from the fixed-head device (aka "no-duplicate" ... this increased the effective space on the high-speed fixed-head paging devices ... but required a page-write on every page replacement, whether the page had been modified or not).
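a rough sketch of the dup/no-dup decision (my own conceptual illustration in python, not the original cp67/vm370 code; the switch-over threshold is an assumption):

class DupNoDupPolicy:
    """keep or drop the paging-device copy when a page is read into real storage"""

    def __init__(self, device_slots, high_water=0.9):
        self.device_slots = device_slots
        self.allocated = 0                 # slots currently holding pages
        self.high_water = high_water       # assumed contention threshold

    @property
    def no_dup(self):
        # under pressure, stop keeping duplicates on the fixed-head device
        return self.allocated >= self.high_water * self.device_slots

    def page_out(self):
        # page written from real storage to the paging device (replacement)
        self.allocated += 1

    def page_in(self):
        # page read from the paging device into real storage
        if self.no_dup:
            self.allocated -= 1   # de-allocate: more effective device space,
            return False          # ...but every later replacement needs a re-write
        return True               # keep the duplicate: replacing an unmodified
                                  # page later can skip the page-write

the ironwood "destructive read" described next is essentially the no-dup branch applied to the controller cache instead of the paging device.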

So adapting this to ironwood ... all page reads were "destructive" (a special bit in the i/o command). "destructive" reads that hit in the controller cache ... would de-allocate from the controller cache after the read ... and wouldn't allocate in the cache on a read from disk. The only way that pages get into the cache is when they are being written from processor storage (presumably as part of a page replacement strategy ... aka they no longer exist in processor storage). A "dup" strategy in a configuration with 32mbytes of processor storage and four ironwoods (32mbytes of controller cache) ... would result in total electronic caching of 32mbytes in processor storage (since most of the pages in ironwood would effectively be duplicated in real storage). A "no-dup" strategy with the same configuration could result in total electronic caching of 64mbytes (32mbytes in processor storage and 32mbytes in ironwood).
past postings attached below

about the time of ironwood/sheriff ... we also did a project at sjr to instrument operating systems for tracing disk activity. this was targeted at being super-optimized so that it could run continuously as part of standard production operation. tracing was installed on a number of internal corporate machines in the bay area ... spanning a range from commercial dataprocessing to engineering and scientific.

a simulator was built for the trace information. the simulator found that (except for a couple of edge cases), the most effective place for electronic cache was at the system level; aka given a fixed amount of electronic cache ... it was most effective as a global system cache rather than partitioned into pieces at the channel, controller, or disk drive level. this corresponds to my findings as an undergraduate in the 60s that global LRU outperformed local LRU.

One of the edge cases involved using electronic memory on a drive for doing some rotational latency compensation (not directly for caching per se); basically data would start transferring to the cache as soon as the head was able ... regardless of the position on the track.

the other thing that we started to identify was macro data usage ... as opposed to micro data usage patterns. A lot of data was used in somewhat bursty patterns ... and during a burst there might be a collection of data from possibly multiple files being used. At the macro level ... you could do things for improvements by load-balancing (the different data aggregates that tended to be used in a common burst) across multiple drives. The analogy for single drive operation is attempting to cluster data that tended to be used together.

some of the disk activity clustering work was similar to some early stuff at the science center in the early 70s ... taking detailed page traces of an application and feeding them into a program optimization application (that was eventually released as a product called "VS/Repack"). VS/Repack would attempt to re-organize a program for minimum real-storage footprint (attempting to cluster instructions and data used together into a minimum number of virtual pages). see past postings on vs/repack attached at the bottom of this posting.

past postings on dup/no-dup, ironwood, sheriff:
https://www.garlic.com/~lynn/93.html#13 managing large amounts of vm
https://www.garlic.com/~lynn/2000d.html#13 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001.html#18 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001l.html#53 mainframe question
https://www.garlic.com/~lynn/2001l.html#54 mainframe question
https://www.garlic.com/~lynn/2001l.html#55 mainframe question
https://www.garlic.com/~lynn/2001l.html#63 MVS History (all parts)
https://www.garlic.com/~lynn/2002b.html#10 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002d.html#55 Storage Virtualization
https://www.garlic.com/~lynn/2002e.html#11 What are some impressive page rates?
https://www.garlic.com/~lynn/2002f.html#20 Blade architectures
https://www.garlic.com/~lynn/2002o.html#3 PLX
https://www.garlic.com/~lynn/2002o.html#52 ''Detrimental'' Disk Allocation
https://www.garlic.com/~lynn/2003b.html#7 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?
https://www.garlic.com/~lynn/2003i.html#72 A few Z990 Gee-Wiz stats
https://www.garlic.com/~lynn/2003o.html#62 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#17 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#18 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#20 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004h.html#19 fast check for binary zeroes in memory
https://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2004l.html#29 FW: Looking for Disk Calc program/Exec
https://www.garlic.com/~lynn/2005c.html#27 [Lit.] Buffer overruns

past postings on global/local LRU replacement:
https://www.garlic.com/~lynn/93.html#0 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/94.html#01 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/94.html#1 Multitasking question
https://www.garlic.com/~lynn/94.html#2 Schedulers
https://www.garlic.com/~lynn/94.html#4 Schedulers
https://www.garlic.com/~lynn/94.html#14 lru, clock, random & dynamic adaptive ... addenda
https://www.garlic.com/~lynn/94.html#49 Rethinking Virtual Memory
https://www.garlic.com/~lynn/96.html#0a Cache
https://www.garlic.com/~lynn/96.html#0b Hypothetical performance question
https://www.garlic.com/~lynn/96.html#10 Caches, (Random and LRU strategies)
https://www.garlic.com/~lynn/98.html#54 qn on virtual page replacement
https://www.garlic.com/~lynn/99.html#18 Old Computers
https://www.garlic.com/~lynn/99.html#104 Fixed Head Drive (Was: Re:Power distribution (Was: Re: A primeval C compiler)
https://www.garlic.com/~lynn/2000d.html#11 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#13 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000f.html#34 Optimal replacement Algorithm
https://www.garlic.com/~lynn/2000f.html#36 Optimal replacement Algorithm
https://www.garlic.com/~lynn/2001l.html#55 mainframe question
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002o.html#30 Computer History Exhibition, Grenoble France
https://www.garlic.com/~lynn/2003f.html#55 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#55 Advantages of multiple cores on single chip
https://www.garlic.com/~lynn/2003h.html#6 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2004b.html#47 new to mainframe asm
https://www.garlic.com/~lynn/2004c.html#59 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#18 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004.html#25 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004i.html#0 Hard disk architecture: are outer cylinders still faster than
https://www.garlic.com/~lynn/2004o.html#7 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/2004q.html#76 Athlon cache question
https://www.garlic.com/~lynn/2004q.html#77 Athlon cache question
https://www.garlic.com/~lynn/2005c.html#53 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#37 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#48 Secure design
https://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005h.html#10 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005.html#2 Athlon cache question
https://www.garlic.com/~lynn/2005j.html#51 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP

past postings on vs/repack:
https://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
https://www.garlic.com/~lynn/99.html#68 The Melissa Virus or War on Microsoft?
https://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#31 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001c.html#33 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#46 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#50 IBM going after Strobe?
https://www.garlic.com/~lynn/2002f.html#50 Blade architectures
https://www.garlic.com/~lynn/2003f.html#15 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#21 "Super-Cheap" Supercomputing
https://www.garlic.com/~lynn/2003f.html#53 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#15 Disk capacity and backup solutions
https://www.garlic.com/~lynn/2003h.html#8 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003j.html#32 Language semantics wrt exploits
https://www.garlic.com/~lynn/2004c.html#21 PSW Sampling
https://www.garlic.com/~lynn/2004.html#14 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004m.html#22 Lock-free algorithms
https://www.garlic.com/~lynn/2004n.html#55 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#7 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/2004q.html#76 Athlon cache question
https://www.garlic.com/~lynn/2005d.html#41 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#48 Secure design
https://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005.html#4 Athlon cache question
https://www.garlic.com/~lynn/2005j.html#62 More on garbage collection
https://www.garlic.com/~lynn/2005k.html#17 More on garbage collection

using ssl news servers

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: using ssl news servers
Newsgroups: gnu.emacs.gnus
Date: Tue, 19 Jul 2005 09:23:20 -0600
Anne & Lynn Wheeler wrote:
mozilla/thunderbird configured for ssl/563 reads news and posts just fine.

watching the mozilla/thunderbird messages ... they appear to be using smtp to post rather than nntp

gnus does nntp postings, which works (on this service) with standard nntp/119 ... but apparently isn't supported for nntps/563.

Massive i/o

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Massive i/o
Newsgroups: alt.folklore.computers
Date: Tue, 19 Jul 2005 09:40:45 -0600
CBFalconer writes:
That is relatively easily soluble. Just make sure that the power fail signal appears early enough to flush the cache.

old-time mainframe CKD disks had a problem with this forever ... starting when data was being written directly from processor memory. there was a failure mode where there was loss of power ... and there was sufficient power in the infrastructure for the drive to complete the write ... but not enuf power left to actually pull the data from memory (thru the memory interface, out thru the channel, thru the controller, and out to the disks ... in aggregate this could be a couple hundred feet with various latencies along the way).

the result was that an in-progress write during a power-failure might be completed with the interface supplying all zeros. the disk would then dutifully write correct error-correcting-codes (for the propagated zeros record)... so there wouldn't be an i/o error on subsequent reads.

for the cache-specific case, it wasn't viewed as a problem for the 3880-11/ironwood because it was presumed to be used with transient page data ... which wasn't assumed to survive a power failure.

using ssl news servers

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: using ssl news servers
Newsgroups: gnu.emacs.gnus
Date: Tue, 19 Jul 2005 09:35:38 -0600
Anne & Lynn Wheeler wrote:
watching the mozilla/thunderbird messages ... they appear to be using smtp to post rather than nntp

gnus does nntp posting, which works (on this service) with standard nntp/119 ... but apparently isn't supported for nntps/563.


and with a little more testing ... it appears that in some cases the post actually gets out in the nntps/563 case (with nntp-open-ssl-stream) ... it is just that gnus/emacs hangs (forever) waiting for some sort of completion and emacs has to be killed (and the openssl process goes into a solid compute loop ... and has to be killed as well).

IBM's mini computers--lack thereof

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's mini computers--lack thereof
Newsgroups: alt.folklore.computers
Date: Tue, 19 Jul 2005 09:54:20 -0600
sorry for the incomplete post ... i'm trying to get gnus working with nntps/563; it reads news fine but is hanging on posts ... i can't tell for sure whether it hangs before the post was actually done or after the post was sent off and it is hanging waiting for some sort of completion.

aka
https://www.garlic.com/~lynn/2005m.html#29 using ssl news servers
https://www.garlic.com/~lynn/2005m.html#31 using ssl news servers

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Massive i/o

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Massive i/o
Newsgroups: alt.folklore.computers
Date: Tue, 19 Jul 2005 11:46:45 -0600
Eric Sosman wrote:
Disk arrays with hardware RAID controllers usually have auxiliary power supplies (batteries, typically) to maintain the state of their caches and processors in the event of a power outage. These things are frequently sized to keep the bits alive for weeks without external power. I've often wondered why that same amount of stored oomph isn't used to keep the disks spinning for one additional minute, say, in order to finish all the pending writes. Is it just that the disk motors draw a lot more current than I imagine? There's probably a big draw when you bring the disk up to speed from rest, but how much juice does it take to keep the thing turning after it's already up to speed?

however, disk arrays (mirrored, raid5, etc) have tended to be marketed as no-single-point-of-failure. one of the first things that i would look for when auditing hardware raid projects in the early 90s ... was to check whether there was redundant electronic memory with independent power supplies for the power failure case; especially in the raid5 scenarios.

in the raid5 scenario ... you had to read the original record (being updated) along with the parity record. you would subtract out the original record (being changed) and then update the parity record with the new contents. then you had to rewrite both the actual record and the parity record (a simpler approach sometimes was to read the whole raid5 stripe, including parity, change the updated record and then recalculate the parity record from the complete stripe ... and then write both). In several cases, they didn't provide independent power and an electronic copy of the data & parity records that were needed during the write process.
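
a minimal sketch (in python, purely my own illustration with byte strings standing in for disk blocks) of that raid5 small-write sequence ... new parity = old parity xor old data xor new data, and then both the data block and the parity block still have to make it to the drives:

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def raid5_small_write(old_data: bytes, old_parity: bytes, new_data: bytes):
    # "subtract out" the old data (xor), fold in the new data (xor again);
    # the window between here and both blocks landing on disk is what needs
    # the independently powered electronic copy
    new_parity = xor_blocks(xor_blocks(old_parity, old_data), new_data)
    return new_data, new_parity

# sanity check with a 3-data-drive stripe
d0, d1, d2 = b"\x11" * 4, b"\x22" * 4, b"\x33" * 4
parity = xor_blocks(xor_blocks(d0, d1), d2)
new_d1, parity = raid5_small_write(d1, parity, b"\x55" * 4)
assert parity == xor_blocks(xor_blocks(d0, new_d1), d2)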

IBM's mini computers--lack thereof

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's mini computers--lack thereof
Newsgroups: alt.folklore.computers
Date: Tue, 19 Jul 2005 11:57:26 -0600
Anne & Lynn Wheeler wrote:
One of the edge cases involved using electronic memory on a drive for doing some rotational latency compensation (not directly for caching per se); basically data would start transferring to cache as soon as the head was able ... regardless of the position on the track.

ref:
https://www.garlic.com/~lynn/2005m.html#28

there was a hack done in the mid-to-late 70s to address this performance issue w/o having intermediate electronic storage at the drive. it was originally done for database logs on CKD dasd
https://www.garlic.com/~lynn/submain.html#dasd

basically the database log was being written a full-track of data at a time (and commit couldn't complete until the related log records had actually landed on the disk).

the scenario is that CKD dasd allows quite a bit of freedom in formatting the records on the track. the standard procedure is to sequentially increment the "ID" portion of the record ... and then when reading (or updating) ... use "search id equal" to locate the specific record to be read or written. The log hack was to format a track with something like 1k byte records ... and sequentially increment the ID field.

However, when going to write the log ... use something like "search id high" to begin writing ... and have one channel I/O program that consecutively wrote as many 1k byte records as had been formatted for the track. The "search id high" would be successful for whatever record was the first to rotate under the head ... and then it would consecutively write a full track of records from that position (w/o having to rotate around to a specific track location to start writing a full track of data).

On log recovery ... the records had to have some minimal sequence number embedded in the record itself ... since on a full-track read you wouldn't otherwise know the starting write sequence of the records.

This approach basically allowed the equivalent of local drive full-track storage for rotational latency compensation ... w/o actually requiring any local memory on the drive.
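
a toy simulation (python; my own illustration, not actual channel programming) of that trick ... the track is a ring of fixed-size record slots, the write starts at whatever slot happens to come under the head first (the "search id high" analogue), and recovery sorts the records back into order using the embedded sequence numbers:

import random

SLOTS_PER_TRACK = 12          # e.g. a track formatted as 12 x 1k records

def write_log_track(log_records):
    # start writing at an arbitrary rotational position and wrap around the
    # track ... no waiting for a specific starting record to rotate under
    assert len(log_records) == SLOTS_PER_TRACK
    track = [None] * SLOTS_PER_TRACK
    start = random.randrange(SLOTS_PER_TRACK)     # wherever the head lands
    for i, (seq, payload) in enumerate(log_records):
        track[(start + i) % SLOTS_PER_TRACK] = (seq, payload)
    return track

def recover_log_track(track):
    # full-track read; the embedded sequence numbers give the real order
    return [payload for seq, payload in sorted(track)]

records = [(seq, "log entry %d" % seq) for seq in range(SLOTS_PER_TRACK)]
track = write_log_track(records)
assert recover_log_track(track) == [payload for seq, payload in records]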

IBM's mini computers--lack thereof

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's mini computers--lack thereof
Newsgroups: alt.folklore.computers
Date: Wed, 20 Jul 2005 09:26:50 -0600
Morten Reistad writes:
PCs are getting there.

The original interface for a hard disk was the ST506 interface, never properly standardized; just happened to be reasonably well documented by IBM.

This interface has a ribbon "bus" cable shared between controllers and disk, and "data" cables going directly from controller to disk. It looks like a bowdlerized ESMD system, still in need of the occasional sacrifice of a small goat to work properly.

Then the engineers revolted, and built SCSI. It took a few generations, but then SCSI became a decent bus with a packet protocol on it. The bus part of it is designed "right" electrically, and that drives cost up. It remains the system of choice where reliability counts.

Enter IDE/ATA. It uses an electrically simple interface, designed for a single disk, with a second "slave" kludged on. It does run a protocol very similar to SCSI on top of this though. It has gone through a dozen or more incremental upgrades, changing names underway. They are pretty good in downward compatibility, the number of changes considered. I wouldn't push it though.

Then the electrical interfaces were changed, and ATA went where SCSI should have gone and invented a serial link, dedicated from disk to controller. This brings it all where BAH wants it; to a comms protocol between intelligent devices. A SATA raid controller is electrically simple, and is really only a hybrid multiplexer. It is all SCSI commands and packages on top of a point to point layer.

SCSI is following suit, and is adapting to similar technologies, as well as a fiber optic interface. All as comms protocols.


9333 in the early 90s was a pair of serial copper cables running packetized SCSI commands at 80mbits/sec ... similar type of concept. it could make use of plain scsi drives with a little bit of electronic interfacing ... which tended to make it a bit more expensive than IDE-based infrastructures. you could even run from controller to a drawer of scsi drives ... where the serial copper interface was between the controller and the drawer full of drives. a 9333 drawer of scsi drives ... had higher aggregate thruput vis-a-vis a drawer of identical scsi drives using a standard scsi controller and interface.

we were trying to get it converged with FCS ... so that we could get signal interoperability with serial copper and FCS ... however it eventually went with its own independent standard as SSA

minor SSA reference
https://www.garlic.com/~lynn/95.html#13

Massive i/o

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Massive i/o
Newsgroups: alt.folklore.computers
Date: Wed, 20 Jul 2005 09:58:08 -0600
jmfbahciv@aol.com wrote:
The problem is a problem depending on how your OS defines writes-completed. I can think of no cases where, in a timesharing system, writes completed need to be done even if the disk directory info isn't done. TOPS-10 did not update the directory info until after the write is done and the monitor has been notified. Thus if there's a fault, the user logs in again and sees that he has "lost" the write and starts over. This was better than logging in and not knowing which bits got done and which didn't. People can back up to a previous command. People cannot back up to a previous internal IOWD command list. This is where DEC was superior to IBM.

so the os/360 genre operating systems have been vulnerable to this CKD record write problem with propagated zeros in case of power failure ... because some number of the records were at static/fixed locations.

the original cms filesystem (circa '66, cp67/cms then morphing to vm370/cms) had a sequence where it wrote all the changed filesystem structure data to new disk record locations ... and then carefully replaced the MFD (master file directory) record. This worked for all cases except the situation where a power failure occurred while the MFD record was actually being written ... resulting in zeros being propagated thru the end of the record (and there would be no error indication). The nominal logic (modulo the partially zero-filled MFD record) was that the MFD pointed either to the old copies of all the file structure or to the changed/updated records written to new disk locations (rewriting the MFD was effectively a commit-like operation ... but was vulnerable to the power failure zero-fill problem).

the "EDF" cms filesystem, introduced in the mid-70s, created a pair of MFD records ... and recovery/startup would read both MFD records and determine which was the most recent valid MFD. on filesystem updates ... it would write updated/changed records to new disk locations (just like the original cms filesystem) but alternate between the two MFD records when writing. This caught various kinds of failure modes occurring during the updating of the MFD record.
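
a minimal sketch (python, my own illustration) of that dual-MFD alternation ... a version number plus checksum stands in for "most recent valid MFD", and a torn write of one slot just falls back to the other:

import json, zlib

class ToyFilesystem:
    def __init__(self):
        self.mfd_slots = [None, None]     # two fixed MFD record locations
        self.version = 0

    def commit(self, directory):
        # write updated records elsewhere (not shown), then rewrite an MFD
        self.version += 1
        body = json.dumps({"version": self.version, "dir": directory})
        self.mfd_slots[self.version % 2] = (body, zlib.crc32(body.encode()))

    def recover(self):
        # read both MFD slots, keep the newest one that still verifies
        valid = []
        for record in self.mfd_slots:
            if record is None:
                continue
            body, crc = record
            if zlib.crc32(body.encode()) == crc:
                valid.append(json.loads(body))
        return max(valid, key=lambda m: m["version"])["dir"]

fs = ToyFilesystem()
fs.commit({"file-a": [1, 2]})
fs.commit({"file-a": [1, 2], "file-b": [3]})
fs.mfd_slots[0] = ("\x00" * 40, 0)        # newest MFD torn by a power failure
print(fs.recover())                        # falls back to {'file-a': [1, 2]}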

public key authentication

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: public key authentication
Newsgroups: comp.security.ssh
Date: Wed, 20 Jul 2005 12:15:49 -0600
"Richard E. Silverman" writes:
So are the long bit strings that are your private keys. Anything is "guessable," in the sense in which you're using it. The issue is whether it is feasible or likely for someone to guess it, and the answer with a long, random password is "no."

The more important point here is the other issues with password authentication, such as:

- it reveals your password to a possibly compromised server

- it makes it more likely for people to select bad passwords, since they aren't using an automated tool like ssh-keygen

- it does not resist MITM attacks

- it is cumbersome to automate


these are somewhat at the micro-level ... aka a security officer looking at passwords as a form of shared-secrets (or, if you will, an institutional-centric viewpoint):
https://www.garlic.com/~lynn/subintegrity.html#secrets

... there is also the issue of shared-secrets being required to be unique for every unique security domain. in the past when a person was involved in one or two different security domains ... they had only one or two shared-secrets to memorize. now, it is not uncommon for people to have scores of unique shared-secrets that they have to memorize. taking the person-centric view ... this also has resulted in reaching human factor limitations when humans now have to make some sort of record of their scores of shared-secrets (because most humans don't have the capacity to otherwise deal with the situation). The necessity of resorting to some sort of recording infrastructure for tracking the scores of shared-secrets opens up additional threats and vulnerabilities.

the other compromise ... is that some number of infrastructures, finding that humans have a difficult time keeping track of unique, per-infrastructure shared-secrets ... are resorting to common information that is known by the individual, like date-of-birth, mother's maiden name, social security number, etc. this violates fundamental security guidelines (but recognizes that there are common human limitations) ... and has led to a lot of the current identity theft situations.

the institutional-centric model doesn't allow for the human limitations of having to deal with scores of different security domains, each requiring its own unique shared-secret for authentication. the person-centric model recognizes that requiring individuals to deal with scores of unique security domains, each with its own unique shared-secret, isn't a practical paradigm for people.

the basic asymmetric key technology allows one key (of a key-pair) to encode information with the other key decoding the information (as opposed to symmetric key technology where the same key is used for both encoding and decoding).

there is a business process called public key ... where one key (of a key pair) is identified as public and freely distributed. The other key (of the key pair) is identified as private, kept confidential and never divulged.

there is a business process called digital signature ... where the hash of a message (or document) is calculated and then encoded with the private key producing a "digital signature". the recipient then recalculates the hash of the message, decodes the digital signature (with the correct public key, producing the original hash), and compares the two hash values. If the two hash values are the same, then the recipient can assume

1) the message/text hasn't been modified since being digitally signed

2) something you have authentication ... aka the originator has access to and use of the corresponding private key.
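
a minimal sketch of that sign/verify flow (python, assuming the third-party pyca/cryptography package is installed; the specific curve and hash are just my choices for illustration):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())    # kept confidential
public_key = private_key.public_key()                    # freely distributed

message = b"some message or document"
# "sign" = hash the message and encode the hash with the private key
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# "verify" = recompute the hash and check it against the decoded signature
try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("unmodified, and originator has access to/use of the private key")
except InvalidSignature:
    print("modified, or signed with some other private key")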

From 3-factor authentication:
https://www.garlic.com/~lynn/subintegrity.html#3factor
something you have
something you know
something you are


... shared-secrets can be considered a form of something you know authentication and digital signatures a form of something you have authentication.

The integrity of digital signature authentication can be improved by using certified hardware tokens where the key pair is generated on the token and the private key is protected from ever leaving the token. Where a private key is certified as only existing in a specific kind of hardware token ... then digital signature verification can somewhat equate access and use of the private key with access and use of the hardware token (of known integrity characteristics).

There has been some amount of stuff in the press about the benefits of two-factor authentication (possibly requiring both something you have and something you know). The issue really comes down to whether the two factors are resistant to common vulnerabilities and threats. An example is PIN-debit payment cards which are considered two-factor authentication ... i.e. the something you have magstripe card and the something you know PIN (shared-secret).

The issue is that some of the ATM-overlay exploits can record both the magstripe and the PIN ... resulting in a common vulnerability that allows production of a counterfeit card and fraudulent transactions. The supposed scenario for two-factor authentication is that the different factors have different vulnerabilities (don't have common threats and vulnerabilities). Supposedly, the original PIN concept was that if the card was lost or stolen (a something you have vulnerability), then the crook wouldn't also have access to the required PIN. Note, however, because of human memory limitations, it is estimated that 30 percent of PIN-debit cards have the PIN written on them ... also creating a common threat/vulnerability.

public key hardware tokens can also require a PIN to operate. However, there can be significant operational and human factors differences between public key hardware tokens with PINs and PIN-debit magstripe cards:

1) the PIN is transferred only to the hardware token (which you own) for correct operation ... the PIN is never required by the rest of the infrastructure, so it becomes a "secret" rather than a shared-secret

2) in a person-centric environment, it would be possible to register the same public key/hardware token with multiple different infrastructures (in part because the public key can only be used to verify, it can't be used to impersonate). this could drastically reduce the number of unique hardware tokens an individual would have to deal with (and correspondingly the number of PINs needed for each unique token), possibly to one or two.

An institutional-centric environment would issue a unique hardware token to every individual and require that the individual choose a unique (secret) PIN to activate each token ... leading to a large number of PINs to be remembered and increasing the probability that people would write the PIN on the token. A very small number of tokens would mean that there would be a very small number of PINs to remember (less taxing on human memory limitations) as well as increase the frequency with which the limited number of token/PINs were repeatedly used (reinforcing the human memory for the specific PINs).

Substituting such a hardware token in a PIN-debit environment ... would still leave the PIN vulnerable to ATM-overlays that skim the static data; but the hardware token wouldn't be subject to counterfeiting ... since the private key is never actually exposed. In this case, the two factors are vulnerable to different threats ... so a single common attack wouldn't leave the individual exposed to fraudulent transactions. The PIN makes the hardware token resistant to common lost/stolen vulnerabilities and the hardware token makes the PIN resistant to common skimming/recording vulnerabilities.

Encrypted software file private key implementations have some number of additional vulnerabilities vis-a-vis a hardware token private key implementation ... aka the compromise of your personal computer. Normally the software file private key implementation requires a PIN/password to decrypt the software file ... making the private key available. A compromised personal computer can expose both the PIN entry (key logging) and the encrypted private key file (allowing a remote attacker to obtain the encrypted file and use the pin/password to decrypt it).

Note that the original pk-init draft for kerberos specified the simple registration of public key in lieu of passwords and digital signature authentication ... in much the same way that common SSH operates ...
https://www.garlic.com/~lynn/subpubkey.html#certless

and w/o requiring the expense and complexity of deploying a PKI certificate-based operation
https://www.garlic.com/~lynn/subpubkey.html#kerberos

similar kinds of implementations have been done for radius ... where a public key is registered in lieu of a password ... and straight-forward digital signature verification performed ... again w/o the complexity and expense of deploying a PKI certificate-based operation
https://www.garlic.com/~lynn/subpubkey.html#radius

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Massive i/o

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Massive i/o
Newsgroups: alt.folklore.computers
Date: Wed, 20 Jul 2005 12:45:31 -0600
... note that the journal file system for aixv3 (started in the late 80s) took a database logging approach to filesystem metadata.

in the cms filesystem case ... the records containing changed filesystem metadata were written to new disk record locations ... and then the MFD was rewritten as a kind of commit to indicate the new metadata state ... as opposed to the old metadata state. the EDF filesystem, in the mid-70s, updated the original cms filesystem (from the mid-60s) to have two MFD records ... to take care of the case where a power failure and a write error of the MFD record happened concurrently.
https://www.garlic.com/~lynn/2005m.html#36 Massive i/o

the aixv3 filesystem took a standard unix filesystem ... where metadata information had very lazy write operations and fsync still had numerous kinds of failure modes ... and captured all metadata changes as they occurred and wrote them to log records ... with periodic explicit commit operations. restart after an outage was very fast (compared to other unix filesystems of the period) because it could just replay the log records to bring the filesystem metadata into a consistent state.
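
a toy rendition (python; my own sketch, nothing to do with the actual aixv3 jfs code) of that metadata-logging idea ... changes are appended to a redo log as they happen, a commit record marks them durable, and restart just replays committed records instead of scanning the whole filesystem:

class ToyMetadataJournal:
    def __init__(self):
        self.log = []          # append-only redo log (the journaled part)
        self.metadata = {}     # in-place metadata, written lazily

    def change(self, key, value):
        self.log.append(("set", key, value))   # logged before the lazy write
        self.metadata[key] = value             # may be lost/partial in a crash

    def commit(self):
        self.log.append(("commit",))

    @staticmethod
    def replay(log):
        # fast restart: rebuild consistent metadata from committed records only
        state, pending = {}, {}
        for record in log:
            if record[0] == "set":
                pending[record[1]] = record[2]
            else:                               # commit record
                state.update(pending)
                pending = {}
        return state

j = ToyMetadataJournal()
j.change("inode.42.size", 4096)
j.commit()
j.change("inode.7.size", 512)                  # crash before commit
print(ToyMetadataJournal.replay(j.log))        # {'inode.42.size': 4096}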

there was still an issue with incomplete writes on the disks of the period. the disks tended to have 512-byte records and were defined to either perform a whole record write or not do the write at all (even in the face of power failure). The problem was that a lot of the filesystems were 4kbyte page oriented ... and a consistent "record" write meant a full 4k ... involving eight 512-byte records ... on devices that only guaranteed the consistency of a single 512-byte record write (so there could be inconsistency where some of the eight 512-byte records of a 4k "page" were written and some were not).

a lot of ha/cmp was predicated on having fast restart
https://www.garlic.com/~lynn/subtopic.html#hacmp

Massive i/o

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Massive i/o
Newsgroups: alt.folklore.computers
Date: Wed, 20 Jul 2005 13:11:27 -0600
Anne & Lynn Wheeler wrote:
there was still an issue with incomplete writes on the disks of the period. the disks tended to have 512-byte records and were defined to either perform a whole record write or not do the write at all (even in the face of power failure). The problem was that a lot of the filesystems were 4kbyte page oriented ... and a consistent "record" write meant a full 4k ... involving eight 512-byte records ... on devices that only guaranteed the consistency of a single 512-byte record write (so there could be inconsistency where some of the eight 512-byte records of a 4k "page" were written and some were not).

the disks had another problem involving automatic hardware write error recovery ... a low-level rewrite using spare areas on the surface. worst case recovery could take something like 30 seconds ... which wasn't guaranteed to complete in the middle of a power failure ... only standard writes w/o any hardware recovery could be considered to complete (aka the drives didn't totally eliminate the situation from occurring ... just lowered the probability)

capacity of largest drive

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: capacity of largest drive
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 20 Jul 2005 15:02:42 -0600
gilmap writes:
Etc. I suspect that those who say increasing the 3390 values to anything larger would introduce compatibility problems are right. Isn't it time to stop taking small steps, which are incompatible anyway, and progress to FBA, preferably with 64-bit addressing?

circa 1980 when i suggested that ... stl told me that even if i provided them fully tested and integrated code ... it would still cost something like $26m to ship it in an MVS release (i guess pubs, education, ???).

the business case issue at the time was that there wasn't any demonstration that additional disk sales would happen (i.e. the prevailing judgement was that it might just convert some ckd sales to fba sales?).

the argument that, over the years, the costs of not having shipped fba support would be far greater than the cost of shipping fba support in the early 80s ... and that fba support would likely have to be shipped eventually anyway ... didn't appear to carry any weight.

random past dasd posts:
https://www.garlic.com/~lynn/submain.html#dasd

random past posts related to working with bldg 14 (dasd engineering) and bldg. 15 (dasd product test lab)
https://www.garlic.com/~lynn/subtopic.html#disk

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM's mini computers--lack thereof

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's mini computers--lack thereof
Newsgroups: alt.folklore.computers
Date: Thu, 21 Jul 2005 07:29:15 -0600
Joe Morris wrote:
Back when I was running a mainframe data center I personally ran an endurance test on each UPS box (which at one point was three 45-KVA units and one 100-KVA box, all from Exide) every December. In at least one test I found that one of the 45-KVA boxes, even though its load was probably around 20 KVA, gave me all of 45 seconds endurance before the low-battery warning sounded...and the box still had two weeks on its original warranty. IIRC, that saved me about $5K of replacement batteries because Exide replaced them all under warranty.

a couple past PDU posts (somewhat UPS related):
https://www.garlic.com/~lynn/2000b.html#85 Mainframe power failure (somehow morphed from Re: write rings)
https://www.garlic.com/~lynn/2001.html#61 Where do the filesystem and RAID system belong?
https://www.garlic.com/~lynn/2002g.html#62 ibm icecube -- return of watercooling?

public key authentication

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: public key authentication
Newsgroups: comp.security.ssh
Date: Thu, 21 Jul 2005 09:34:42 -0600
Darren Tucker writes:
I disagree with this part. In general a private key is another instance of something you know, albeit one with the useful property of being able to prove you know it without disclosing it to the other party.

The private key is usually just a collection of bits and can be copied, disclosed or published.

If it was encapsulated inside, eg, a suitably tamper-proof smartcard then it could form part of something you have (as you later noted).

The rest of the post is very good stuff, but my point is possession of a particular private key only represents something you have in very specific circumstances, and those circumstances aren't commonly present in SSH deployments.

One could even view the ATM skimmers you refer to as converting something you have factors (the card) into something you know (the content of the magstripe), resulting in a significant weakening of the system.


very few people "know" (aka remember) a private key ... they can't be asked to reproduce such a key for authentication purposes (even tho it might be theoretically possible); it would generally be considered less likely than expecting people to remember scores of hard-to-guess, complex passwords that change monthly. as such, a "private key" is nominally handled in some sort of container or other kind of object (even tho it may be a software container of bits) ... which potentially can be analyzed and copied (even if they are only software abstractions, they tend to be treated as objects that somebody would carry around ... as opposed to something they would remember).

I would contend that the operational deployment and use of private keys comes closer to approximating the something you have paradigm than the something you know paradigm ... even tho they are purely electronic bits. Even w/o a real hardware token container ... using only a software container ... the mechanics of the software container tend to approximate the operational characteristics of a physical object ...

so while it is theoretically possible to "know" a private key ... I contend all the deployments of private key infrastructures try their best to approximate a "have" paradigm.

Similarly the magstripe can be analyzed and copied ... generating counterfeit cards & magstripes. However, the account number from the magstripe can be extracted and used in fraudulent MOTO transactions. I know of no private key operational deployments providing for a mechanism for human communication of the private key ... all the deployments make use of the private key in some sort of container ... even if it is only the software simulation of a physical object.

The big difference between public key deployments and lots of the account fraud that has been in the press ... is that in the case of credit payment cards ... communicating the account number is sufficient to initiate fraudulent MOTO transactions ... and the account number is also required in lots of other merchant and processing business processes. The account number is required to be readily available for lots of business processes (other than originating transactions) ... and at the same time it is the basis for authenticating a transaction origination.

From the security PAIN acronym
P ... privacy (or sometimes CAIN & confidentiality)
A ... authentication
I ... integrity
N ... non-repudiation


the multitude of business processes (other than transaction origination) that require access to the account number ... result in a strong security integrity requirement ... but a relatively weak privacy requirement (since the account number needs to be readily available).

the conflict comes when knowledge of the account number is also essentially the minimum necessary authentication mechanism for originating a transaction ... which then leads to a strong security privacy requirement.

the result is somewhat diametrically opposing requirements ... requiring both weak and strong confidentiality, simultaneously.

By contrast, in a public key infrastructure, a digital signature may be carried as the authentication mechanism for a transaction and a public key is onfile someplace for validating the digital signature. Neither the digital signature nor the public key can be used for originating a new transaction.

In the x9.59 financial standard
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#x959

when mapped to iso8583 (credit, debit, stored value, etc) transaction, a digital signature is used for authentication.
https://www.garlic.com/~lynn/8583flow.htm

furthermore, the standard defines that account numbers used in x9.59 transactions are not valid for use in non-authenticated (non-x9.59) transactions.

a public key onfile with a financial institution can be used for validating a x9.59 digital signature ... but can't be used for originating a transaction ... resulting in a security integrity requirement but not a security confidentiality requirement.

the transaction itself carries both the account number and the digital signature of the transaction. the digital signature is used in the authentication process ... but can't be used for originating a new transaction ... and therefore there is no security *confidentiality* requirement for the digital signature.

The account number is needed for x9.59 transaction for a multitude of business processes, but no longer can be used, by itself, for origination of a fraudulent transaction ... eliminating any security confidentiality requirement for the account number.

Another analogy is that in many of the existing deployments, the account number serves the function of both userid and password, leading to the conflicting requirements of making the userid generally available for userid-related business processes ... while at the same time the same value is used as a form of authentication password, resulting in the confidentiality and privacy requirements.

X9.59 forces a clear separation between the account number as a "userid" function and the digital signature as a "password" (or authentication) function.
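
a rough sketch of that separation (python, again assuming the pyca/cryptography package; the field names and the "on-file key" registry are made up for illustration and are not the actual x9.59 message format):

import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# the consumer registers a public key against the account; the account number
# itself stays a plain "userid" ... knowing it isn't enough to authenticate
consumer_key = ec.generate_private_key(ec.SECP256R1())
on_file_keys = {"acct-1234": consumer_key.public_key()}

def sign_txn(txn, private_key):
    return private_key.sign(json.dumps(txn, sort_keys=True).encode(),
                            ec.ECDSA(hashes.SHA256()))

def verify_txn(txn, signature):
    public_key = on_file_keys[txn["account"]]        # lookup by "userid"
    try:
        public_key.verify(signature, json.dumps(txn, sort_keys=True).encode(),
                          ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

txn = {"account": "acct-1234", "amount": "19.95", "merchant": "example"}
sig = sign_txn(txn, consumer_key)
print(verify_txn(txn, sig))                          # True

# harvesting the account number alone doesn't let anybody originate a txn
fraud = {"account": "acct-1234", "amount": "999.00", "merchant": "crook"}
print(verify_txn(fraud, sig))                        # False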

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Code density and performance?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Code density and performance?
Newsgroups: comp.arch
Date: Thu, 21 Jul 2005 14:05:29 -0600
glen herrmannsfeldt writes:
For shared access as far as I know it is usual to disallow partial word or unaligned access. S/370 has CS and CDS, 32 and 64 bit compare and swap, and both require alignment even though ordinary S/370 instructions don't.

note that CAS was doing fine grain smp locking work on 360/67 with cp67 at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

and invented compare&swap (chosen because CAS are his initials).

there was a push to get the instruction into 370 ... but there was push back from the 370 architecture owners in pok ... saying that a new instruction for purely SMP use couldn't be justified. To justify getting CAS into 370 ... we had to come up with justifications that were applicable to non-SMP environments ... thus were born the examples of serializing/coordinating multi-threaded applications (whether they ran in non-SMP or SMP environments ... the issue was having an atomic storage update instruction where one thread might interrupt another thread operating in the same memory space). thus were born the CAS programming notes. As part of incorporating CAS into 370 ... full-word (CS) and doubleword (CDS) versions were defined.

In later versions of the principles of operation, the CAS programming notes were moved to an appendix (nominally, POP programming notes were part of the detailed instruction operation description).
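
the multi-threaded update pattern those programming notes describe looks roughly like the following (a toy python rendition; the CAS primitive itself is simulated with a lock, since the whole point of the real CS instruction is that the hardware does the compare-and-replace atomically):

import threading

class Word:
    # simulated storage word with an atomic compare-and-swap
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()    # stands in for hardware atomicity

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        with self._lock:
            if self._value == expected:
                self._value = new
                return True              # swap happened (condition code 0)
            return False                 # another thread got there first

counter = Word(0)

def add_one_thousand():
    for _ in range(1000):
        while True:                      # the classic CS retry loop
            old = counter.load()
            if counter.compare_and_swap(old, old + 1):
                break

threads = [threading.Thread(target=add_one_thousand) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.load())                    # 4000; no lock is held across the
                                         # whole read/modify/replace sequence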

current descriptions:

compare and swap
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/7.5.28?SHELF=DZ9ZBK03&DT=20040504121320

compare double and swap
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/7.5.29?SHELF=DZ9ZBK03&DT=20040504121320

appendix a.6: multiprogramming (multi-thread by any other name) and multiprocessing examples (old programming notes):
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/A.6?SHELF=DZ9ZBK03&DT=20040504121320

the (newer) perform locked operation (PLO) instruction
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/7.5.107?SHELF=DZ9ZBK03&DT=20040504121320

... misc. other compare&swap and smp postings:
https://www.garlic.com/~lynn/subtopic.html#smp

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

TLAs - was summit else entirely

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TLAs - was summit else entirely
Newsgroups: bit.listserv.ibm-main
Date: Thu, 21 Jul 2005 14:22:09 -0600
Chase, John wrote:
Des Plaines, Illinois, to be exact: Approximately two miles (3.2 km) west of my office.

https://web.archive.org/web/20060325095552/http://www.yelavich.com/history/ev196803.htm


i was an undergraduate at a university that was a beta-test site for cics in 1969. it was for use in an onr-funded library project. i remember shooting an early bug involving a BDAM OPEN failure. the original CICS code had been written for a specific BDAM file environment ... and the univ. library was using different BDAM file options. I was able to patch some of the code to get past the problem.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Digital ID

Refed: **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: microsoft.public.exchange.admin
Subject: Re: Digital ID
Date: Thu, 21 Jul 2005 20:05:52 -0700
Emyeu wrote:
Exchange 2003 & Outlook 2003. To encrypt email message Digital ID is required. How to get digital ID from Exchange server instead from external certification authority?

the technology is asymmetric key cryptography ... what one key encodes, the other key decodes (as opposed to symmetric key where the same key both encrypts and decrypts).

there is a business process called public key ... where one of the key-pair is identified as "public" and made widely available. The other of the key-pair is identified as "private" and kept confidential and never divulged.

there is a business process called digital signature ... where the originator calculates the hash of a message, encodes it with the private key producing a digital signature, and transmits both the message and the digital signature. the recipient recalculates the hash of the message, decodes the digital signature with the public key (producing the original hash) and compares the two hashes. If they are equal, then the recipient can assume that

1) the contents haven't changed since the original digital signature

2) something you have authentication, i.e. the originator has access to and use of the corresponding private key.

PGP-type implementations involve the senders and receivers having trusted repositories of public keys. The senders can use their private key to digitally sign messages and transmit them to the recipients. The recipients can authenticate the sender by verifying the digital signature with the corresponding public key. Senders can also use the on-file public key of the recipient to encode a message being sent (so only the addressed recipient can decrypt the message with the specific private key). Some actual message encryption implementations may be a two-step process where a random symmetric key is generated, the message is encrypted with the random symmetric key and the random symmetric key is then encoded with the recipient's public key. The recipient then uses their private key to decode the random symmetric key, and then uses the decoded random symmetric key to decrypt the actual message.
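
a minimal sketch of that two-step (hybrid) scheme (python, assuming the third-party pyca/cryptography package; RSA-OAEP and AES-GCM are my choices for illustration, not anything mandated by Exchange/Outlook or S/MIME):

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# recipient's key pair; the public half is what the sender must already have
recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# sender: a random symmetric key encrypts the message; the symmetric key is
# then encoded with the recipient's public key
message = b"the actual mail body"
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, message, None)
wrapped_key = recipient_public.encrypt(session_key, oaep)

# recipient: the private key decodes the symmetric key, which decrypts the body
unwrapped = recipient_private.decrypt(wrapped_key, oaep)
assert AESGCM(unwrapped).decrypt(nonce, ciphertext, None) == message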

In the SSL implementation used by browsers for encrypted communication, digital certificates are introduced.
https://www.garlic.com/~lynn/subpubkey.html#sslcert

These are special messages containing the public key of the server and the server's domain name, digitally signed by a certification authority. Users have their trusted repositories of public keys loaded with the public keys of some number of certification authorities (in the case of many browsers, these certification authority public keys have been preloaded as part of the browser build). A server has registered its public key and domain name with some certification authority and gotten back a digital certificate (signed by the certification authority).

The client browser contacts the server with some data. The server digitally signs the data and returns the digital signature along with its domain name digital certificate. The client browser finds the correct public key in its local repository and verifies the certification authority's digital signature. If the certification authority's digital signature verifies, then the client assumes that the contents of the digital certificate are correct. The client browser then checks the domain name in the digital certificate against the domain name used in the URL to contact the server (if they are the same, then the client assumes that the server it thinks it is talking to might actually be the server it is talking to). The client browser can now use the server's public key (also contained in the digital certificate) to validate the returned server's digital signature. If that validates, then the client has high confidence that the server it thinks it is talking to is probably the server it is talking to. The browser now generates a random symmetric key, encodes it with the server's public key (taken from the digital certificate) and sends it to the server. When the server decodes the random symmetric key with its private key ... then both the client and server have the same random symmetric key and all further communication between the two is encrypted using that random symmetric key.
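
for comparison, the certificate/domain-name matching part of that flow is what python's standard ssl module does when handed the hostname from the typed URL ... a small sketch (example.com is just a placeholder host):

import socket, ssl

hostname = "example.com"                 # the name taken from the typed URL
context = ssl.create_default_context()   # loads the trusted CA public keys

with socket.create_connection((hostname, 443)) as sock:
    # wrap_socket verifies the certification authority's signature on the
    # server's certificate and checks that the certificate's name matches
    # server_hostname; a mismatch raises ssl.SSLCertVerificationError rather
    # than continuing on to the key exchange
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version(), tls.getpeercert()["subject"])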

So the basic starting point is that the sender has to already have the recipient's public key in some locally accessible place. In the normal email scenario this tends to be a long-term repository where the sender collects, beforehand, the public keys of recipients that they wish to securely communicate with. There are also a number of public-key server implementations ... where senders can obtain recipient public keys in real time.

In the SSL dynamic session scenario ... the server's public key is provided as part of the two-way session initiation (although the client browser still needs a trusted repository of public keys ... in this case at least for some number of certification authorities ... so that the dynamically obtained digital certificate containing the server's public key can be verified).

In a number of implementations ... the term "digital IDs" is used interchangeably with digital certificates ... and digital certificates can represent one source for obtaining a recipient's public key.

However, when encrypting messages ... the sender isn't encoding with either their own public or private keys ... they are encoding with the recipient's public key. If the sender doesn't already have the recipient's public key on file ... it is possible that the recipient has registered their public key with some public key repository server ... and the sender can obtain the recipient's public key, in real-time, from such a server.

IBM's mini computers--lack thereof

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's mini computers--lack thereof
Newsgroups: alt.folklore.computers
Date: Thu, 21 Jul 2005 23:01:14 -0600
Jack Peacock wrote:
Latency is a problem no matter what the physical layer on the interface. So SCSI has command tag queues and scatter-gather capability, to minimize turnaround on the data bus. Transitioning between states on SCSI is relatively expensive, but once the data transfer state begins it's fairly efficient at transferring lots of data quickly. You can see the influence on IDE SATA drives as they are now moving to the same command structure as SCSI: packets, command queues, and (an improvement in thruput over SCSI) one channel per drive.

To some extent propagation time is hidden by seeks. Since there are three or more orders of magnitude in delays between the two, the time for the command packet to get to the drive isn't that big a deal. In some cases the command is going to a cluster or raid controller anyway, followed by multiple command streams to several drives in parallel, so the propagation time may not represent much in the overall execution time.

The original 5MB/sec SCSI narrow bus did have timing restrictions that reduced some devices to 2-3MB/sec if there was a lot of bus state activity, even though there were no cable length issues.


1991 or so was 9333 with scsi commands over 80mbit serial copper to 9333 drawers. 9333 had significantly higher thruput than a "real" scsi bus to drawers of drives having effectively identical operational characteristics. a big difference between scsi over 9333 serial copper vis-a-vis real scsi ... was that scsi bus command processing was synchronous ... while 9333 command processing was asynchronous (basically dual-simplex over the pair of serial copper cables ... one dedicated to transmission in each direction).

similar asynchronous operation was defined for SCSI commands over FCS (fiber channel standard) and SCI (scalable coherent interface). minor past reference
https://www.garlic.com/~lynn/95.html#13

somewhere long ago and far away, I did a number of A/B 9333/9334 comparisons of large numbers of concurrent operations to multiple drives.

we were advocating that 9333 serial copper become interoperable with FCS ... instead it was turned into SSA. SSA reference
http://www.matilda.com/hacmp/ssa_basics.html

and lots of HA/CMP references
https://www.garlic.com/~lynn/subtopic.html#hacmp

a paper on SSA performance:
http://citeseer.ist.psu.edu/690893.html

a couple SSA redbook references:
http://www.redbooks.ibm.com/abstracts/sg245083.html
http://www.redbooks.ibm.com/redbooks.nsf/0/8b81ae28433f2cba852566630060b942?OpenDocument

Code density and performance?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Code density and performance?
Newsgroups: comp.arch
Date: Thu, 21 Jul 2005 23:53:42 -0600
glen herrmannsfeldt wrote:
The OS/360 and MVS linker start everything on a doubleword boundary. Not for this reason, but because that is the only way programs can be sure that data that should be doubleword aligned is doubleword aligned.

As I understand it, the MS malloc() didn't supply a doubleword aligned result for way longer than it should have (for speed on 64 bit floating point). It may be that MS linkers did/do the same thing. The 386 and 486 had a 32 bit data bus so there was no need until the pentium.

They should at least do word (32 bit) alignment, and the original Alpha had 32 bit store.


sometime in the early 80s ... early in the 3084 time-frame ... MVS and VM had their smp kernels restructured to align kernel (static and dynamic) stuff on cache boundaries and in multiples of the cache line size (to eliminate a lot of cross-cache thrashing). supposedly the result was something better than a five percent improvement in overall performance.

Code density and performance?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Code density and performance?
Newsgroups: comp.arch
Date: Sun, 24 Jul 2005 10:53:55 -0600
forbin@dev.nul (Colonel Forbin) writes:
I think Nick has a valid point here. RISC became an ideology, not a practical architectural direction with respect to "real work." The early RISC proponents went way too far in pruning the hardware instruction set, so they had to essentially backpedal. Some of the early RISC hardware was abysmal as a consequence, including especially the early Alphas, particularly if you were running OSF/1.

OTOH, the VAX was way too far out on the CISC end of things, on the tail end of the old assumptions that "hardware" instructions were always faster long after hardwired CPUs like the H6000 had become a thing of the past.


801/risc circa 1976 had no protection domains ... one of the software/hardware tradeoffs was that the compiler and link/loader would validate the correctness of the program ... and once loaded ... the program would be allowed to execute any instruction.
https://www.garlic.com/~lynn/subtopic.html#801

the shared segment paradigm also made trade-offs on the basis of hardware simplicity.

there was this advanced technology conference in pok ... where we were presenting 16-way (370) smp and the 801 group was presenting 801/risc. somebody from the 801 group started criticizing the 16-way smp presentation because they had looked at the vm370 code and said that the vm370 code they had looked at contained no smp support ... and therefore couldn't be used to support a 16-way smp implementation. the counter claim was (effectively) that the basic support was going to be something like 6000 lines of code ... i had already done the VAMPS 5-way design in 75 based on modifications and moving most of the affected code into the microcode of the hardware
https://www.garlic.com/~lynn/submain.html#bounce

when VAMPS got killed, i had done a design that moved the affected code back from microcode into low-level software.
https://www.garlic.com/~lynn/subtopic.html#smp

so when the 801 group started presenting, i pointed out that the virtual memory segment architecture had been moved from tables into 16 registers ... which severely limited the number of concurrent shared objects that could be defined in a virtual memory space at any moment.

the counter-argument was that 801/risc represented a software/hardware trade-off where significant hardware simplicity was compensated for by a significant increase in software complexity; that there were no protection domains in 801/risc and that an application program was going to be able to change segment register values as easily as it could change general purpose address registers; and that program correctness would be enforced by the compiler (generating non-violating code) and the link/loader that would only enable correctly compiled code for execution. basically this came down to the cp.r operating system and the pl.8 compiler.

so my counter-argument was that while they effectively argued that it was going to be impossible for us to make a 6000-line code change to the vm370 kernel ... it appeared like they were going to have to write a heck of a lot more than 6000 lines of code to achieve the stated cp.r and pl.8 objectives.

later in the early 80s ... ROMP was going to be used with cp.r and pl.8 by the office products division for a displaywriter follow-on product. when that product was killed ... it was decided to retarget the ROMP displaywriter to the unix workstation market. something called the virtual resource manager was defined (somewhat to retain the pl.8 skills) and an at&t unix port was contracted out to the company that had done the pc/ix port to the ibm/pc (with the unix being ported to an abstract virtual machine interface supplied by the virtual resource manager). The issue here was that hardware protection domains had to be re-introduced for the unix operating system paradigm. this was eventually announced as the pc/rt with aix.

however, the virtual memory segment register architecture wasn't reworked ... which then required kernel calls to change segment register values for different virtual memory objects (inline application code could no longer change segment register values as easily as it could change general purpose address register pointers) ... and the limited number of segment registers then again became an issue regarding the number of virtual memory objects that could be specified concurrently.

To somewhat compensate for this limitation there was later work on virtual memory shared library objects ... where aggregations of virtual memory objects could be defined and a virtual memory segment register could point to an aggregated object (containing a large number of individual virtual memory objects).

the original 801 architecture ... because of the ease with which an inline application program could change segment register values ... was frequently described as having a much larger virtual address space than 32bits. the concept was that while the original 370 had only 24-bit addressing ... there were only 15 general purpose registers that could be used as address pointers ... and any one address pointer could only address up to 4k of memory ... so the actual addressability by a program at any one moment (w/o changing a general purpose register pointer) was 15*4k = 60k. However, an application program could change a pointer to be any value within 24bit addressing.

so while romp was 32bit virtual addressing ... and 16 virtual memory segment registers each capable of addressing 28bits (28bits * 16 segments also equals 32bits) ... the original 801/romp design allowed an inline application program to change segment register values to point to any one of 4096 (12bit) segments. the result was that 801/romp was described as having 28bit * 4096 addressing ... or 40bit addressing.

The later RIOS/power chip upped the number of segment register values to 16meg (24bit) ... and even tho hardware protection domains (in support of the unix programming paradigm) no longer allowed inline application code to change virtual memory segment register values ... you still saw some descriptions of rios/power having 28bit * 16meg or 52bit virtual addressing.
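
the addressing arithmetic being referred to, spelled out (python, just the powers of two from the paragraphs above):

segment_size_bits = 28                  # each segment register maps 2**28 bytes

romp_segment_ids = 2 ** 12              # 4096 possible segment values
assert 2 ** segment_size_bits * romp_segment_ids == 2 ** 40    # "40-bit"

rios_segment_ids = 2 ** 24              # 16meg possible segment values
assert 2 ** segment_size_bits * rios_segment_ids == 2 ** 52    # "52-bit"

# and the 370 comparison: 15 base registers x 4k displacement each
assert 15 * 4096 == 60 * 1024           # the 60k momentary addressability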

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM's mini computers--lack thereof

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's mini computers--lack thereof
Newsgroups: alt.folklore.computers
Date: Mon, 25 Jul 2005 12:11:30 -0600
prep writes:
If you go WAY back, DEC did an industrial interface called the ICS-11. Its evil twin was the ICR-11, and you could have up to 1 mile, it was claimed, between computer and the sharp end. In fact it was a ICS-11 and a unibus extender linked by about a 1 Mbit/sec link!

In principle, you could have included other controllers at the remote end as well.


thornton and cray both worked on cdc computers. cray left to form cray research. thornton left to form network systems ... they produced quite a few "network" adapters ... that allowed interoperability between a wide range of different processors at high speed (late '70s ... sort of high-speed local area networks at tens of megabits ... the standard adapter could have 1-4 50mbit connections). they also produced telco adapters that allowed bridging to remote sites, the 710, 720, and 715. The 720 was sort of a pair of 710s ganged together, originally for a dual-simplex satellite application done for my wife (aka sort of like a lot of the current day dual-simplex serial interfaces that simulate full duplex over pairs of serial cable ... copper &/or fiber).

some amount of that was eventually rolled into the high speed data transport project
https://www.garlic.com/~lynn/subnetwork.html#hsdt

later there was tcp/ip router support specified by rfc1044:
https://www.garlic.com/~lynn/subnetwork.html#1044

network systems has since been acquired by stk .... however they still retain the domain name: network.com.

related posts on serial interfaces
https://www.garlic.com/~lynn/95.html#13 SSA
https://www.garlic.com/~lynn/2005.html#50 something like a CTC on a PC
https://www.garlic.com/~lynn/2005e.html#12 Device and channel
https://www.garlic.com/~lynn/2005l.html#26 ESCON to FICON conversion
https://www.garlic.com/~lynn/2005m.html#34 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005m.html#35 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005m.html#46 IBM's mini computers--lack thereof

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Cluster computing drawbacks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Cluster computing drawbacks
Newsgroups: comp.arch
Date: Mon, 25 Jul 2005 12:16:46 -0600
"Emidio S." writes:
could someone please help me to understand which are the drawbacks of cluster computing based architecture? because often i read only about benefits... thank you

homework assignment, right?

clusters can be used for scalable computing (thruput) and/or redundant computing (availability). we did some of both when we were doing the ha/cmp project
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Cluster computing drawbacks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Cluster computing drawbacks
Newsgroups: comp.arch
Date: Tue, 26 Jul 2005 09:39:41 -0600
Ketil Malde wrote:
¹ People who were raised on punched cards may at this point feel free to vent their annoyance over the spoiled youth of today, who expect interactivity.

my frequent observation is that batch systems developed from a paradigm where the person wasn't likely to be present ... and so infrastructures (sometimes with somewhat steep learning curves) evolved for being able to specify and control operation in the absence of the actual person responsible for the application.

the interactive stuff frequently tended to punt on such issues ... pushing the conditions out externally, assuming that the responsible human was on the other side of the display and could decide how to handle the condition.

while there is a huge amount of stuff that involves people interacting with a keyboard and display (say an internet browser) ... the operational characteristics of servers are much more in line with the batch systems paradigm ... aka the person at the browser doesn't also tend to have direct control over the server. so the claim is that while lots of people have direct contact with systems that evolved from the interactive paradigm ... most of the world still turns on environments where it isn't expected that the responsible human is present and directly in control.

Cluster computing drawbacks

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Cluster computing drawbacks
Newsgroups: comp.arch
Date: Tue, 26 Jul 2005 14:06:12 -0600
glen herrmannsfeldt writes:
For programs where the runtime is measured in days it is a pretty good bet that most of the time there isn't someone there waiting for the next prompt.

It is nice to be notified when it ends, though.


that wasn't exactly what i had in mind ... there are a lot of (mega) "on-line" applications run in batch environments ... because the batch environments have evolved a lot of "automated" conventions for handling numerous types of events (rather than pushing them to the end user, common in interactive systems). these on-line environments tend to make use of these automated facilities to help provide 7x24, continuous operation.

several years ago, we were talking to one of the major financial transaction systems ... which commented that they attributed their one hundred percent availability over the previous several years primarily to
• ims hot-standby
• automated operator


when my wife did her stint in pok (batch mainframe land) responsible for loosely-coupled (i.e. cluster by any other name) architecture ... she came up with Peer-Coupled Shared Data architecture
https://www.garlic.com/~lynn/submain.html#shareddata

the first organization that really used it was ims group for ims hot-standby.

batch systems tended to have some residual direct human involvement, in the early days for tending printers, card readers, tape drives, etc (i.e. the people called operators).

during the early 70s, i started developing automated processes for performing many of the tasks that the operating system nominally required of operators.
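a minimal sketch (python; the console messages and replies are invented for illustration, not actual vm370/mvs operator traffic) of the automated-operator idea: match console output against a policy table and respond without a human, logging anything unanticipated for later review.

import re

POLICY = [
    (re.compile(r"JOB (\S+) ABENDED"), lambda m: f"RESTART {m.group(1)}"),
    (re.compile(r"MOUNT REQUEST (\S+)"), lambda m: f"REPLY MOUNT {m.group(1)} OK"),
]

def automated_operator(console_line):
    # consult the policy table for a canned response
    for pattern, action in POLICY:
        m = pattern.search(console_line)
        if m:
            return action(m)
    return "LOG FOR REVIEW"   # unanticipated condition: record it rather than guess

print(automated_operator("10:42 JOB PAYROLL ABENDED"))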

starting in the early 80s ... you started to see the shift from hardware being the primary source of failures to software and people being the primary source of failures. automated operator went a long way toward reducing many of the human-mistake-related failures.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Barcode Email

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Barcode Email
Newsgroups: sci.crypt,alt.privacy,comp.security.misc
Date: Tue, 26 Jul 2005 13:48:41 -0600
Jean-Luc Cooke writes:
1) there is no AES512
2) AES-128/192/256 is used everywhere. Many browsers now have it. Many web sites now require it. Many banks use it in ATMs. Many many companies use it for VPNs. Wireless protocol WPA uses it. There is no end to the number of applications.

If you're trying to make the point "AES alone isn't worth shit". Then you'll find no greater supporter. You need a security application first.

But a security application which protects data in transit that doesn't use strong encryption like AES is worth even less than shit(tm). Not because of some geeky propeller-head mantra. But because the only thing worse than the lack of security is the lack of security under the perception of security.


in the security PAIN acronym:
P ... privacy (or sometimes CAIN, confidentiality)
A ... authentication
I ... integrity
N ... non-repudiation


encryption has sometimes been interchangeably used for privacy/confidentiality, authentication, and/or integrity.

typically, encryption is nominally considered a confidentiality or privacy tool.

the x9a10 financial standards working group was given the task of preserving the integrity of the financial infrastructure for all retail payments ... which resulted in the X9.59 standard ... applicable to credit, debit, stored-value, internet, point-of-sale, atm, etc.
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#x959

basically, x9.59 defines a light-weight message (a couple additional fields over what might be found in a standard iso 8583 message used for either credit or debit) that is digitally signed. the digital signature provides for integrity and authentication w/o actually requiring encryption.
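a minimal sketch (python, using the third-party "cryptography" package; the field layout is invented and is not the x9.59 wire format) of the point being made: the transaction travels in the clear, and the digital signature alone supplies integrity and authentication ... no encryption anywhere.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

transaction = b"acct=1234;amount=19.95;currency=840;seq=42"   # illustrative fields only

signing_key = Ed25519PrivateKey.generate()     # held by the account holder
verify_key = signing_key.public_key()          # registered with the issuing institution

signature = signing_key.sign(transaction)      # sign the clear-text message

try:
    verify_key.verify(signature, transaction)  # verifier checks origin and integrity
    print("authenticated and unaltered")
except InvalidSignature:
    print("reject: altered or not from the registered key")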

one of the big issues in non-x9.59 transactions has been that the transaction can be eavesdropped and the information is sufficient to originate a fraudulent transaction (giving rise to an enormous confidentiality requirement as a countermeasure to eavesdropping).
https://www.garlic.com/~lynn/subintegrity.html#harvest

part of the x9.59 standard is a business rule that information from x9.59 transactions isn't valid in non-x9.59 &/or non-authenticated transactions. the business rule is a sufficient countermeasure to the eavesdropping vulnerability that results in fraudulent transactions ... i.e. the prime motivation for encryption has been reducing the eavesdropping vulnerabilities that can lead to fraudulent transactions, which x9.59 addresses with

1) digital signatures for integrity and authentication

2) a business rule that eliminates the use of eavesdropped information for fraudulent transactions.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Barcode Email

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Barcode Email
Newsgroups: sci.crypt,alt.privacy,comp.security.misc
Date: Tue, 26 Jul 2005 21:27:33 -0600
"Luc The Perverse" writes:
No, you still have a private key stored on the person's computer . . . just don't password encrypt it. Sure, someone can steal the key very easily, but if someone has access to the computer, they could just as easily install a keylogger too :P

however, there has been lots of press about laptops walking off, one way or another (and supposedly laptops are now exceeding desktops by some measures).

pins/passwords can be single-factor (something you know) authentication ... but they are also used as a countermeasure for the lost/stolen vulnerability involving something you have authentication (aka the infrastructure around a private key tends to approximate something you have ... involving some sort of software or hardware container for the private key)
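a minimal sketch (python, assuming a recent version of the third-party "cryptography" package; the passphrase is obviously illustrative) of that countermeasure: the private key (part of the something you have) is stored encrypted under a passphrase (something you know), so a walked-off laptop by itself isn't enough to use it.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives import serialization

key = Ed25519PrivateKey.generate()

# store the private key encrypted under a passphrase
protected_pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.BestAvailableEncryption(b"not-this-passphrase"),
)

# using it again requires the passphrase ... two factors rather than one
recovered = serialization.load_pem_private_key(protected_pem, password=b"not-this-passphrase")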

some of the more recent tv advertisements have either biometrics or a pin/password (protecting the whole machine) as a countermeasure to the laptop lost/stolen vulnerability.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

54 Processors?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 54 Processors?
Newsgroups: bit.listserv.ibm-main
Date: Wed, 27 Jul 2005 00:18:03 -0600
edgould@ibm-main.lst (Ed Gould) writes:
Now I know I am out of date on this but somewhere in the mists of time, I could swear that IBM came out saying that anything above 18 (??? this is a number I am not sure of) was not good, in fact it was bad as the interprocessor costs were more overhead than they were worth. They cited some physics law (IIRC).

Did IBM rethink the "law" or are they just throwing 54 processors out hoping no one will order it?

My memory is cloudy but I seem to recall these statements around the time of the 168MP.


a big problem was the strong memory consistency model and cross-cache invalidation model. two-processor smp 370 cache machines ran at .9 times the cycle of a single-processor machine ... to allow for cross-cache invalidation protocol chatter (any actual invalidates would slow the machine down even further). this resulted in the basic description that two-processor 370 hardware was 1.8 times (aka 2*.9) a uniprocessor ... actual cross-cache invalidation overhead and additional smp operating system overhead might make actual thruput 1.5 times a uniprocessor machine.
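the arithmetic, spelled out (python; the 15% combined invalidation/kernel overhead is an assumed figure chosen to reproduce the roughly 1.5 number, not an ibm measurement):

cycle_slowdown = 0.9                   # each 370 MP processor clocked at .9x to allow for cross-cache chatter
two_way_hw = 2 * cycle_slowdown        # the "1.8 times" hardware rating
assumed_mp_overhead = 0.15             # actual invalidates plus smp kernel overhead (assumption)
two_way_effective = two_way_hw * (1 - assumed_mp_overhead)
print(two_way_hw, round(two_way_effective, 2))   # 1.8 and 1.53, i.e. roughly 1.5x a uniprocessor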

we actually had a 16-way 370/158 design on the drawing boards (with some cache consistency sleight of hand) that never shipped ... minor posting reference:
https://www.garlic.com/~lynn/2005m.html#48 Code density and performance?

3081 was supposed to be a native two-processor machine ... and originally there was never going to be a single-processor version of the 3081. eventually a single-processor 3083 was produced (in large part because TPF didn't have smp software support and a lot of TPF installations were saturating their machines ... some TPF installations had used vm370 on 3081 with a pair of virtual machines ... each running a TPF guest). the 3083 processor was rated at something like 1.15 times the hardware thruput of one 3081 processor (because they could eliminate the slow-down for cross-cache chatter).

a 4-way 3084 was much worse ... because each cache had to listen for chatter from three other processors ... rather than just one other processor.

this was the time-frame when the vm370 and mvs kernels went thru restructuring to align kernel dynamic and static data on cache-line boundaries and in multiples of cache-line allocations (minimizing a lot of cross-cache invalidation thrashing). supposedly this restructuring got something over a five percent increase in total system thruput.
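a minimal sketch (python; the 128-byte line size and the field names are just for illustration, not the actual kernel layouts) of the restructuring idea: round every kernel data area up to a cache-line multiple so no two areas share a line and ping-pong invalidations between caches.

CACHE_LINE = 128   # assumed line size in bytes

def aligned_size(nbytes, line=CACHE_LINE):
    # smallest multiple of the line size that holds nbytes
    return ((nbytes + line - 1) // line) * line

def layout(areas):
    # give each data area its own cache-line-aligned offset
    offsets, offset = {}, 0
    for name, size in areas:
        offsets[name] = offset
        offset += aligned_size(size)
    return offsets

print(layout([("lock_word", 8), ("dispatch_queue", 48), ("counters", 32)]))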

later machines went to things like using a cache cycle time that was much faster than the rest of the processor (for handling all the cross-cache chatter) and/or using more complex memory consistency operations ... to relax the cross-cache protocol chatter bottleneck.

around 1990, SCI (scalable coherent interface) defined a memory consistency model that supported 64 memory "ports".
http://www.scizzl.com/

Convex produced the Exemplar using 64 two-processor boards where the two processors on the same board shared the same L2 cache ... and the common L2 cache then interfaced to the SCI memory access port. This provided for a shared-memory 128-processor (HP RISC) configuration.

in the same time-frame, both DG and Sequent produced a four-processor board (using intel processors) that had a shared L2 cache ... with 64 boards in a SCI memory system ... supporting a shared-memory 256-processor (intel) configuration. Sequent was subsequently bought by IBM.

part of SCI was a dual-simplex, fiber-optic asynchronous interface ... rather than a single, shared synchronous bus .... SCI defined bus operation with essentially asynchronous (almost message-like) operations being performed (somewhat of a latency and thruput compensation compared to a single, shared synchronous bus).

SCI had a definition for asynchronous memory bus operation. SCI also had a definition for I/O bus operation ... doing things like SCSI operations asynchronously.

the IBM 9333 from hursley had done something similar with serial copper ... effectively encapsulating synchronous scsi bus operations in asynchronous message operations. the fibre channel standard (FCS, started in the late 80s) also defined something similar for I/O protocols.

we had wanted the 9333 to evolve into an FCS-compatible infrastructure
https://www.garlic.com/~lynn/95.html#13

but the 9333 stuff instead evolved into SSA.

ibm mainframe eventually adopted a form of FCS as FICON.

SCI, FCS, and 9333 ... were all looking at dual-simplex transmission (pairs of unidirectional serial links) using asynchronous message flows partially as latency compensation (not requiring end-to-end synchronous operation).
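rough latency-compensation arithmetic (python; the 10 microsecond round trip and 8 outstanding operations are assumed numbers, not 9333/FCS/SCI figures): a synchronous bus finishes one operation per round trip, while dual-simplex asynchronous messaging keeps several operations in flight on the same link.

round_trip_us = 10.0      # assumed end-to-end round-trip time
in_flight = 8             # assumed outstanding asynchronous operations

sync_ops_per_sec = 1e6 / round_trip_us               # one operation at a time
async_ops_per_sec = in_flight * 1e6 / round_trip_us  # pipelined operations on the same link
print(int(sync_ops_per_sec), int(async_ops_per_sec))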

a few recent postings mentioning 9333/ssa:
https://www.garlic.com/~lynn/2005.html#50 something like a CTC on a PC
https://www.garlic.com/~lynn/2005m.html#35 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005m.html#46 IBM's mini computers--lack thereof

a few recent postings mentioning SCI
https://www.garlic.com/~lynn/2005d.html#20 shared memory programming on distributed memory model?
https://www.garlic.com/~lynn/2005e.html#12 Device and channel
https://www.garlic.com/~lynn/2005e.html#19 Device and channel
https://www.garlic.com/~lynn/2005.html#50 something like a CTC on a PC
https://www.garlic.com/~lynn/2005j.html#13 Performance and Capacity Planning

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

