List of Archived Posts

2005 Newsgroup Postings (08/14 - 08/31)

The Chinese MD5 attack
The Chinese MD5 attack
X509 digital certificate for offline solution
The Chinese MD5 attack
Robert Creasy, RIP
Code density and performance?
X509 digital certificate for offline solution
X509 digital certificate for offline solution
Non Power of 2 Cache Sizes
Need a HOW TO create a client certificate for partner access
Virtual memory and memory protection
ISA-independent programming language
30 Years and still counting
RFC 2616 change proposal to increase speed
dbdebunk 'Quote of Week' comment
open-ssl-stream hangs on post
ISA-independent programming language
Smart Cards?
Data communications over telegraph circuits
Data communications over telegraph circuits
Why? (Was: US Military Dead during Iraq War
help understand disk managment
help understand disk managment
Collins C-8401 computer?
is a computer like an airport?
auto reIPL
How good is TEA, REALLY?
Data communications over telegraph circuits
Penn Central RR computer system failure?
Penn Central RR computer system failure?
auto reIPL
Is symmetric key distribution equivalent to symmetric key generation?
What ever happened to Tandem and NonStop OS ?
Is symmetric key distribution equivalent to symmetric key generation?
Not enough parallelism in programming
Implementing schedulers in processor????
Penn Central RR computer system failure?
What ever happened to Tandem and NonStop OS ?
SHARE reflections
JES unification project
Certificate Authority of a secured P2P network
Certificate Authority of a secured P2P network
Catch22. If you cannot legally be forced to sign a document etc - Tax Declaration etc etc etc
What ever happened to Tandem and NonStop OS ?
Intel engineer discusses their dual-core design
Article: The True Value of Mainframe Security
Article: The True Value of Mainframe Security
Article: The True Value of Mainframe Security

The Chinese MD5 attack

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Chinese MD5 attack
Newsgroups: sci.crypt,alt.law-enforcement,misc.legal,alt.usenet.kooks
Date: Sun, 14 Aug 2005 10:09:53 -0600
"Luc The Perverse" writes:
Personally I think the only way to use the English language efficiently in a password is to have words randomly chosen from a dictionary, with randomly applied word perversions like changing e's to 3's, t's to 7's, etc., and inserting punctuation in between

If you have any kind of uniform word perversion, and no one else knows your system, then I suppose a brute force machine would have no better luck with your password than just random characters. (Assuming you don't go around advertising I use "system X" to generate my passwords.) As long as system X remains something you don't share, as long as none of your secure passwords are compromised no one should catch on to the pattern.
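
a minimal sketch of the quoted scheme (the word list here is a stand-in; a real implementation would load an actual dictionary file):

import random
import string

# stand-in word list; a real implementation would read a dictionary file
WORDS = ["correct", "horse", "battery", "staple", "granite", "lantern"]
# a simple "perversion" table: e->3, t->7, a->@, o->0
SUBS = str.maketrans({"e": "3", "t": "7", "a": "@", "o": "0"})

def make_password(nwords=4):
    rng = random.SystemRandom()        # OS entropy rather than the default PRNG
    words = [rng.choice(WORDS).translate(SUBS) for _ in range(nwords)]
    seps = [rng.choice(string.punctuation) for _ in range(nwords - 1)] + [""]
    return "".join(w + s for w, s in zip(words, seps))

print(make_password())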


a 4/1 password reference from over 20 years ago:
https://www.garlic.com/~lynn/2001d.html#52 OT Re: A beautiful morning in AFM.
a little explanation
https://www.garlic.com/~lynn/2001d.html#51 OT Re: A beautiful morning in AFM.

passwords (and/or passphrases) tend to be some form of shared-secrets
https://www.garlic.com/~lynn/subintegrity.html#secrets

and because the secret is also exposed at the server-end ... most security guidelines require that there is a unique shared-secret for every unique security domain. (you don't want some temporary part-time help at the local garage ISP accessing your online bank account). this can lead to a person having scores of hard-to-remember shared-secrets that potentially have to be changed monthly

so there have been various other proposals to address human factors issues, like using iterative hashing with a (single/common) passphrase and a server-specific value ... as in RFC 2289, "A One-Time Password System":
https://www.garlic.com/~lynn/rfcidx7.htm#2289

the above mentions that it is resistant to a passive attacker (and is somewhat related to the subject line since MD5 is one of the possible hashes).
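
a rough sketch of the iterative hashing idea (simplified; the actual RFC 2289 scheme folds the digest down to 64 bits and encodes it as short words, which is skipped here):

import hashlib

def otp(passphrase, seed, count):
    # hash the seed+passphrase, then re-hash the result "count" more times
    value = (seed + passphrase).encode()
    for _ in range(count + 1):
        value = hashlib.md5(value).digest()
    return value

# the server stores the N-th value; the client later presents the (N-1)-th,
# the server checks that one more hash of it matches what is on file,
# then replaces the stored value with what the client just sent
passphrase, seed, n = "correct horse battery", "host7a", 99
stored = otp(passphrase, seed, n)
candidate = otp(passphrase, seed, n - 1)
assert hashlib.md5(candidate).digest() == stored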

note, however, a major motivation is to address the proliferation of unique passwords ... where unique passwords are supposedly countermeasure against security cross-domain contamination/exploits (aka the underlying infrastructure requiring unique authentication is trying to address an attack by somebody in one security domain against a different security domain).

so a perfectly reasonable attack (given that a major underlying characteristic/assumption is the whole issue of security cross-domain contamination/exploit) is where somebody in one security domain passively listens for salts transmitted by other security domains. then, in a perfectly valid connection to their own server, they spoof (as if by accident) a salt from some other domain with a count of one ... followed by a transmission-error indication and a restart of the operation ... this time with their own salt.

supposedly the whole point of needing uniquely different authentication for different security domains ... is the possibility of cross-domain contamination/exploits ... then if there is a claim about trying to solve a fundamental problem related to multiple-domain operation (the difficulty of a person remembering scores of different, unique hard to remember passwords) ... it shouldn't create new kinds of cross-domain attack vulnerabilities (regardless of whether they are active or passive attacks).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Chinese MD5 attack

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Chinese MD5 attack
Newsgroups: sci.crypt,alt.law-enforcement,misc.legal,alt.usenet.kooks
Date: Sun, 14 Aug 2005 12:47:36 -0600
"Luc The Perverse" writes:
Well if you can't remember a different password for every account, then just have a small application running on your palm/that berry device/PSP and have it autogenerate entropy based passwords and store them in an encrypted file.

I admit I use the same password for all of my accounts which I don't really care about. (I mean if someone hacks my "myspace" account it's not that big of a loss.)

I used completely random characters of at least 12 (usually more) digits for any kind of account which has information like my address, phone number, bank accounts, etc.

I feel pretty good about this


there have been some number of pilots where debit-cards are using fingerprints in lieu of PIN ... and numerous people have pointed out how easy it is to spoof fingerprint systems.

the basic premise is that PIN (or biometrics) is countermeasure to lost/stolen card (aka two-factor authentication where the different factors have different vulnerability characteristics).

the issue is that some surveys claim that 30 percent of the population write their pins on their debit cards.

then it isn't whether or not it is possible to spoof a fingerprint system ... the issue is whether it is easier for a crook to enter a pin that is written on a lost/stolen card ... or whether it is easier for a crook to lift (the correct) fingerprint from the lost/stolen card and enter it.

in any case, most security paradigms are designed for a large segment of the population ... and you aren't going to find the segment of the population that writes 4-digit PINs on their debit cards ... choosing/remembering 12-random-character passwords.

slight drift on exploits/vulnerabilities:
https://www.garlic.com/~lynn/aadsm20.htm#1 Keeping an eye on ATM fraud
https://www.garlic.com/~lynn/aadsm20.htm#23 Online ID Thieves Exploit Lax ATM Security

with respect to referenced phishing attacks, in the past, a (remembered) PIN represents two-factor authentication as a countermeasure to a lost/stolen card ... since they have different vulnerabilities (when the PIN isn't written on the card). an emerging problem with phishing attacks is that the PIN and account number (enabling the manufacture of a counterfeit card) can share common vulnerability. In that respect, fingerprint is still a lot more difficult to phish over the internet (than it is to phish either pins, passwords or account numbers).

however, actually using fingerprint authentication over the internet can represent more risk than using password authentication over the internet. an eavesdropping compromise of a password can be remediated by replacing the compromised password; it is still somewhat more difficult to replace a finger.

frequently, biometric authentication will be implemented as shared-secrets paradigm
https://www.garlic.com/~lynn/subintegrity.html#secrets

... and it may get difficult to come up with a unique biometric shared-secret for every unique security domain (as well as replacing a body part when its biometric value is compromised).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

X509 digital certificate for offline solution

From: "" <lynn@garlic.com>
Newsgroups: microsoft.public.dotnet.security
Subject: Re: X509 digital certificate for offline solution
Date: Sun, 14 Aug 2005 14:09:03 -0700
Valery Pryamikov wrote:
It's a bit embarrassing for me to admit that until now I didn't even check the original question ;-). But I don't think it was a question about business process applicability, but rather a sign of complete misconception. My understanding of the original question is that the op was asking about a way of protecting a piece of information that is used by some service (daemon) from everyone else using this computer, including administrator/root (because if it was only about protecting against unprivileged users of this computer -- simple access control would be more than enough).

Of course PKI is completely irrelevant here!... but any other encryption related technology is irrelevant here as well... The service/daemon requires the protected information in clear text, which means that the decryption key must be accessible to that service on that computer, but that automatically makes this secret key accessible to the administrator/root of this computer as well. The op's problem, as it is, is closer to DRM than to anything else (i.e. store the secret key and cipher text in one place and hope that nobody will be able to put them together).


remember, in my context, i described asymmetric cryptography as the technology, and public keys, digital signatures, and PKIs as business processes
https://www.garlic.com/~lynn/2005n.html#33 X509 digital certificate for offline solution
https://www.garlic.com/~lynn/2005n.html#43 X509 digital certificate for offline solution

... so a case study from another PKI scenario where the relying-party is offline and/or doesn't have realtime direct contact with the certification authority (which somewhat turns out to actually be the original design point for PKIs ... the offline situation where the relying-party doesn't have realtime, online and/or local resources for resolving information regarding first time communication with a stranger).

circa 1999, one of the major PKI certification authorities approached a large financial operation and convinced them that they needed to deploy a PKI infrastructure enabling their customers to do online, internet account transactions. This was a financial operation that had significantly in excess of 10 million accounts.

the scenario went:

1) the financial institution would register a customer public key for every account

2) the financial institution would transmit the resulting updated account database to the certification authority

3) the certification authority would munge and re-arrange the bits in each account record ... producing one digital certificate for each account record.

4) after a couple hrs, the certification authority would return all the recently produced digital certificates to the financial operation ... which would then store them in the appropriate account record and convey a copy to the appropriate customer

5) the customers would then generate digitally signed account transactions, package the account transaction, the digital signature and their copy of the digital certificate and transmit it over the internet to the financial operation.

6) the financial operation would pull the account number from the transaction, retrieve the corresponding account record, verify the digital signature using the public key in the account data base ... and NEVER have to make any reference to the digital certificate at all

the financial operation had spent nearly $50 million on integrating a PKI infrastructure when it dawned on somebody to do the complete financials.

they had already agreed that the certification authority would get $100/annum/account for the production of (redundant and superfluous) digital certificates that need NEVER actually be used.

doing the complete financials resulted in somebody realizing that the financial operation would be paying the certification authority $100m/annum per million accounts (or $1b/annum per 10 million accounts) for redundant and superfluous digital certificates that need NEVER actually be used ... aka certificate-less operation (other than the internet payload for continuously transmitting the digital certificates hither and yon)
https://www.garlic.com/~lynn/subpubkey.html#certless

the financial operation eventually canceled the project and took the $50m hit.

this was actually a relying-party-only certificate scenario
https://www.garlic.com/~lynn/subpubkey.html#rpo

where the operational account contains all the information about the entity and the entity's public key (as well as a copy of the entity's public key and a stale, static copy of a subset of the entity's operational information in the form of a digital certificate).

this is offline, from the standpoint of the relying-party not needing to contact the certification authority when processing a digitally signed transaction ... in part, because the relying party actually has all the real-time operational information as part of executing the transaction (and NEVER actually needs to reference the redundant and superfluous, stale, static digital certificate).

however, the certification authority was originally expecting to be paid $100m/million-accounts (well in excess of a billion dollars) per annum for the redundant and superfluous, stale, static (and need NEVER be referenced) digital certificates.

now, a number of operations have used tamper-resistant hardware tokens (like USB dongles) as a repository for protecting the confidentiality of private keys. This becomes truly a something you have operation ... since the hardware tokens perform operations using the embedded private key ... but the private key never exists outside of the confines of the token.

human operators and other agents can still compromise any system usage involving the private keys ... which is an overall system security integrity issue. however, the private keys are never divulged ... eliminating the system security confidentiality issue with regard to the private keys ... and crooks can't obtain the private keys and setup a counterfeit operation impersonating the original system (possibly unknown to the original operation).

this is my frequent refrain that most operations treat public key operation as a something you have authentication ... aka the party has access to and use of the corresponding private key. When a purely software implementation is used ... there typically are attempts to closely emulate real hardware token operation ... however, software emulation of hardware tokens has several more threats, exploits and vulnerabilities compared to real hardware tokens.

one way of looking at this issue is where the security perimeter lies. the security perimeter for a hardware token ... tends to be the physical space of the token. the security perimeter for a software emulated hardware token may be all the components of the computer where that software is running.

for financial environments ... like PIN-debit ... there are external tamper-resistant hardware boxes (frequently referred to as HSMs) that do all the PIN processing for the financial institution. PINs are entered in the clear at POS terminals and ATMs ... but then immediately encoded. From then on, they never appear in the clear. the backend gets the transaction and sends it to the box ... and gets back some answers ... but standard operation never sees the PIN in the clear.
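
a toy sketch of that dataflow (not a real HSM interface or PIN-block format; assumes the third-party python "cryptography" package just for the symmetric encryption):

from cryptography.fernet import Fernet

# key provisioned into both the entry device and the HSM, never the host
terminal_hsm_key = Fernet.generate_key()

def terminal_capture_pin(pin):
    # the PIN is encoded immediately at the point of entry
    return Fernet(terminal_hsm_key).encrypt(pin.encode())

def hsm_verify(encrypted_pin, account):
    # everything in here stands in for the tamper-resistant box
    pin = Fernet(terminal_hsm_key).decrypt(encrypted_pin).decode()
    return pin == {"acct-001": "1234"}[account]        # toy PIN store

# the backend only ever forwards ciphertext and gets back a yes/no
blob = terminal_capture_pin("1234")
print(hsm_verify(blob, "acct-001"))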

the (security) integrity of the backend systems might be compromised by insiders ... but you don't find insiders harvesting PINs from the backend systems (i.e. security confidentiality) and using them in impersonation attacks with counterfeit transactions.

part of this is that security integrity compromises tend to be a lot more difficult than security confidentiality compromises (copying the data). security integrity compromises also tend to leave a lot more traces back to the people responsible ... compared to security confidentiality compromises.

one of the claims that we've frequently made with respect to aads chip strawman
https://www.garlic.com/~lynn/x959.html#aads

for public key operation, was that being free from having to worry about the ins & outs of PKIs and digital certificates .... we were therefor able to concentrate on the fundamental threats and vulnerabilities of the actual operation of public key processes.

For instance, a very fundamental threat and vulnerability is the integrity and confidentiality of the private key. If i was going to spend $100/annum on stuff associated with public key operation ... I might consider it much better spent on hardware tokens than on digital certificates.

in fact in the security proportional to risk scenarios ... slightly related
https://www.garlic.com/~lynn/2001h.html#61

i.e. we've taken it as a given that the integrity of the originating/remote environment is taken into account when evaluating a transaction for authorization. this includes the risk level associated with whether or not a real hardware token is being used and, if a real hardware token is being used ... the evaluated integrity of that token (which might change over time as technology changes). For a large percentage of the business processes in the world, we assert that the integrity level of the remote end is of more importance than a lot of personal information about the remote entity (which the vast majority of operations already have on file ... so it is of little interest to duplicate such information in digital certificates).

so another simple test ....

i would assert that the integrity level of the originating environment (software token or hardware token and the assurance level of such token) is one of the primary pieces of information that would be of interest to a relying-party ... right up there with what is the public key. so a real business-oriented digital certificate would not only give the public key ... but also provide the integrity level of the environment protecting the private key.

when i examined x.509 fields several years ago ... i couldn't find one that provided the integrity level of the private key protection, although some have a simple flag that can be used to indicate whether it is a software private key or a hardware token private key. how many certification authorities have you heard of that have a process of checking whether they are certifying a software private key or a hardware token private key?

the literature has lots of stuff about the integrity level of public/private keys based on the number of bits in the key ... but I haven't seen anything on the integrity level of private key protection ... and/or writeups on applications that even make decisions based on whether they find they are dealing with a software private key or a hardware private key.

another indication: a couple of years ago, i was giving a talk on the importance of the private key protection integrity level ... and somebody in the audience (from some gov. agency) said that if i would provide the full definition, they would see that it was added to the x.509v3 standard.

The Chinese MD5 attack

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Chinese MD5 attack
Newsgroups: sci.crypt,alt.law-enforcement,misc.legal,alt.usenet.kooks
Date: Sun, 14 Aug 2005 16:03:33 -0600
"Luc The Perverse" writes:
It would be better if the authentication was outside of the computer. I mean if the biometric device in some way had firmware and ran a program to encrypt and pass on the authentication. Of course this would be cracked, so we would need a way to update and patch this as time went on.

here is a recent two-part posting ... the first part about pki & digital certificates, the second on integrity levels of authentication environments (specifically integrity levels related to private keys):
https://www.garlic.com/~lynn/2005o.html#2 X509 digital certificate for offline solutions

one scenario for the aads chip strawman
https://www.garlic.com/~lynn/x959.html#aads

is that PINs become secrets rather than shared-secrets in conjunction with digital signature authentication.

a hardware token does on-chip key-gen, exports the public key, and the private key is never divulged. the public key is registered at manufacturing time along with the integrity level of the chip. relying-parties that register the public key can obtain the corresponding chip integrity level from the manufacturer (along with any changes, if they are really interested) as well as the operational characteristics of the chip ... i.e. whether a pin/password &/or biometrics is required for chip operation.

nominally, a relying party verifying a digital signature assumes single factor something you have authentication ... aka the responsible entity has access to and use of the corresponding private key.

however, given the appropriate certification of the public key as to the characteristics of the operating environment of the corresponding private key ... the relying party may also be confident as to the integrity level of the chip protecting the private key (some confidence that it actually originated from that specific physical device) and whether or not the chip operation required a PIN or biometric to operate ... aka what is the confidence level of the something you have authentication as well as whether or not there was something you know and/or something you are authentication (in addition) ... aka a countermeasure for the lost/stolen token exploit.
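
a minimal sketch of the kind of registration record implied above ... the field names are purely illustrative, not from any standard:

from dataclasses import dataclass

@dataclass
class RegisteredKey:
    account: str
    public_key_pem: str             # the registered public key
    token_integrity_level: str      # e.g. certified hardware token vs software key file
    pin_required: bool              # something you know, in addition
    biometric_required: bool        # something you are, in addition

rec = RegisteredKey(
    account="acct-001",
    public_key_pem="-----BEGIN PUBLIC KEY-----...",
    token_integrity_level="hardware token, on-chip key-gen, certified at manufacturing",
    pin_required=True,
    biometric_required=False,
)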

this is somewhat touched on in other threads
https://www.garlic.com/~lynn/aadsm20.htm#21 Qualified Certificate Request
https://www.garlic.com/~lynn/2005m.html#37 public key authentication
https://www.garlic.com/~lynn/2005m.html#53 Barcode Email
https://www.garlic.com/~lynn/2005m.html#54 Barcode Email

a couple years ago, I talked about AADS chip strawman at an assurance panel in the TPM track at IDF .... and commented that over the previous two years, TPM started looking more and more like AADS chip strawman.

note that the EU FINREAD standard, a consumer market (home pc) smartcard reader oriented towards financial transactions, has an external display and PIN-pad (or possibly a biometric sensor)
https://www.garlic.com/~lynn/subintegrity.html#finread

minimizing the possibility that the pin/biometric can be skimmed by some trojan on the PC ... and the value of any transaction being digitally signed ... is reliably displayed. It can be used for session authentication (possibly a challenge/response scenario), but it is also oriented towards transaction authentication (especially financial transaction authentication) ... where the FINREAD terminal has a miniature display to (reliably) show what is being digitally signed.

which then starts to stray into the digital signature dual-use hazard.

lots of challenge/response oriented session authentication would have digital signatures applied to what is assumed to be random data ... and the responsible entity never actually examines what is being signed.

digital signatures have also been defined for use in authenticating transactions ... where there is some implication that what is being digitally signed ... is approved, authorized, and/or agreed to.

the digital signature dual-use attack involves the threat of a compromised system sending valid transaction data disguised as random challenge/response data.
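
a toy illustration of the hazard (the "signature" here is just a stand-in marker ... the point is the signing flow, not the signature algorithm):

import os

# stand-in for "digitally sign with the private key"
def sign_with_private_key(data):
    return b"signature-over:" + data

def client_respond_to_challenge(challenge):
    # a client doing blind challenge/response signs whatever it is handed
    return sign_with_private_key(challenge)

session_challenge = os.urandom(32)                       # what the client expects
disguised = b"transfer $100,000 from acct 12345 to 666"  # what a compromised host sends

# both produce equally valid signatures ... only a trusted display of what
# is being signed (the FINREAD approach) lets the human tell them apart
print(client_respond_to_challenge(session_challenge))
print(client_respond_to_challenge(disguised))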

some past posts on digital signature dual-use vulnerability:
https://www.garlic.com/~lynn/aadsm17.htm#55 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm17.htm#57 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm17.htm#59 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#0 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#1 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#2 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#3 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#4 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#6 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#12 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#13 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#17 should you trust CAs? (Re: dual-use digital signature vulnerability)
https://www.garlic.com/~lynn/aadsm18.htm#32 EMV cards as identity cards
https://www.garlic.com/~lynn/aadsm18.htm#56 two-factor authentication problems
https://www.garlic.com/~lynn/aadsm19.htm#2 Do You Need a Digital ID?
https://www.garlic.com/~lynn/aadsm19.htm#24 Citibank discloses private information to improve security
https://www.garlic.com/~lynn/aadsm19.htm#41 massive data theft at MasterCard processor
https://www.garlic.com/~lynn/aadsm19.htm#42 massive data theft at MasterCard processor
https://www.garlic.com/~lynn/aadsm19.htm#43 massive data theft at MasterCard processor
https://www.garlic.com/~lynn/aadsm20.htm#0 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm20.htm#28 solving the wrong problem
https://www.garlic.com/~lynn/2005.html#14 Using smart cards for signing and authorization in applets
https://www.garlic.com/~lynn/2005b.html#56 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005e.html#31 Public/Private key pair protection on Windows
https://www.garlic.com/~lynn/2005g.html#46 Maximum RAM and ROM for smartcards
https://www.garlic.com/~lynn/2005l.html#20 Newsgroups (Was Another OS/390 to z/OS 1.4 migration
https://www.garlic.com/~lynn/2005m.html#1 Creating certs for others (without their private keys)
https://www.garlic.com/~lynn/2005m.html#11 Question about authentication protocols

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Robert Creasy, RIP

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Robert Creasy, RIP
Date: Mon, 15 Aug 2005 10:35:05 -0600
Newsgroups: bit.listserv.vmesa-l
I recently got a note that Bob Creasy, one of the originators of virtual machines, died in an accident last week.

You can find references to his contributions, liberally sprinkled in Melinda's VM history document
http://www.leeandmelindavarian.com/Melinda#VMHist

... a few from above
In the beginning was CTSS, the "Compatible Time-Sharing System". CTSS was written by a small group of programmers at the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts, under the leadership of Professor Fernando Corbato. One of the CTSS programmers was Robert Creasy, who was later to become the leader of the CP-40 project.

....
Rasmussen's response to all this was to decide that the Cambridge Scientific Center would write a time-sharing system for System/360. Meanwhile, inside Project MAC, Bob Creasy was upset by the inability of his colleagues to come to terms with IBM. He was impressed by the promise of machine upward compatibility offered by S/360, and he wanted Multics to be a mainstream system. When he heard that Rasmussen intended to build a time-sharing system based on S/360 and needed someone to lead the project, Creasy also left MIT to move to the Scientific Center. Inside IBM, losing the Project MAC bid was immediately recognized as a serious problem. A corporate task force was formed to get the company into a position to be able to win bids for time-sharing systems. The task force was composed of the most knowledgeable time-sharing people from around the company. CSC was represented by Rasmussen, Harmon, Creasy, and Comeau.

....
Creasy had, of course, spotted the most important aspect of the System/360 announcement, that programs written for one model of S/360 would run on any other model as long as they contained no timing-dependent code. From the System/360 "blue letter" (April 7, 1964): Whatever your customer's data handling requirements are now, whatever they will be in the foreseeable future, the System/360 can be custom-fitted to perform his job. In fact, this amazing new system makes possible, for the first time in the industry, a truly long-range growth plan for all customers. For it can accommodate virtually any combination of processing and computing functions. And can expand in easy, economical steps as the customer's needs change---with little or no reprogramming, no retraining of personnel, no disruption of service.

....
In the Fall of 1964, the folks in Cambridge suddenly found themselves in the position of having to cast about for something to do next. A few months earlier, before Project MAC was lost to GE, they had been expecting to be in the center of IBM's time-sharing activities. Now, inside IBM, "time-sharing" meant TSS, and that was being developed in New York State. However, Rasmussen was very dubious about the prospects for TSS and knew that IBM must have a credible time-sharing system for the S/360. He decided to go ahead with his plan to build a time-sharing system, with Bob Creasy leading what became known as the CP-40 Project.

....
What was most significant was that the commitment to virtual memory was backed with no successful experience. A system of that period that had implemented virtual memory was the Ferranti Atlas computer, and that was known not to be working well. What was frightening is that nobody who was setting this virtual memory direction at IBM knew why Atlas didn't work. Creasy and Comeau spent the last week of 1964 joyfully brainstorming the design of CP-40, a new kind of operating system, a system that would provide not only virtual memory, but also virtual machines.

....

Code density and performance?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Code density and performance?
Newsgroups: comp.arch
Date: Mon, 15 Aug 2005 14:00:49 -0600
Seongbae Park writes:
For example, Sun's Performance Analyzer can produce a mapfile from performance measurement to optimize for code locality (and it's certainly not the only one that can do that sort of thing). This mapfile can be used in your build environment - it requires only a minimal impact in the build environment since the mapfile doesn't need to be generated every time you build.

note that vs/repack did that ... it was done at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

in the early 70s ... and was used as part of the analysis for rewriting apl storage management (garbage collection) in the port from apl\360 (small, 16kbyte-to-32kbyte, real-storage swapping) to cms\apl large virtual memory operation.

it was also used for analysis of some of the "large" 360 applications in moving them from the os/360 real storage environment to the os/vs2 virtual memory environment.

it was released as a product in March of 1976.

i was a little annoyed ... i had written some of the data collection software for vs/repack ... but the real analysis work was done by hatfield. it was priced software, and the science center was on the list of organizations where, if employees produced priced software, they got the equivalent of one month's license fees for all copies sold in the first year.

in april '76, the science center was removed from the list of organizations, where employees got the first month's license.

i got to release the resource manager in may of '76 ... which was the first priced operating system code ... aka the unbundling announcement of 6/23/69 started pricing for application software, but kernel software was still free. the resource manager got chosen to be the guinea pig for priced kernel software (i got to spend a couple of months with business people working out kernel pricing policy) ... lots of past posts on unbundling and the resource manager being priced
https://www.garlic.com/~lynn/submain.html#unbundle

i even offered to give up my salary ... since the first-year sales of the resource manager were so good that the first month's license was well over $1m.

in any case article by Hatfield from the early 70s:
D. Hatfield & J. Gerald, Program Restructuring for Virtual Memory, IBM Systems Journal, v10n3, 1971

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

X509 digital certificate for offline solution

From: lynn@garlic.com
Newsgroups: microsoft.public.dotnet.security
Subject: Re: X509 digital certificate for offline solution
Date: Tue, 16 Aug 2005 07:33:09 -0700
Valery Pryamikov wrote:
wow!

This is a really interesting case study that clearly demonstrates the problem of misplaced trust and the dirty play of that "one of the major certification authorities"...

By all means, the financial institution could have set up their own CA for that project... and of course blind signatures could pass better for relying-party-only scenarios... but I don't know about their options to license blind signatures at that time (great that the patent has expired now)


ref:
https://www.garlic.com/~lynn/2005n.html#33 X509 digital certificate for offline solution
https://www.garlic.com/~lynn/2005n.html#43 X509 digital certificate for offline solution
https://www.garlic.com/~lynn/2005o.html#2 X509 digital certificate for offline solution

but an extremely fundamental characteristic is that setting up a certification authority serves no useful purpose.

it wasn't a case of misplaced trust ... there was absolutely no requirement for trust propagation (represented by the existence of a digital certificate, certification authority, and PKI).

an extremely fundamental principle of PKIs and certification authority is a trust concept, involving trust propagation, where one party is certifying information for the benefit of other parties.

this was purely a digital signature authentication situation

it did not involve any requirement for the financial infrastructure to certify any information for the benefit for any other institution.

there is no other institution involved.

a certification authority is able to prove something ... typically by verifying the information with the authoritative agency for the information being certified. PKI and digital certificates encapsulate the information that is being certified ... creating trust propagation information, where the information can be trusted by other parties.

in this situation the financial infrastructure is the authoritative agency for the information ... they don't need anybody else ... or even themselves, creating a stale, static certification of some information that they already know.

i've repeatedly asserted that digital certificates are analogous to the "letters of credit" from the sailing ship days (a bank certifying some information for other parties). In this particular case, we would have a bank president writing a "letter of credit" addressed to themself ... say, giving the person's current account balance (at the time the letter was written). The next time the person comes into the bank to talk to the bank president, they have to present the "letter of credit" that the bank president had written to themself, listing the person's stale, static bank balance at the time the letter was written. The bank president then approves withdrawals against the account (w/o checking the current account balance) based on each individual withdrawal being less than the account balance listed in the letter of credit that the bank president wrote to themself possibly a year earlier.

....

all they needed was basic digital signature authentication. there was no requirement for the financial infrastructure to certify any information for the benefit of any other institution ... and therefore no requirement for a certificate authority generating digital certificates which represent the certification of some information by one institution for the benefit of some other party.

the purpose of a digital certificate is that one party certifies some information for the benefit of some other party.

the financial operation is the authoritative agency for the information that they are interested in ... every transaction has to reference the account record that is the master copy of all the information.

a stale, static digital certificate containing a small subset of information in the account record is redundant and superfluous. in fact, the digital certificate only contained a database lookup reference to the actual information ... which is also contained in the actual account transaction message, making any digital certificate not only redundant and superfluous but also serving no useful purpose.

the technology is asymmetric key cryptography: what one key (of a key-pair) encodes, the other key decodes (differentiating it from symmetric key cryptography, where the same key both encrypts and decrypts).

a business process is defined called public key ... where one key is identified as "public" and made freely available. the other key is identified as "private", kept confidential and never divulged.

a business process is defined called digital signature. a hash of the message is calculated and encoded with the private key. the recipient recalculates the hash, decodes the digital signature with the corresponding public key and compares the two hashes. if the two hashes are the same, then the recipient can assume:

1) that the message hasn't changed since the digital signature was originally calculated

2) something you have authentication, i.e. the origin entity had access to and use of the corresponding private key.
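
a minimal sketch of that flow, using a deliberately tiny "textbook" RSA key pair (illustration only ... far too small to be secure and not a standard signature padding scheme):

import hashlib

# toy RSA numbers: n = 61*53 = 3233, public exponent e = 17, private d = 2753
n, e, d = 3233, 17, 2753

def toy_hash(message):
    # reduce a SHA-256 hash so it fits under the toy modulus
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

message = b"pay $10 to account 12345"

# originator: encode the hash of the message with the private key
signature = pow(toy_hash(message), d, n)

# recipient: recompute the hash, decode the signature with the public key,
# and compare the two values
assert pow(signature, e, n) == toy_hash(message)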

...

note that blind signatures allow a relying party to know something is true w/o actually needing to bind it to some entity. the issue for account-based transactions is that you actually want to authenticate that the originating entity is an entity authorized to execute transactions against the account.

blind signatures show up in situations like anonymous, electronic cash. an anonymous person spends some electronic cash ... the relying party wants to know that the electronic cash is valid w/o needing to establish anything about the spending entity.

three factor authentication
https://www.garlic.com/~lynn/subintegrity.html#3factor
something you have
something you know
something you are

for account-based authentication, it doesn't need to establish who you are ... however, it does need to establish that the entity is an entity authorized to perform specific account operations.

blind signature schemes eliminate binding to an entity and can be used where authentication isn't required ... simply validity/truth of the message.

account-based operations require (authentication) binding of the entity to the account ... but can be privacy agnostic ... as in the case of some offshore anonymous bank accounts ... use of the bank account is pure authentication w/o any identification. there may or may not be identification to open the account, but only authentication is needed for using the account (not identification). this is one of the issues raised in the x9.59 financial transaction standard
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#privacy

some blind signature references:
http://citeseer.ist.psu.edu/pointcheval96provably.html
https://en.wikipedia.org/wiki/Blind_signature
http://www.tiscali.co.uk/reference/dictionaries/computers/data/m0051357.html
http://www.rsasecurity.com/rsalabs/node.asp?id=2339
http://www.emergentchaos.com/archives/001511.html

X509 digital certificate for offline solution

From: <lynn@garlic.com>
Newsgroups: microsoft.public.dotnet.security
Subject: Re: X509 digital certificate for offline solution
Date: Tue, 16 Aug 2005 11:59:01 -0700
lynn@garlic.com wrote:
an extremely fundamental principle of PKIs and certification authority is a trust concept, involving trust propagation, where one party is certifying information for the benefit of other parties.

a certification authority is able to prove something ... typically by verifying the information with the authoritative agency for the information being certified. PKI and digital certificates encapsulate the information that is being certified ... creating trust propagation information, where the information can be trusted by other parties.

in this situation the financial infrastructure is the authoritative agency for the information ... they don't need anybody else ... or even themselves, creating a stale, static certification of some information that they already know.


ref:
https://www.garlic.com/~lynn/2005o.html#6 X509 digital certificate for offline solution

another fundamental principle of PKIs, certification authorities, and digital certificates ... is not only is it targeted at trust propagation ... where the certification authority is certifying some piece of information for use by other parties ... but the purpose of the digital certificate is to represent the certified information to the relying party ... when the relying party

1) has no other means of establishing the validity of the information themselves (i.e. the relying party doesn't actually maintain their own information).

2) is not able to directly contact the authoritative agency responsible for the information (can't validate the information themselves and therefore need to rely on a certification authority to validate the information).

3) is not able to directly contact the certification authority responsible for certifying the information.

aka ... the digital certificate provides certified information trust propagation when the relying party doesn't have the information themselves and/or has no way of directly accessing the information (aka a trust propagation scenario).

repeating the bank president scenario ... and the business process analogy between digital certificates and letters of credit from the sailing ship days ... the bank president writes the letter of credit with their right hand ... and then proceeds to transfer the letter of credit from the right hand to their left hand .... because obviously there is no other way for a bank president's left hand to know what the bank president's right hand is doing.

Non Power of 2 Cache Sizes

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Non Power of 2 Cache Sizes
Newsgroups: comp.arch
Date: Wed, 17 Aug 2005 09:08:02 -0600
"John Mashey" writes:
Sigh.

1) *No one* builds a die with 4MB of cache and only uses 3MB. Any designer who proposed that would be fired. There are different ways to do this, of which some have been used since the mid-1990s or earlier in microprocessors, but they involve adding just a modest amount of extra silicon to the cache to improve yield.


walk down memory lane ...

the big change from the 370/168-1 to the 370/168-3 was doubling the cache size from 32kbytes to 64kbytes ... however, they used the 2k address bit to index the additional entries.

the problem was that this worked when the software was running with 4k virtual pages ... but if the software was running with 2k virtual pages, only 32kbytes of the cache would be used.

also, if you had software that switched between 2k virtual pages and 4k virtual pages ... the cache was flushed at the switch.

the dos/vs and vs1 operating systems ran 2k virtual pages, mvs ran 4k virtual pages. vm for "real" address virtual machines ran 4k virtual pages by default ... but if they were running guest operating systems that used virtual memory ... the shadow pages were whatever the guest was using.

there was a large customer running vs1 production work under vm on a 168-1 who installed the 168-3 upgrade (32k->64k cache) expecting performance to get better ... but things got much worse. first, since vs1 was running 2k virtual pages ... all of the vm shadow tables for vs1 were 2k virtual pages ... which meant the cache only used 32kbytes. the other was that the vm supervisor reloaded 4k virtual page mode by default ... so there was lots of cache flush overhead from constantly switching back and forth between 2k and 4k virtual page modes.

the 370 had purely 24bit/16mbyte virtual address spaces. the 168s used the 8mbyte bit in TLB entry selection. MVS (& svs) had the 16mbyte space mapped out so that there was an 8mbyte MVS kernel image in each of its virtual address spaces. That left only 8mbytes (in each virtual address space) for running applications (although as system functions grew ... there were some environments where only 3mbytes in each virtual address space were left for application execution).

the 168 had a 128-entry TLB with a seven-entry "STO" stack (segment table origin, which basically uniquely identified each virtual address space).

For basic MVS, half the TLB entries went to the kernel image pages and half the TLB entries went to application pages.

however, a lot of virtual address space environments running under VM tended to start virtual address space use at zero and grow upwards; a large number of applications rarely exceeded a couple of mbytes ... so frequently at least half the TLB entries (for virtual addresses >8mbytes) went unused.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Need a HOW TO create a client certificate for partner access

From: <lynn@garlic.com>
Newsgroups: microsoft.public.windows.server.security,microsoft.public.sharepoint.portalserver
Subject: Re: Need a HOW TO create a client certificate for partner access
Date: Wed, 17 Aug 2005 15:24:24 -0700
Serge Ayotte wrote:
Hello to all... If the following is part of a white paper or other 'net resource, please direct me to it... After many trials on Google, I can't seem to hit the right word combination :) Never having touched (for not needing it) certificates/pki, I am a bit lost now...

I need to secure a SharePoint Portal Server site that will be accessed from the "outside" of the network (this SPS is running on a member server of a SBS2K3 domain). I already have SSL and permit only that, but I was asked to look into a "stronger" method of making sure that the people accessing it are allowed. Aside from user/password I immediately thought that a client certificate would be the way to go, but now I am a bit lost in all the information I got from MS and Technet about client certificates.

To give a certificate to someone to import on his computer, do I HAVE to have him connect to a Certificate server site for that? Isn't there a way for me to be able to generate the certificate and send it to the "outside partner" I want to give access to the site?

If I am correct, I could then associate the certificate to a login account, so in a way having a double security level (i.e. username/password not enough to access, or only the certificate not enough also).

Thank you in advance for any and all hints, tips, trick and direction you will provide... Very much appreciated in advance!


from an administrative standpoint ... get a server that supports RADIUS authentication ... it is probably the most pervasive authentication methodology on the internet today ... being extensively deployed by ISPs and a large number of other organizations. For instance, if you have ever set up a computer for PPP/dial-in access to an ISP ... typically there has been a screen where you select one of 3-4 different authentication mechanisms ... this is typically then what your ISP or corporate datacenter has pre-specified for your particular account in a RADIUS infrastructure.

In addition to authentication, RADIUS also provides additional optional capability for supporting authorization, permissions, and accounting on an account by account basis.

RADIUS supports a number of different authentication paradigms ... having originally started with userid/password ... but there are versions that have been extended with other types of authentication methodologies ... where you can actually select the authentication mechanism on an account-by-account basis (or userid by userid).

One authentication mechanism is recording public keys in lieu of passwords and doing digital signature verification
https://www.garlic.com/~lynn/subpubkey.html#radius

this is using the registration of public keys, on file in the radius infrastructure for performing digital signature verification w/o requiring PKIs, certification authorities, and/or digital certificates.
https://www.garlic.com/~lynn/subpubkey.html#certless

the basic technology is asymmetric key cryptography ... where what one key (of a key-pair) encodes, the other key (of the key-pair) decodes. This is in contrast to symmetric key cryptography, where the same key is used for both encryption and decryption.

a business process is defined called public key, where one of the asymmetric key pair is identified/labeled "public" and freely disclosed. The other of the key pair is identified/labeled "private" and is kept confidential and never disclosed.

a business process is defined called digital signature. a hash of a message or document is calculated and encoded using the private key, yielding the digital signature. the message is combined with the digital signature and transmitted. the recipient recalculates the hash on the message, decodes the digital signature with the corresponding public key and compares the two hashes. if the two hashes are equal, then the recipient can assume that

1) the message hasn't been modified in transit

2) something you have authentication, aka the sender has access to and use of the corresponding private key.

this is slightly modified for pure authentication ... using a challenge/response protocol. The server sends the client some random data as a challenge (as countermeasure to replay attacks). The client calculates the digital signature for the challenge and returns just the digital signature (since the server has the challenge). The server calculates the challenge hash, decodes the client's digital signature that was returned and compares the two hashes.

there are various kinds of attacks that a server and/or imposter may mount on a client. as a countermeasure for some of these attacks ... the client actually adds some of their own random data to the challenge before calculating the digital signature. the client then returns both their added data and their digital signature to the server. the server now has to calculate the hash against a combination of the original challenge and the added data provided by the client.
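
a rough sketch of that challenge/response exchange (assumes the third-party python "cryptography" package; the RADIUS message formats and attribute handling are omitted):

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# client: key pair generated once; the public key is what gets registered
# with the server in lieu of a password
client_private = ec.generate_private_key(ec.SECP256R1())
registered_public = client_private.public_key()

# server: random challenge as a countermeasure to replay attacks
challenge = os.urandom(32)

# client: adds its own random data before signing (countermeasure to a
# server/imposter choosing the entire signed message)
client_nonce = os.urandom(16)
signature = client_private.sign(challenge + client_nonce, ec.ECDSA(hashes.SHA256()))

# server: recomputes over challenge + returned client data and verifies
# against the registered public key
try:
    registered_public.verify(signature, challenge + client_nonce, ec.ECDSA(hashes.SHA256()))
    print("authenticated")
except InvalidSignature:
    print("rejected")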

At its most basic, there is no actual need to generate a client digital certificate and/or require a PKI and/or certification authority. The basic requirement for a certification authority is to certify the validity of some information (represented by the contents of a digital certificate) for the benefit of other parties which have no means of otherwise obtaining information about the party they are dealing with. This is the first-time message/communication received from a total stranger scenario.

Fundamentally, all that is needed is for the client to
1) generate a public/private key pair
2) be able to register the public key with some server infrastructure
3) be able to generate a digital signature with their private key

and for a little drift, one of the possible digital signature attacks involves the dual-use vulnerability. there are many instances where digital signatures are used for pure authentication ... where the digital signature is applied to purely random data ... that is never actually examined by the signing human.

however, there are also infrastructures where real messages and/or documents are digitally signed, carrying with it a connotation similar to that of a human signature, aka that the person has read, understood, agrees, approves, and/or authorizes what has been digitally signed. so one possible vulnerability is for an attacker to transmit to a client a valid contract or financial transaction, under the ruse of random challenge data. The client then automatically digitally signs the "random challenge data" w/o ever examining what is being digitally signed.

misc. past dual-use digital signature vulnerability postings:
https://www.garlic.com/~lynn/aadsm17.htm#25 Single Identity. Was: PKI International Consortium
https://www.garlic.com/~lynn/aadsm17.htm#55 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm17.htm#57 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm17.htm#59 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#0 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#1 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#2 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#3 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#4 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#6 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#12 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#13 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#17 should you trust CAs? (Re: dual-use digital signature vulnerability)
https://www.garlic.com/~lynn/aadsm18.htm#32 EMV cards as identity cards
https://www.garlic.com/~lynn/aadsm18.htm#56 two-factor authentication problems
https://www.garlic.com/~lynn/aadsm19.htm#2 Do You Need a Digital ID?
https://www.garlic.com/~lynn/aadsm19.htm#24 Citibank discloses private information to improve security
https://www.garlic.com/~lynn/aadsm19.htm#41 massive data theft at MasterCard processor
https://www.garlic.com/~lynn/aadsm19.htm#42 massive data theft at MasterCard processor
https://www.garlic.com/~lynn/aadsm19.htm#43 massive data theft at MasterCard processor
https://www.garlic.com/~lynn/aadsm20.htm#0 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm20.htm#28 solving the wrong problem
https://www.garlic.com/~lynn/2004h.html#51 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004h.html#58 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004i.html#17 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004i.html#21 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2005.html#14 Using smart cards for signing and authorization in applets
https://www.garlic.com/~lynn/2005b.html#56 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005e.html#31 Public/Private key pair protection on Windows
https://www.garlic.com/~lynn/2005g.html#46 Maximum RAM and ROM for smartcards
https://www.garlic.com/~lynn/2005m.html#1 Creating certs for others (without their private keys)
https://www.garlic.com/~lynn/2005m.html#11 Question about authentication protocols
https://www.garlic.com/~lynn/2005o.html#3 The Chinese MD5 attack

Virtual memory and memory protection

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Virtual memory and memory protection
Newsgroups: comp.arch.embedded,comp.arch
Date: Fri, 19 Aug 2005 09:09:17 -0600
"ssubbarayan" writes:
I went through one good tutorial on virtual memory in this website:
http://www.cs.umd.edu/class/spring2003/cmsc311/Notes/Memory/virtual.html

The article says that virtual memory can help in protecting the memory of application processes. That means each individual process running in the CPU's memory can be protected. I am not able to see how this could help in protection. AFAIK, MMUs are the ones which help in memory protection, and MMUs are implemented in hardware where you could set some registers to prevent the processes from crossing boundaries and hence they can protect it. First of all, is my understanding regarding MMUs correct? I am not able to understand the relation between virtual memory and memory protection. Can someone throw some light on this?

I am not from computer science back ground so these things are new to me.So pardon my ignorance.


class assignment?

standard 360s (in the 60s) were real memory (no virtual memory) and offered store protect & fetch protect features.

they also offered supervisor state and problem state ... supervisor state allowed execution of all instructions (including state change control and memory protect specification). problem state only allowed execution of a subset of instructions.

store protect feature typically prevented applications from storing into areas that they weren't allowed to ... like kernel code and/or other applications.

fetch protect prevented applications from even looking/fetching regions they weren't supposed to (fetch protect also implied store protect).

360 model 67 also offered virtual memory support; it was used to separate things into totally different domains. not only could applications be prevented from fetching or storing data into areas they weren't supposed to (outside of their virtual address space) ... but they possibly weren't even aware that such areas existed. this was also used to implement virtual machines. recent reference to some early virtual machine work in the 60s:
https://www.garlic.com/~lynn/2005o.html#4 Robert Creasy, RIP

some number of virtual address space architectures also support r/o segment protection ... i.e. virtual address space ranges that an application can't store into. the original 370 virtual memory hardware was supposed to include r/o segment protect. vm370 was going to use this to protect common instructions and data areas that could be shared across multiple different virtual address spaces (only needing one physical copy of the virtual pages shared across a large number of different application address spaces but still protected). when r/o segment protect was dropped before product announce ... vm370 had to fall back to a hack using (360) store protect ... to protect shared pages. the original relational/sql database implementation was done on vm370 and made use of various kinds of shared segments
https://www.garlic.com/~lynn/submain.html#systemr

a no-execute feature has been added to some hardware recently. this is attempting to address the massive number of buffer overrun vulnerabilities typically associated with c-language implemented applications. attackers have malicious executable code introduced via various kinds of data transfers and then contrive to have execution transferred to the malicious code. system facilities use hardware features to mark pure data regions as no-execute. Areas of (virtual address space) memory that are marked non-executable can be fetched &/or stored ... but the hardware instruction fetch will interrupt if instructions are fetched from such an area.
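
purely as an illustrative aside (my own sketch, nothing to do with the 360/370 hardware above), the modern unix analogue of these protections is the mmap/mprotect interface ... a page can be mapped read/write with no execute permission, and execute permission has to be granted as a separate, deliberate step before control can be transferred into it. a minimal linux/x86-64 example:

#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t pagesize = (size_t)sysconf(_SC_PAGESIZE);

    /* one anonymous page, readable and writable but NOT executable */
    unsigned char *buf = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    buf[0] = 0xc3;   /* x86-64 machine code for "ret" -- just data so far */

    /* jumping into the page at this point would fault on no-execute
       hardware; granting execute permission is an explicit step */
    if (mprotect(buf, pagesize, PROT_READ | PROT_EXEC) != 0) {
        perror("mprotect"); return 1;
    }

    ((void (*)(void))buf)();   /* now legal to execute */
    puts("executed code from the (now executable) page");

    munmap(buf, pagesize);
    return 0;
}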

some past postings mentioning no-execute
https://www.garlic.com/~lynn/2004q.html#82 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#0 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#1 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#3 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#5 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#32 8086 memory space [was: The Soul of Barb's New Machine]
https://www.garlic.com/~lynn/2005b.html#25 360POO
https://www.garlic.com/~lynn/2005b.html#39 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005b.html#66 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#28 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#44 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#53 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#54 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#55 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#65 [Lit.] Buffer overruns

some postings mentioning 360 store/fetch protect feature:
https://www.garlic.com/~lynn/2005.html#3 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#5 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2002f.html#13 Hardware glitches, designed in and otherwise
https://www.garlic.com/~lynn/2003j.html#27 A Dark Day
https://www.garlic.com/~lynn/2005c.html#18 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#33 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#47 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005h.html#9 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005h.html#17 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005j.html#36 A second look at memory access alignment
https://www.garlic.com/~lynn/2005j.html#37 A second look at memory access alignment
https://www.garlic.com/~lynn/2005j.html#39 A second look at memory access alignment
https://www.garlic.com/~lynn/2005l.html#27 How does this make you feel?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

ISA-independent programming language

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ISA-independent programming language
Newsgroups: comp.arch
Date: Fri, 19 Aug 2005 14:22:31 -0600
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Eh? I have just checked up to remind myself, and ISO Pascal was only STARTED about 1977 and wasn't delivered until 1982. By then, it was essentially just an IBM PC language and the control of the language had been, er, taken over by Borland. Outside that community, it was more-or-less defunct.

in the 70s, one of the vlsi tools groups did a pascal for the mainframe, which was then used to develop some number of internal vlsi design tools. it was also released (in the late 70s) as a mainframe product (vs/pascal) and later ported to the rs/6000 (the big motivation was the internal tools). one of the (external) product guys did sit on the iso committee.

in the early/mid 90s ... some number of the vlsi tools were spun off to outside companies (that specialized in vlsi design tools) .. and a lot of these tools had to be ported to other (primarily workstation) platforms.

the mainframe/rs6000 pascal had quite a bit of work done on it supporting the major (vlsi design tool) production applications.

trying to get production applications that were possibly 50k-60k lines of pascal onto some of these other platforms ... was a major undertaking; especially since these pascals appear to have never been used for anything other than student jobs (and in at least one case, the organization had outsourced its pascal support; the organization was maybe a 30 minute drive away, depending on traffic conditions ... but the actual support organization was 12 time zones away).

minor past references:
https://www.garlic.com/~lynn/2004d.html#71 What terminology reflects the "first" computer language ?
https://www.garlic.com/~lynn/2004q.html#35 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005e.html#0 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005e.html#1 [Lit.] Buffer overruns

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

30 Years and still counting

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 30 Years and still counting
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 19 Aug 2005 14:33:58 -0600
steve@ibm-main.lst (Steve Comstock) writes:
I was hoping to host a little party or hospitality event at SHARE, but I am not able to make it this time (I'm planning to be in Seattle in March, however). So I thought I'd send a short post to everyone, thanking them for the ride.

the following is part of a presentation i gave as an undergraduate at the fall68 share meeting in Atlantic City (getting close to 40 years ago).
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

I had started with MFT11 sysgen ... taking the stage2 sysgen apart and breaking it up and re-arranging it so that 1) it could run in the production jobstream (instead of requiring stand-alone time for the starter system) and 2) carefully arranging files and pds members to optimize disk arm ... getting about a three times speedup for typical univ. job streams.

in jan. 68 ... three people from the science center had come out and installed cp/67. in the spring and summer of 68 i rewrote significant portions of the cp/67 kernel, reducing cp/67 kernel pathlength time (for the above referenced jobstream) from 534 cpu seconds to 113 cpu seconds.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

RFC 2616 change proposal to increase speed

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RFC 2616 change proposal to increase speed
Newsgroups: comp.protocols.tcp-ip
Date: Sat, 20 Aug 2005 08:11:05 -0600
dan_delspam writes:
Therefore, my strong suggestion is to increase this value from 2 to 4 like it was already the case before in HTTP 1.0. We could also put more than 4, especially if we define a 20ms delay between each new connection (to avoid server load peaks). We could then easily go up to 10 simultaneous connections, and that would nearly suppress all delays due to the response time, and therefore improve the internet's usability a lot.

the issue isn't so much increasing speed ... but reducing latency (response time).

For quite a while, I've been using a tab browser to handle this differently. i've got a tab bookmark folder with over 100 URLs. I click on the folder ... and it starts a background fetch of all the URLs into different tabs.

Get a cup of coffee and then i can "cntrl-pageup" & "cntrl-pagedown" cycle around the tabs in real time (no latency). I can click on individual URLs and have them brought into a background tab (latency is being masked while i look at something else).

current issue is somewhere around 250-300 tabs ... the browser starts to get somewhat sluggish; clicking a new URL into a background tab locks up other things in the browser ... even simple scroll up/down in the current/active tab.

the implication is that there are some browser data structures that are simple linear lists.

this is analogous to, but different from, early webserver problems.

tcp was envisioned as being for long running sessions. http somewhat cribbed tcp for simple datagram support. the minimum packet exchange for tcp is seven packets, with a long dangling finwait when the session is closed. finwait lists were linear searches. http use of tcp was resulting in thousands of entries on the finwait list at webservers. the big webservers were seeing 95% of total cpu use being spent on running the finwait list.
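
purely as a toy illustration (my own sketch, not the actual bsd or sequent code), the cost profile of a linear finwait-style list looks like:

#include <stdio.h>
#include <stdlib.h>

struct conn { unsigned id; struct conn *next; };

/* linear search, O(n) per lookup -- fine for a handful of long-running
   sessions, painful for thousands of short-lived http connections */
static struct conn *find_linear(struct conn *head, unsigned id,
                                unsigned long *probes)
{
    for (struct conn *c = head; c; c = c->next) {
        (*probes)++;
        if (c->id == id) return c;
    }
    return NULL;
}

int main(void)
{
    enum { N = 50000 };                 /* pretend: dangling finwait entries */
    struct conn *head = NULL;
    for (unsigned i = 0; i < N; i++) {
        struct conn *c = malloc(sizeof *c);
        c->id = i; c->next = head; head = c;
    }

    unsigned long probes = 0, lookups = 0;
    for (unsigned i = 0; i < N; i += 500) {     /* sample lookups across the list */
        find_linear(head, i, &probes);
        lookups++;
    }

    /* averages roughly N/2 probes per lookup; hashing on the connection
       4-tuple would make each lookup effectively constant time instead */
    printf("%lu lookups, average %lu probes each\n",
           lookups, probes / lookups);
    return 0;
}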

netscape servers were having thruput problems ... so they were being replicated ... at one point there were something like netscape1 thru possibly netscape30 ... where users were asked to type in a different hostname to try and load balance.

as an aside ... i believe it was google (or yahoo?) that did the first front-end router load-balancing of backend servers ... the google boundary routers were modified to load balance incoming tcp setup requests across the backend servers.

at one point, netscape replaced a whole pool of webservers with a large sequent server. the claim was that possibly sequent had been the first to rewrite the tcp finwait list handling ... the story was that they had been selling into the commercial unix market and had customers with 20,000 concurrent telnet (tcp) sessions into a single server. With that many long-running telnet sessions, they had started having some significant finwait list activity (the high finwait list activity with http isn't from an extremely large number of concurrent long-running sessions ... but from a high rate of datagram-like, short-lived requests abusing tcp).

random data point ... vmtp (rfc1045) does a reliable transaction protocol in a minimum of 5 packets ... and xtp
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

does a reliable transaction protocol in a minimum of 3 packets

misc. past posts on tab-browsing
https://www.garlic.com/~lynn/2004e.html#54 Is there a way to configure your web browser to use multiple
https://www.garlic.com/~lynn/2004g.html#6 Adding Certificates
https://www.garlic.com/~lynn/2004h.html#25 Why are programs so large?
https://www.garlic.com/~lynn/2004h.html#51 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004i.html#16 New Method for Authenticated Public Key Exchange without Digital Ceritificates
https://www.garlic.com/~lynn/2005c.html#5 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005n.html#8 big endian vs. little endian, why?
https://www.garlic.com/~lynn/2005n.html#15 1.8b2 / 1.7.11 tab performance
https://www.garlic.com/~lynn/2005n.html#41 Moz 1.8 performance dramatically improved

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

dbdebunk 'Quote of Week' comment

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: dbdebunk 'Quote of Week' comment
Newsgroups: comp.databases.theory
Date: Sun, 21 Aug 2005 08:10:11 -0600
"David Cressey" writes:
Hmmm. I started out with an IBM 7090 and a PDP-1 (That's a one, not an eleven), From there moving on to PDP-6 and 10, DECsystem-10 and 20, and VAX. SQL came much later for me, after Pascal, Datatrieve, and VAX Rdb/VMS.

recent posting related to the 50th anniv share meeting in boston starting today & having given a presentation as an undergraduate at the fall68 Atlantic City share meeting
https://www.garlic.com/~lynn/2005o.html#12

part of fall68 presentation
https://www.garlic.com/~lynn/94.html#18

share web page
http://www.share.org/

recent posting mentioning the platform the original relational/sql work was done on
https://www.garlic.com/~lynn/2005o.html#10

other postings about original relational/sql work
https://www.garlic.com/~lynn/submain.html#systemr

and a recent thread that had some pascal discussion
https://www.garlic.com/~lynn/2005o.html#11

slightly older thread discussing vax/vms
https://www.garlic.com/~lynn/2005n.html#10

and even older posting giving number of vax shipments, sliced and diced by model, year, us, non-us
https://www.garlic.com/~lynn/2002f.html#0

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

open-ssl-stream hangs on post

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: open-ssl-stream hangs on post
Newsgroups: gnu.emacs.gnus
Date: Sun, 21 Aug 2005 08:26:18 -0600
i'm running ngnus 0.3 on fedora FC3 with open-ssl-stream. openssl is 0.9.7a

reading news works fine w/o problem

when i go to post ... the posting process hangs, hanging emacs (same symptoms with ognus 0.24). initial reaction is to kill emacs and start over.

however, i also discovered i can kill openssl process and recover.

i find the openssl process and kill it, gnus appears to do a little processing (nothing displays, emacs remains hung), restarts openssl process and then stops again. i find the openssl process and kill it again. this time gnus finally comes back and says that the posting failed.

however, the posting actually did work. problem appears to be some interaction between gnus and openssl regarding whether something actually happened or not.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

ISA-independent programming language

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ISA-independent programming language
Newsgroups: comp.arch
Date: Sun, 21 Aug 2005 08:42:00 -0600
re:
https://www.garlic.com/~lynn/2005o.html#11 ISA-independent programming language

the mainframe tcp/ip product was also implemented in vs/pascal.

there were some issues with its thruput ... getting 44kbytes/sec while consuming nearly a full 3090 cpu.

i did the product modifications for rfc 1044 support and in some testing at cray research between a cray and a 4341-clone ... it was peaking at the channel interface speed using only a modest amount of cpu (about 1mbyte/sec using maybe 200-300kips).

past rfc 1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

that testing trip sticks in my mind ... we were on a nw flight sitting on the runway at sfo, taking off nearly 20 minutes late (for minneapolis). part way thru the flight, i noticed a number of people congregated in the back of the plane. i went back to find out what all the interest was about ... and it turns out they were discussing the big quake that hit very shortly after we left the ground.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Smart Cards?

Refed: **, - **, - **, - **, - **
From: <lynn@garlic.com>
Newsgroups: microsoft.public.security
Subject: Re: Smart Cards?
Date: Sun, 21 Aug 2005 08:31:14 -0700
Steven L Umbach wrote:
They can be for any domain user. They are a way of using a much more secure authentication method to logon to a computer that requires a smart card and pin number. A smart card uses PKI for authentication and the user's smart card contains the user's private key. If you configure a user account to require a smart card for logon then there is no way for someone else to logon as that user unless they have the user's smart card in hand and know the pin number, barring a compromise of the domain, local computer, or PKI such as an unauthorized or malicious user being or becoming an enrollment agent.

PKI uses what is called a public/private key pair. The public key can be widely distributed and the private key is considered sensitive and must be protected. A private key can decrypt what a public key encrypts and can be used in an authentication challenge. Theoretically the public/private key pair is unique in that only the one private key can decrypt what the matching public key encrypts. The public key is commonly referred to as the certificate.


the technology is asymmetric key cryptography ... what one key (of a key-pair) encodes, the other key decodes ... as opposed to symmetric key cryptography which uses the same key for encryption and decryption.

there is a business process called public key ... where one of the keys (of the key-pair) is labeled as public and freely distributed. the other key is labeled private, kept confidential and never disclosed.

there is a business process called digital signature ... where the "signer" computes the hash of some message/document and encodes the hash with their private key. they then transmit the message/document and the digital signature. the recipient recomputes the hash on the message/document, decodes the digital signature with the appropriate public key (producing the original hash) and compares the two hashes. if the two hashes are the same, the recipient can assume:

1) the message/document hasn't changed since originally signed

2) something you have authentication ... i.e. the sender has access to and use of the corresponding private key.
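
purely as an illustration of the above sign/verify flow (my own sketch using the openssl 3.x EVP interfaces; the freshly generated rsa key pair here just stands in for the signer's long-term public/private key pair):

#include <stdio.h>
#include <openssl/evp.h>
#include <openssl/rsa.h>

int main(void)
{
    const unsigned char msg[] = "message/document to be signed";
    unsigned char sig[512];
    size_t siglen = sizeof sig;

    /* key pair: private key kept confidential, public key distributed */
    EVP_PKEY *pkey = EVP_RSA_gen(2048);

    /* signer: hash the message and encode the hash with the private key */
    EVP_MD_CTX *sctx = EVP_MD_CTX_new();
    EVP_DigestSignInit(sctx, NULL, EVP_sha256(), NULL, pkey);
    EVP_DigestSign(sctx, sig, &siglen, msg, sizeof msg - 1);

    /* recipient: recompute the hash and compare it against the digital
       signature decoded with the (corresponding) public key */
    EVP_MD_CTX *vctx = EVP_MD_CTX_new();
    EVP_DigestVerifyInit(vctx, NULL, EVP_sha256(), NULL, pkey);
    int ok = EVP_DigestVerify(vctx, sig, siglen, msg, sizeof msg - 1);

    printf("digital signature %s\n", ok == 1 ? "verifies" : "does NOT verify");

    EVP_MD_CTX_free(sctx);
    EVP_MD_CTX_free(vctx);
    EVP_PKEY_free(pkey);
    return 0;
}

(compile with -lcrypto; error checking omitted for brevity)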

from 3-factor authentication
https://www.garlic.com/~lynn/subintegrity.html#3factor
something you have (i.e. something kept unique)
something you know (i.e. pin/password)
something you are (i.e. biometrics)


recent postings about threats/vulnerability characteristics of multi-factor authentication
https://www.garlic.com/~lynn/aadsm20.htm#1 Keeping an eye on ATM fraud
https://www.garlic.com/~lynn/aadsm20.htm#23 Online ID Theives Exploit Lax ATM Security
https://www.garlic.com/~lynn/2005o.html#1 The Chinese MD5 attack

where pin/password (something you know) is frequently used as a countermeasure to a lost/stolen object (something you have). one basic concept of multi-factor authentication ... is the different factors/mechanisms not having common vulnerabilities.

widely deployed infrastructures for authentication are kerberos (deployed as basic authentication mechanism in many platforms, including windows)
https://www.garlic.com/~lynn/subpubkey.html#kerberos
and radius
https://www.garlic.com/~lynn/subpubkey.html#radius

both having started out with userid/password type operation ... where some sort of shared-secret
https://www.garlic.com/~lynn/subintegrity.html#secret

is registered for authentication. both infrastructures have had upgrades where a public key is registered in lieu of a shared-secret and digital signature authentication occurs in lieu of comparing for a password match. public key has the advantage that it can only be used for authentication, not impersonation ... where the same shared-secret can be used for both authentication as well as impersonation.
https://www.garlic.com/~lynn/subpubkey.html#certless

the original internet pk-init draft for kerberos specified simple public key registration for authentication. it was later extended to also include the option of using digital certificates.

PKI, certification authorities, and digital certificates are business processes for addressing the situation involving first time communication with a stranger. this is the letters of credit model from the sailing ship days and was targeted at the offline email scenario from the early 80s where somebody dials their local (electronic) post office, exchanges email and then hangs up. they are now potentially faced with first time email from a stranger ... and they have no local repository or any other means of accessing information about the stranger.

another issue is that public key operations are implemented using both software containers for the something you have private key as well as hardware tokens. Hardware tokens of various kinds (including smartcards) are commonly accepted as having a higher integrity level and trust vis-a-vis software containers.

a couple recent postings on subject of integrity levels (and possibility of the authenticating party needing to determine the security/integrity level proportional to their risk):
https://www.garlic.com/~lynn/2005o.html#2 X509 digital certificate for offline solution
https://www.garlic.com/~lynn/2005o.html#3 The Chinese MD5 attack

some recent postings about business process characteristics of certification authorities, digital certificates, and PKIs.
https://www.garlic.com/~lynn/aadsm20.htm#0 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm20.htm#11 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm20.htm#13 ID "theft" -- so what?
https://www.garlic.com/~lynn/aadsm20.htm#15 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm20.htm#17 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm20.htm#21 Qualified Certificate Request
https://www.garlic.com/~lynn/aadsm20.htm#26 [Clips] Does Phil Zimmermann need a clue on VoIP?
https://www.garlic.com/~lynn/aadsm20.htm#28 solving the wrong problem
https://www.garlic.com/~lynn/aadsm20.htm#30 How much for a DoD X.509 certificate?
https://www.garlic.com/~lynn/aadsm20.htm#31 The summer of PKI love
https://www.garlic.com/~lynn/aadsm20.htm#32 How many wrongs do you need to make a right?
https://www.garlic.com/~lynn/aadsm20.htm#33 How many wrongs do you need to make a right?
https://www.garlic.com/~lynn/2005m.html#37 public key authentication
https://www.garlic.com/~lynn/2005n.html#9 Which certification authority to use
https://www.garlic.com/~lynn/2005n.html#39 Uploading to Asimov
https://www.garlic.com/~lynn/2005n.html#43 X509 digital certificate for offline solution
https://www.garlic.com/~lynn/2005n.html#49 X509 digital certificate for offline solution
https://www.garlic.com/~lynn/2005o.html#2 X509 digital certificate for offline solution
https://www.garlic.com/~lynn/2005o.html#6 X509 digital certificate for offline solution
https://www.garlic.com/~lynn/2005o.html#7 X509 digital certificate for offline solution
https://www.garlic.com/~lynn/2005o.html#9 Need a HOW TO create a client certificate for partner access

Data communications over telegraph circuits

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Data communications over telegraph circuits
Newsgroups: alt.folklore.computers
Date: Sun, 21 Aug 2005 18:48:26 -0600
Morten Reistad writes:
real world example :

I am currently involved in telephony; and the databases for subscriptions etc. Telephones have a "high-nine" reliability requirement; but they are reasonably simple, and the tables to support them are few and simple too. Interfaces are pretty standard, and good full-systems failover solutions exist. You can therefore get to 4-5 nines using a pretty straightforward path.

That is, until you consider the updates and update consistencies to databases. They have interlocking relationships that make duplication very difficult.

But, CHANGING a phone subscription is not a five-nine task. OPERATING IT is. Therefore move changes out, and make the transaction log handling a separate subsystem; and updating the production facilities a separate task with a lot lower resiliency than the phones, similar to the "web stuff" that takes subscriptions. Then the phones work even if the database master is blown to smithereens, or the web site is down.


about the same time we were looking at the previous example
https://www.garlic.com/~lynn/2005n.html#26 Data communications over telegraph circui

we were doing ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

and we spent some time with a telco organization that had a five-nines operation. they currently had a system with triple-redundant hardware connected to SS7 with replicated T1 links.

the problem was that any (scheduled) software system maint. for the box consumed close to ten years' worth of allowed down time.

a replicated high availability system addressed the system down time issue related to doing software maint. ... and while the individual hardware boxes might not quite have five-nines ... the SS7 logic would already redrive a request on the 2nd T1 if it didn't get an answer on the first T1. The SS7 redrive logic would mask any individual outage ... and while an individual backend box might not be quite five-nines ... the overall infrastructure (because of the SS7 redrive logic) would be much better than five-nines.

the solution by the replicated hardware group ... was to install a pair of replicated hardware boxes ... in order to meet the overall system five-nines objective.

however, once a pair of backend boxes was being used to handle the overall system outage issue ... it was no longer necessary to build the individual backend boxes with replicated hardware ... aka a high availability backend configuration (relying on the SS7 fault-masking redrive) made redundant hardware in the individual backend computers superfluous.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Data communications over telegraph circuits

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Data communications over telegraph circuits
Newsgroups: alt.folklore.computers
Date: Mon, 22 Aug 2005 08:00:50 -0600
Morten Reistad writes:
Combat systems have the inverse priority BTW. Keep operating at all costs. An F16 does not have a stable fall-back flying mode; it must have the computer running. Probably a better example for your case.

and of course the person responsible for much of the f16 design
https://www.garlic.com/~lynn/subboyd.html#boyd2
https://www.garlic.com/~lynn/subboyd.html#boyd

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Why? (Was: US Military Dead during Iraq War

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why? (Was: US Military Dead during Iraq War
Newsgroups: alt.folklore.computers
Date: Mon, 22 Aug 2005 09:09:32 -0600
blmblm writes:
Sing it. The argument against this sort of centralized scheme was (if I remember right) that it didn't give users enough control over their own environments, but I rarely found that to be a problem in the days when my office "computer" was a terminal connected to a good multiuser shared system. Install software? most Unix stuff installs fine in a user-owned directory, no admin privileges needed. Maybe it was similar with TOPS-10?

the argument i frequently heard was that the support costs supposedly disappeared from the budget ... since everybody was mostly doing their own support ... so it didn't actually show up in a budget that needed management, reporting, defending, etc.

other business units constantly complaining about how much money was going to the datacenter ... just eliminate the datacenter and let each individual business unit eat their own costs.

there was the issue that if you have a computer supporting thousands of users ... you tended to do a lot more testing and change control ... in case things went wrong. people could willy-nilly do things to their own computer and (in the common opinion at the time) it just affected the individual.

personal responsibility and empowerment.

misc. time-sharing posts
https://www.garlic.com/~lynn/submain.html#timeshare

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

help understand disk managment

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: help understand disk managment
Newsgroups: linux.redhat.misc
Date: Mon, 22 Aug 2005 09:45:07 -0600
"peter" writes:
Thanks, now I know why I was confused. There are two types of file systems in redhat.

In the simple type, hard drives are partitioned and partitions are mounted.

In the complicated type (LVM), drives are divided into physical volumes, then physical volumes are combined into a logical volume group, which is like a big resizable virtual drive. This logical volume group can then be "partitioned" into logical volumes which can then be mounted and resized at will.

Is this more or less correct? What is the correct way to refer to the simple type file system?


disk partitioning, ala pc/dos ... sort of like minidisks from cp/67 ... circa mid-60s

minor ref
https://www.garlic.com/~lynn/2004e.html#38 [REALLY OT!] Overuse of symbolic constants
https://www.garlic.com/~lynn/2004h.html#40 Which Monitor Would You Pick??????
https://www.garlic.com/~lynn/2004k.html#46 Xah Lee's Unixism

LVM started with aix/v3 for the rs/6000 and included support for things other than straight partitioning: striping, mirroring, concatenation, etc

as an aside ... the journaled file system (JFS) was also done for the same rs/6000 aix/v3 release

....

... so one flavor is sort of the pc/dos heritage ... modulo minidisks and virtual machine stuff from the mid-60s.

the other flavor is somewhat more unix heritage ... at least from aix/v3.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

help understand disk managment

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: help understand disk managment
Newsgroups: linux.redhat.misc
Date: Mon, 22 Aug 2005 12:14:42 -0600
"peter" writes:
Another reason I was confused is caused by the name "logical volume group". The name implies it is a group of logical volumes when in fact it is a group of physical volumes. I suggest we rename it "physical volume group".

one of the reasons for the original LVM label ... was options other than a 1:1 mapping between the logical volume and the physical partition(s) ... aka mirroring, striping, concatenation.

the ms/pc dos stuff purely did physical partitioning with a 1:1 mapping between the (virtual) disk (for filesystem) and the physical partition.

when we did ha/cmp product
https://www.garlic.com/~lynn/subtopic.html#hacmp

... another minor reference
https://www.garlic.com/~lynn/95.html#13

we used lvm mirroring for no-single-point-of-failure ... and JFS for fast restart after outage.

there were even gimmicks played with 3-way mirroring for backup ... add a (LVM) 3rd mirror on the fly ... wait until it sync'ed ... and then take one of the mirrored disks offline and send it offsite for disaster/recovery.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Collins C-8401 computer?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Collins C-8401 computer?
Newsgroups: alt.folklore.computers
Date: Mon, 22 Aug 2005 13:49:18 -0600
hancock4 writes:
I don't think GE was considered part of the "BUNCH" (Burroughs, Univac, Control Data, and Honeywell).

GE was one of the seven dwarfs ... and then sold off its computer stuff to honeywell ... part of the reduction from seven to five.

ref. from search engine
http://www.answers.com/topic/honeywell

.. from the above
In the mid-1960s, Honeywell's 200 series gave IBM serious competition. It outperformed IBM's very successful 1401 computer, which it emulated, causing IBM to accelerate its introduction of its System/360. In 1966, Honeywell acquired Computer Control Company's minicomputer line, and in 1970, it acquired the assets of GE's computer business. The computer division was renamed Honeywell Information Systems, Inc.

....

misc. past dwarfs postings
https://www.garlic.com/~lynn/2002o.html#78 Newsgroup cliques?
https://www.garlic.com/~lynn/2003b.html#61 difference between itanium and alpha
https://www.garlic.com/~lynn/2003.html#36 mainframe
https://www.garlic.com/~lynn/2003.html#71 Card Columns
https://www.garlic.com/~lynn/2003o.html#43 Computer folklore - forecasting Sputnik's orbit with
https://www.garlic.com/~lynn/2004h.html#15 Possibly stupid question for you IBM mainframers... :-)
https://www.garlic.com/~lynn/2004l.html#19 FW: Looking for Disk Calc program/Exec (long)
https://www.garlic.com/~lynn/2005k.html#0 IBM/Watson autobiography--thoughts on?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

is a computer like an airport?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: is a computer like an airport?
Newsgroups: comp.protocols.tcp-ip
Date: Mon, 22 Aug 2005 19:04:47 -0600
stuff goes on in different time frames.

airports have flights arriving early, late, delayed ... and gates can quite frequently get re-assigned ... as airlines & airports attempt to juggle resources in the face of glitches.

it used to be that planes spent a lot more time circling ... ATC now tries to hold flights before take-off until there are scheduled resources available for landing.

delayed in-coming equipment can really start to mess up outgoing operations ... and the whole infrastructure gets into negative feedback ... the only thing apparently saving it is the overnight slowdown.

i've frequently described a flight from san jose to boston with a connection in chicago. the san jose flight was delayed 30 minutes because of a 30 minute thunderstorm going thru chicago that affected traffic. by the time we finally landed in chicago ... flights were experiencing 4hr delays ... because of a 30min reduction in traffic/thruput first thing in the morning. the infrastructure had almost no resiliency for handling glitches; instead of the effects of a glitch being mitigated over the course of the day ... the effect was actually magnified.

of course, almost any complex system can be organized such that it gets into negative feedback with the effects of small glitches magnified as time passes (and frequently having to resort to complete system shutdown before things recover).

nominally you try and design complex systems to degrade gracefully under adverse conditions ... as well as recover gracefully.

i tried to do graceful degradation with dynamic adaptive resource management when i was an undergraduate ... it was frequently called the fair share scheduler ... since the default policy was fair share.
https://www.garlic.com/~lynn/subtopic.html#fairshare

trivia ... while 6/23/69 unbundling announcement started the pricing of application software ... kernel software was still free. the (later re)release of the resource manager represented the first transition to kernel software pricing
https://www.garlic.com/~lynn/submain.html#unbundle

when we were doing the original payment gateway
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

we were looking at trying to translate payment transactions from a circuit based infrastructure where you could get SLA (service level agreement) contracts (with things like financial penalties for not meeting contracted service levels) to an internet environment. the original cut had just been to remap the message formats from the circuit based infrastructure to a packet based infrastructure that carried with it none of the normal service level processes. we were looking at formulating compensating procedures for internet operation to try and approximate circuit-based service guarantees.

In the middle of this period the internet transitioned to hierarchical routing ... in part because of the large growth. what was left was getting redundant connections into multiple points in the internet backbone ... and using multiple a-records to map all the different ip-addresses to the same payment gateway.

previously, you could have had multiple different attachments to different places in the internet backbone ... and be able to advertise lots of diverse routings to the same ip-address. with the change to hierarchical routing, the payment gateway had to have lots of different ip-addresses so that the diverse paths could be correctly routed.
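
a minimal sketch of what the multiple a-record approach looks like from the client side (my own illustration; "gateway.example.com" is a placeholder, not the actual payment gateway hostname) ... resolve the name, walk every address returned, and try each in turn until a connect succeeds:

#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct addrinfo hints, *res, *ai;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;        /* v4 or v6 */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo("gateway.example.com", "443", &hints, &res) != 0) {
        fprintf(stderr, "resolution failed\n");
        return 1;
    }

    int fd = -1;
    for (ai = res; ai; ai = ai->ai_next) {   /* one entry per a-record */
        fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0) continue;
        if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
            break;                           /* this address/path worked */
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);

    printf("%s\n", fd >= 0 ? "connected" : "all addresses failed");
    if (fd >= 0) close(fd);
    return 0;
}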

misc. recent postings about trying to do business critical dataprocessing on the internet
https://www.garlic.com/~lynn/2005.html#18 IBM, UNIVAC/SPERRY, BURROUGHS, and friends. Compare?
https://www.garlic.com/~lynn/2005b.html#40 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#42 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005f.html#38 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005h.html#16 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005i.html#42 Development as Configuration
https://www.garlic.com/~lynn/2005k.html#7 Firefox Lite, Mozilla Lite, Thunderbird Lite -- where to find
https://www.garlic.com/~lynn/2005k.html#17 More on garbage collection
https://www.garlic.com/~lynn/2005n.html#30 Data communications over telegraph circuits

for some routing topic-drift ... in the past i got the opportunity to rewrite the (reservation system) routes application ... getting a copy of the complete OAG for all (commercially scheduled) flight segments world-wide. random refs:
https://www.garlic.com/~lynn/96.html#29 Mainframes & Unix
https://www.garlic.com/~lynn/2000f.html#20 Competitors to SABRE?
https://www.garlic.com/~lynn/2001d.html#69 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001d.html#74 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2002g.html#2 Computers in Science Fiction
https://www.garlic.com/~lynn/2002j.html#83 Summary: Robots of Doom
https://www.garlic.com/~lynn/2003o.html#17 Rationale for Supercomputers
https://www.garlic.com/~lynn/2004b.html#6 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004o.html#23 Demo: Things in Hierarchies (w/o RM/SQL)
https://www.garlic.com/~lynn/2004q.html#85 The TransRelational Model: Performance Concerns

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

auto reIPL

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: auto reIPL
Newsgroups: bit.listserv.ibm-main,bit.listserv.vmesa-l
Date: Mon, 22 Aug 2005 20:26:31 -0600
Ted MacNEIL writes:
UNIX has had the function for years, since quick re-boots have been the priority, rather than fewer.

cp67 got fast reboots ... vm370 inherited it from cp67 ...

the following has a story about somebody at MIT modifying cp67, resulting in cp67 crashing 27 times in one day. there is some comment about that not being possible with multics because it took multics so long to reboot.
http://www.multicians.org/thvv/360-67.html

i had written tty/ascii terminal support for cp67 while an undergraduate at the university. i had done some one-byte arithmetic for calculating input line length (you couldn't get long lines with a teletype).

i think the story is that there was some ascii device at harvard that had something like 1200-2000 character line length ... that they needed to connect to the cp67 at mit. the mit system modified various fields to increase the maximum line length ... but didn't catch the one byte arithmetic.

some drift ... in the process of adding tty/ascii support ... i tried to extend the automatic terminal recognition from 1052 & 2741 to include tty. in theory the 2702 terminal controller would allow it. initial testing sort of worked ... you could dynamically change the line scanner on each port ... and correctly figure out whether 1052, 2741, or tty and get the correct line scanner. however, the 2702 had a short cut where the line-speed oscillator was hardwired to each port (allowing the correct line-scanner to be dynamically set on a port by port basis ... but it wouldn't also change the port line-speed).

this sort of led to the university having a project to build a clone controller ... that could support both dynamic terminal type as well as dynamic line speed. somewhere there is a write-up blaming four of us for helping spawn the ibm plug-compatible clone controller business.
https://www.garlic.com/~lynn/submain.html#360pcm

on the other hand ... when i was getting ready to release the resource manager ... minor trivia drift ... the resource manager was the guinea pig for the first priced kernel software ... past postings on unbundling and software pricing
https://www.garlic.com/~lynn/submain.html#unbundle

we did some automated benchmarking process ... which included extremely severe stress testing
https://www.garlic.com/~lynn/submain.html#bench

and there were numerous things that could consistently kill vm370. as a result ... before releasing the resource manager for vm370, there were several parts of vm370 that i had to redo ... primarily the task serialization to eliminate all sorts of timing related failures (as well as all cases of hung and/or zombie users).

not too long after releasing the resource manager ... i got the opportunity to do some stuff with the disk engineering and product test labs in bldg. 14 & 15.

the labs had quite a few "testcells" ... engineering development hardware that required all sorts of regression testing. they were running these tests with stand-alone processing and dedicated, scheduled processor time. they had tried doing it under operating system control ... but found that the MTBF (at the time) for MVS was on the order of 15 minutes with a single testcell.

i undertook to rewrite the i/o supervisor so things would never fail ... they were eventually able not only to do single testcell operation in an operating system environment ... but also multiple concurrent testcell testing on the same machine. misc. past posts on the disk engineering lab.
https://www.garlic.com/~lynn/subtopic.html#disk

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

How good is TEA, REALLY?

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How good is TEA, REALLY?
Newsgroups: sci.crypt
Date: Tue, 23 Aug 2005 07:57:47 -0600
Paul Rubin <http://phr.cx@NOSPAM.invalid> writes:
If you mean bulk financial data (like a credit card issuer's database), the institutions that manage that kind of data are very big into following standards. Your choices are 3DES and AES. Even if Cipher XYZ is almost certainly better, it's not an option. Maybe AES has 0.001% chance of being broken and cipher XYZ has only 0.0001% chance, but you face much worse headache if you did something nonstandard and got unlucky ("why did you think you were smarter than all those experts?") than if you followed the standard and the experts turned out to be wrong. So, for that kind of application, whether TEA (or CAST, or Serpent, or RC4, or whatever) is technically good or not is absolutely irrelevant. You have to use 3DES or AES, end of story.

part of the issue isn't even technology

there are financial industry standards, at the ISO level, tc68
http://www.iso.ch/iso/en/stdsdevelopment/tclist/TechnicalCommitteeDetailPage.TechnicalCommitteeDetail?TC=68

and in the US, the ISO chartered organization is X9
http://www.x9.org/

and a couple years ago, nist shifted from requiring that all NIST standards have to be written from scratch ... to allowing direct adaptation of X9 standards.

in instances involving legal dispute ... if the institution has deployed something other than what is specified in some standard, the burden of proof can shift to the institution ... otherwise the burden of proof is on the other party.

there was some case in germany from a couple years ago involving somebody having lost some reasonably large sum of money while the financial institution was still using some technology that was no longer covered by the standard. instead of the party having to prove it was the financial institution's fault ... the burden of proof shifted to the financial institution to prove that it couldn't have been their fault.

in that sense, rather than being a technology risk management issue, it becomes a litigation risk management issue.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Data communications over telegraph circuits

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Data communications over telegraph circuits
Newsgroups: alt.folklore.computers
Date: Tue, 23 Aug 2005 07:35:56 -0600
Morten Reistad writes:
The statutes of limitations are 30 years.

It applies to details in your service, and all classified data you may have touched. Configurations, actual clearances, personnel information etc; and daily orders. For some this is a grey area, as they consider their words as officers to be binding, but there is no legal recourse to any sanctions against you after 30 years.

I was honorably discharged on April 2nd 1984, and the 30 years count from that date.

You are permitted to write essays about what you know and put them in escrow until the statute of limitations expire. They will inherit the classifications from the author(s).


we were walking down an aisle at some past supercomputer conference with a couple people who stopped and said they should go another direction ... up ahead was a booth from some location that wasn't supposed to exist (at least within recent history covered by some regulation or another, which supposedly would require them to report if they ever heard it mentioned).

and somewhat obscurely related to that location.
https://www.garlic.com/~lynn/95.html#13

and some drift on a topic that is almost related to the subject line (from comp.protocols.tcp-ip)
https://www.garlic.com/~lynn/2005o.html#24 is a computer like an airport?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Penn Central RR computer system failure?

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Penn Central RR computer system failure?
Newsgroups: alt.folklore.computers,misc.transport.rail.americas
Date: Wed, 24 Aug 2005 13:15:37 -0600
hancock4 writes:
Would anyone be accurately familiar with some of the specifics of the PC's computer failure? Did they in fact try to merge two incompatible systems? Were system failures really a contributor to the bankruptcy? How sophisticated were their systems in 1968 (computers generally weren't that fancy back then like today)?

in 1970 ... the B&M commuter operation in the boston area was part of the PENN bankruptcy .... a cascading effect of east coast railroad bankruptcies in that time-frame
http://www.nashuacitystation.com/bostonmaine.html Boston and Maine Railroad

one of the things mentioned was that there had been deferred track maint. on major east coast routes going back 10 years or more (money going for other purposes, salaries, dividends, bonuses, etc).

there were railroad ties that you could stick your finger into ... and speed limits were 10-15 mph (because of poor track condition) on sections that used to be 80mph.

there was a joke about one section called the freight car graveyard out near acton ... where the track condition was so bad that the speed limit for freight trains was 5 mph ... and even with that there were frequent derailments (and pieces of abandoned freight cars lying about).

i got the impression that quite a bit of the PENN system had been run that way ... lots of money going to places other than the nuts and bolts of railroad operation.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Penn Central RR computer system failure?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Penn Central RR computer system failure?
Newsgroups: alt.folklore.computers,misc.transport.rail.americas
Date: Wed, 24 Aug 2005 18:34:10 -0600
John McCoy writes:
I don't know any specifics about the computers, but in general everything that the NYC did one way, the PRR did differently. Everything involved merging incompatible systems.

i remember reading an article a couple years ago about the union pacific acquisition of southern pacific (1996?).

the article was about railroad gridlock trying to get harvested grain to houston out onto ships for delivery overseas.

supposedly the port and surrounding area didn't have sufficient staging area for the (big?) increase in grain shipment traffic ... so southern pacific people got very clever about carefully staging trains thruout texas to avoid gridlock in the houston area.

all of that institutional knowledge and expertise was lost in the union pacific take-over ... with union pacific attempting to do business as usual railroad operations; resulting in enormous freight car traffic jams in the houston area.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

auto reIPL

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: auto reIPL
Newsgroups: bit.listserv.ibm-main,bit.listserv.vmesa-l
Date: Thu, 25 Aug 2005 08:11:46 -0600
Anne & Lynn Wheeler writes:
we did some automated benchmarking process ... which included extremely severe stress testing
https://www.garlic.com/~lynn/submain.html#bench

and there were numerous things that could consistently kill vm370. as a result ... before releasing the resource manager for vm370, there were several parts of vm370 that i had to redo ... primarily the task serialization to eliminate all sorts of timing related failures (as well as all cases of hung and/or zombie users).


re:
https://www.garlic.com/~lynn/2005o.html#25 auto reIPL

part of the automated benchmark process was to generate a new kernel, automatically reboot, automatically bring up a set of emulated virtual address spaces, and have them run a variety of workload scripts.

a master script was built ... so the whole process could be repeated with a wide variety of different workloads and configurations ... and run unattended.

part of the benchmarking process was creating the "autolog" command ... which was then picked up and included in the base vm370 release 3.

for the final validation of the resource manager ... there was a set of 2000 benchmarks run that took three months elapsed time. the first 1000 or so benchmarks were selected from points on the surface of the workload envelope and a relatively even distribution of points within the envelope ... with 100 or so workloads that were points way outside the workload envelope to see how things gracefully degraded.

the workload envelope was sort of built up from years of collected performance data from a wide variety of internal datacenter operations. the cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech

had nearly 10 years of round-the-clock performance data. there were then hundreds of other datacenters that had lesser amounts of information ... but also represented quite a wide variety of workloads.

there was also an analytical performance model written in APL by the science center. some customers may have run into it when dealing with a salesman. HONE was a vm370-based time-sharing system that provided world-wide online facilities for all the sales, marketing and field people. various system and performance models had been developed at the science center in the early 70s. One of these evolved into the performance predictor available to sales & marketing on the HONE system. sales/marketing could acquire approximate workload and configuration information from the customer, input it into the performance predictor, and ask "what-if" questions (in the 70s there were quite a few APL-based implementations of things that frequently are done in spreadsheet technologies these days ... although the APL calculation capability could be quite a bit more sophisticated). There was an extensive set of other APL-based applications on the system for sales & marketing use. For instance, starting with the 370 115/125 ... all equipment orders had to first be run thru a HONE (apl-based) "configurator".
https://www.garlic.com/~lynn/subtopic.html#hone

In any case, for something like the second thousand benchmarks (as part of calibrating and verifying the resource manager for release), the performance predictor was modified to look at the previous benchmark results and attempt to select an interesting operational point for the next benchmark. Then it would kick off the seeding of the next round of benchmark scripts, possibly a kernel rebuild, and then an automated reboot/reipl of the system.

note that while cp67 had automated/fast reboot ... it came up as a relatively bare system. as cp67 (and vm370) evolved, there came to be more and more services that had to be manually activated (after reipl/reboot). The "autolog" command that was originally developed for automated benchmarking was adopted as part of production reipl/reboot to automatically activate the various system services that were being created. This also kicked off the basis of the automated operator infrastructure.

for some topic drift ... some number of posts about commercial time-sharing systems starting with cp67 and later migrated to vm370
https://www.garlic.com/~lynn/submain.html#timeshare

one of the early cp67 features for 7x24 commercial time-sharing systems (besides fast, automated reipl) ... was the use of the "prepare" command for terminal I/O.

typical 360 machines were leased and the datacenter paid based on the amount of time that was recorded by the processor meter. The processor meter ran anytime the CPU was running or anytime there was "active" I/O. the early introduction of commercial time-sharing systems could be expensive for 2nd and 3rd shift operation ... since the active use of the system could be less than the lease charges based on the processor meter. changing to the "prepare" ccw I/O command for terminal I/O allowed the terminal I/O to be suspended from the channel until the next character arrived (and therefore, if nothing else was going on, the processor meter would stop).

the change to the "prepare" command probably seems trivial these days ... but back then, it could represent a significant economic issue for a commercial time-sharing operation trying to offer 7x24 service.

for some other topic drift (somewhat availability/ras related) ... several years ago we were talking to one of the large financial transaction network operations. they claimed that the two primary things responsible for their one hundred percent availability over the previous six-plus years were:
• ims hot-standby
• automated operator

I think there was a study in the early 80s that showed that the primary cause of system outages had shifted from hardware related to non-hardware related (software and human "mistakes").

my wife had done her stint in POK in charge of loosely-coupled architecture, where she created the Peer-Coupled Shared Data architecture. except for the ims hot-standby people ... it didn't see much uptake until parallel sysplex. misc. past Peer-Coupled Shared Data architecture postings:
https://www.garlic.com/~lynn/submain.html#shareddata

although, one might claim that some of that was leveraged when we did the ha/cmp product
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Is symmetric key distribution equivalent to symmetric key generation?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is symmetric key distribution equivalent to symmetric key generation?
Newsgroups: sci.crypt
Date: Thu, 25 Aug 2005 10:29:25 -0600
daw@taverner.cs.berkeley.edu (David Wagner) writes:
Well, that's a different question! There are many possible ways. You might have that person's public key. You might have a secure channel through which you can request the public key. You might be able to ask some other trusted party ("Verisign") for that person's public key. That person might provide a certificate signed by some other trusted party ("Verisign"). One might use a hierarchical PKI (a la x509) or a web of trust (a la PGP). There are many answers, and which one is right is going to depend upon the application, upon the trust model, upon compatibility requirements, etc..

But whether you use a key transport protocol, or a key agreement protocol, doesn't change the need to answer the question, and doesn't change the set of answers available. How to get the other endpoint's public key is orthogonal to what kind of key exchange protocol you use.


note ... in general ... all of the public key related infrastructures have some out-of-band trust protocol.

basically the dependent or relying party ... has a trusted repository of (trusted) public keys that was loaded by some out-of-band trust process.

one can consider that originally trusted public key repositories consisted only of individual entity public keys.

the scenario of PKIs, certification authorities, and digital certificates ... was targeted at addressing the offline email scenario of the early 80s; an individual dialed their local (electronic) post office, exchanged email, hung up ... and then possibly had to deal with first-time communication from a total stranger, with no resources available for resolving any information about the stranger. this is the "letters of credit" model from the sailing ship days.

to address this situation, certification authorities were created and individuals had their trusted public key repositories loaded with public keys of certification authorities.

people wishing to have first-time communication with total strangers could go to a (well known) certification authority and register their public key along with some other information. The certification authority would validate the provided information and package it into a digital certificate that was digitally signed by the certification authority.

the individual now could digitally sign a message/document and package up the message/document, the digital signature, and their digital certificate and send off this first time communication to a stranger.

the stranger/recipient (hopefully) having a copy of the appropriate certification authority's public key in the trusted public key repository could validate the digital signature on the digital certificate, then (trusting the digital certificate) using the sender's public key (from the digital certificate) validate the (sender's) digital signature on the message/document. The stranger/recipient then could relate the message/document to the other information certified in the digital certificate.
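as a rough illustration of the flow just described (this sketch is mine, not from the original post; it assumes the pyca "cryptography" package, made-up names, and a toy "certificate" that is just the sender's public key bytes signed by the certification authority):

# toy sketch: CA-signed "certificate" = DER bytes of the sender's public key;
# the recipient's trusted repository holds only the CA public key (loaded out-of-band)
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

ca_key = ec.generate_private_key(ec.SECP256R1())        # certification authority
sender_key = ec.generate_private_key(ec.SECP256R1())    # stranger sending a first-time message

# "certification": CA signs the sender's public key bytes (real certificates carry far more)
sender_pub_bytes = sender_key.public_key().public_bytes(
    serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)
toy_certificate = (sender_pub_bytes,
                   ca_key.sign(sender_pub_bytes, ec.ECDSA(hashes.SHA256())))

# sender signs the message and transmits (message, signature, certificate)
message = b"first-time communication from a total stranger"
msg_signature = sender_key.sign(message, ec.ECDSA(hashes.SHA256()))

# recipient side: trusts only the CA public key from its trusted repository
trusted_repository = {"some-ca": ca_key.public_key()}

def verify_first_time(message, msg_signature, certificate):
    pub_bytes, ca_signature = certificate
    ca_pub = trusted_repository["some-ca"]
    try:
        ca_pub.verify(ca_signature, pub_bytes, ec.ECDSA(hashes.SHA256()))     # 1) check the certificate
        sender_pub = serialization.load_der_public_key(pub_bytes)
        sender_pub.verify(msg_signature, message, ec.ECDSA(hashes.SHA256()))  # 2) check the message
        return True
    except InvalidSignature:
        return False

print(verify_first_time(message, msg_signature, toy_certificate))   # True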

one of the issues in the early 90s with the x.509 identity certificate, was what possible certified information might be of interest to future unanticipated and unspecified strangers. there was some direction to compensate (making the digital certificate more useful) by grossly overloading the digital certificate with enormous amounts of personal information.

in the mid-90s, some number of institutions were starting to realize that digital certificates overloaded with enormous amounts of personal information represented significant privacy and liability issues. as a result, you started to see the appearance of relying-party-only certificates
https://www.garlic.com/~lynn/subpubkey.html#rpo

basically the enormous amounts of personal information were replaced with some sort of database index pointing to where all the personal information was kept.

individuals registered their public key with an institution (which was placed in a secure repository containing information about the individual), the institution then created a digital certificate containing the individual's public key and an index to the database/repository entry. a copy of this digital certificate was given to the individual.

for the transaction scenario, the individual created a transaction, digitally signed the transaction, and packaged the transaction, the digital signature and their digital certificate and transmitted it back to the institution.

the institution took the transaction, extracted the database/repository value from the transaction, retrieved the database/repository entry, and used the public key in their repository to validate the digital signature.
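a minimal sketch of that relying-party-only flow (my illustration, not from the post; made-up names, assuming the pyca "cryptography" package) ... note the verification only ever touches the institution's own repository:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# institution's repository: account index -> (customer info, registered public key)
repository = {}

def register(account_index, customer_info, public_key):
    repository[account_index] = (customer_info, public_key)

def verify_transaction(account_index, transaction_bytes, signature):
    # the on-file public key is used; any attached certificate is never consulted
    _, onfile_public_key = repository[account_index]
    try:
        onfile_public_key.verify(signature, transaction_bytes, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

# usage
customer_key = ec.generate_private_key(ec.SECP256R1())
register("acct-42", {"name": "example customer"}, customer_key.public_key())
txn = b"acct-42:debit:60 bytes or so of payment data"
sig = customer_key.sign(txn, ec.ECDSA(hashes.SHA256()))
print(verify_transaction("acct-42", txn, sig))   # True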

the digital certificate need never, ever be referenced ... and therefore it was relatively trivial to show that the digital certificate was redundant and superfluous.

The other way of showing that the digital certificate was redundant and superfluous was that the relying-party-only certificate violated the basic PKI/certification model ... which is justified by first time communication with a stranger. When the receiving party already had all the possible information ... there was no point in including the digital certificate in the transmission.

there was a 3rd approach. in the financial transaction world, the typical payment transaction is on the order of 60-80 bytes. the financial industry relying-party-only digital certificates of the mid-90s could add 4k-12k bytes of certificate overhead to the basic transaction. Not only were the relying-party-only digital certificates redundant and superfluous ... they could represent an enormous payload bloat of a factor of one hundred times.

so x9 financial standards group had an effort for "compressed" digital certificates. however, a perfectly valid information compression technique is to not include any fields that the recipient is known to already have. since it can be shown in these cases that the recipient has a superset of all fields in the digital certificate, it is possible to compress such digital certificates to zero bytes. rather than claiming it is unnecessary to include redundant and superfluous digital certificates in every transmission, an alternative is to demonstrate that it is possible to include zero-byte digital certificates in every transmission.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

What ever happened to Tandem and NonStop OS ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What ever happened to Tandem and NonStop OS ?
Newsgroups: alt.folklore.computers
Date: Thu, 25 Aug 2005 13:44:51 -0600
rpl writes:
ASCI-Q (the 8,192 processor Alpha system) is an HP SC-45. Most of the top computers on the www.top500.org list are built by the companies that make the chip; I don't recall DEC doing anything in that vein, but the (mega)multi-processor fad(?) didn't really start 'til '97 or '98ish. Before that it was mostly Cray with occasional one-offs by Big Blue.

So... what's a "supercomputer" ?


we had a start on it with ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

and then it got transferred and we were told we weren't supposed to work on anything with more than four processors. minor reference
https://www.garlic.com/~lynn/95.html#13

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Is symmetric key distribution equivalent to symmetric key generation?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is symmetric key distribution equivalent to symmetric key generation?
Newsgroups: sci.crypt
Date: Thu, 25 Aug 2005 11:44:51 -0600
re previous post:
https://www.garlic.com/~lynn/2005o.html#31 Is symmetric key distribution equivalent to symmetric key generation?

the use of PKIs, certification authorities, and digital certificates doesn't actually eliminate

1) the recipient having their own trusted public key repository that has public keys registered and loaded by some out-of-band trust process

and/or

2) the sender needing to go thru some vetting process in the registration of the sender's public key

it is just that the processes have been moved around and re-arranged.

the issue is to address the scenario where the recipient has no prior dealings with the sender, has no local repository about the sender and/or has no online mechanism to contact about information regarding the sender.

the digital certificate is a stale, static representation of a process that was performed at some time in the past ... and the receiver has no other means possible for accessing the results of that process.

the digital certificate becomes redundant and superfluous when

1) the receiver is also the party responsible for the registration and validation of the sender's information ... or

2) the receiver has a local repository representing the registration and validation of the sender's information ... or

3) the receiver has direct online access to some other trusted entity responsible for registration and validation of the sender's information.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Not enough parallelism in programming

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Not enough parallelism in programming
Newsgroups: comp.arch
Date: Thu, 25 Aug 2005 21:12:00 -0600
"JJ" writes:
I think APL has been used in business for ever, lots of job ads mentioned it. Its funny that Smalltalk-NeXTstep have Obj-C in common too, without which NeXT probably wouldn't have happened.

APL is a neat language, at one time it was also used as a hardware description language for a very old comp arch text "digital computer systems principles" by Hellerman 1967,1973. It uses APL more to describe microcode snips than to actually model hardware. A section on IBM 360,370 architecture in there too.


cambridge science center ported part of apl\360 to cms for cms\apl. apl\360 had its own multitasking, swapping monitor. typical apl\360 workspaces were 16k to 32k bytes ... and the multitasking monitor would swap the whole workspace at a time.
https://www.garlic.com/~lynn/subtopic.html#545tech

apl\360 had memory management that on every assignment assigned the next available space (in the workspace) ... and marked any previous allocation unused. when allocation reached the end of the workspace, it would do garbage collection and start all over.

moving to cms\apl, the garbage collection had to be completely reworked. cms\apl might allow workspaces of several megabytes of (paged) virtual memory. the apl\360 strategy would look like a page-thrashing program when running in a "large" virtual memory environment. the other notable thing done for cms\apl was a mechanism that allowed interfacing to system calls. this caused some consternation among the apl aficionados since it violated the purity of apl. this wasn't rectified until the introduction of shared variables as a mechanism for interfacing to system calls/functions.
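for flavor, a toy sketch (mine, not apl\360 code; invented names) of that allocate-at-the-top / garbage-collect-when-full workspace strategy. in a small real-memory workspace this is cheap; in a multi-megabyte demand-paged workspace, the allocator walking through the whole address space before each garbage collection is what looked like page thrashing:

class Workspace:
    # toy bump allocator in the spirit of apl\360: every assignment takes the next
    # free space in the workspace; when the workspace fills, garbage collect
    # (compact the live values to the bottom) and start over
    def __init__(self, size):
        self.size = size
        self.next_free = 0
        self.live = {}                    # name -> (offset, value)

    def assign(self, name, value):
        if self.next_free + len(value) > self.size:
            self.garbage_collect()
        if self.next_free + len(value) > self.size:
            raise MemoryError("WS FULL")
        self.live[name] = (self.next_free, value)   # the prior allocation becomes garbage
        self.next_free += len(value)

    def garbage_collect(self):
        offset = 0
        for name, (_, value) in sorted(self.live.items(), key=lambda kv: kv[1][0]):
            self.live[name] = (offset, value)
            offset += len(value)
        self.next_free = offset

ws = Workspace(32 * 1024)                 # a typical 16k-32k apl\360 workspace
for i in range(1000):
    ws.assign("A", b"x" * 1000)           # repeated reassignment walks through the whole workspace
# in a demand-paged multi-megabyte workspace, that walk touches every page between collections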

early on, the science center provided some "large-memory" online apl service to other parts of the corporation. one of the customers was the business people from corporate hdqtrs, which was using the large memory apl capability to do what-if scenarios using the most sensitive of corporate business information. this also presented some security challenges because of the amount of access to the science center system by students from various universities & colleges in the boston area.

there was quite a bit of apl use for things that were later done using spreadsheets.

one of the largest commercial time-sharing operations
https://www.garlic.com/~lynn/submain.html#timeshare

became the internal HONE system
https://www.garlic.com/~lynn/subtopic.html#hone

which provided world-wide online dataprocessing services to all marketing, sales, and field people. the services offered by the HONE environment were almost all implemented in APL (starting with cms\apl, evolving into apl\cms, then apl\sv, etc). one specific application developed by the science center was the performance predictor, a detailed analytical model of system operation. salesmen could obtain workload and configuration information from a customer and then ask what-if questions about changes in workload and/or configuration.

starting with 370 115/125, a salesman could no longer even submit an order w/o it having been processed by a HONE "configurator".

HONE was one of my hobbies for quite some time. when emea hdqtrs moved from westchester to la defense outside paris ... i got to do some amount of the work getting HONE up and running at the new data center (at that time, there were three new bldgs ... not completely finished, and the grounds around the bldgs were bare dirt .... landscaping work was still in progress).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Implementing schedulers in processor????

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Implementing schedulers in processor????
Newsgroups: comp.arch.embedded,comp.arch
Date: Fri, 26 Aug 2005 10:14:02 -0600
Tom Twist writes:
I'll have to disagree with you there.

According to Wikipedia : https://en.wikipedia.org/wiki/Microcode

"A microprogram is a program consisting of microcode that controls the different parts of a computer's central processing unit"

This means that the scheduler is implemented in microcode, like a "superinstruction", making it easier to implement a RTOS.


a microprogram is a program implemented in microcode and can be pretty much anything ... but typically is targeted at providing a layer between some software and the "real" hardware (frequently analogous to operating systems providing a layer between applications and the supposedly real hardware).

Recently I got some email from somebody that was asked to do a presentation on the *new* virtualization technology ... and they vaguely remembered some presentation that I gave in the 70s or 80s describing the virtualization control program as the microcode of the virtual machine.

I had done a bunch of stuff as an undergraduate in the 60s. This week in Boston, SHARE is having its 50th anniv. meeting.
http://www.share.org/

Following is part of a virtualization presentation I gave at the fall68 share meeting in Atlantic City
https://www.garlic.com/~lynn/94.html#18

One of the things I had done was a hack to the I/O interface that allowed the virtual machine to specify a special I/O operation that significantly reduced the overhead of emulated disk I/O. After I graduated and joined the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

I got beat around the head for violating the hardware architecture specification. The hardware architecture specification didn't provide/cover the way I had done it. It was then drilled into me that the hardware diagnose instruction was defined as being model specific implementation (aka undefined in the hardware architecture specification). A fiction was created for a *virtual machine* model and then a whole slew of new virtual machine specific features were implemented in microcode programs invoked by the diagnose instruction.

For the ECPS project ... the "new" 370 138/148 (not yet announced) was going to have 6kbytes of available microcode store. A detailed hot-spot analysis of the kernel was done ... and the highest used 6k bytes of kernel instruction was selected for moving to microcode. Results of that initial study
https://www.garlic.com/~lynn/94.html#21

The 6k byte cut-off represented approx. 80 percent of overall kernel cpu utilization (see above posting).
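a sketch of the selection arithmetic (my reconstruction only; the routine names and numbers are invented for illustration, not the actual study data referenced above): sort the kernel routines by cpu use and take the highest-used ones until the available microcode store is consumed:

# hypothetical kernel hot-spot profile: (routine, code bytes, percent of kernel cpu)
profile = [
    ("dispatch",       900, 22.0),
    ("free_storage",   700, 15.0),
    ("page_fault",    1100, 14.0),
    ("ccw_translate", 1600, 12.0),
    ("dasd_iosched",  1200, 10.0),
    ("untrans_irq",    800,  7.0),
    ("vtime_acct",     600,  6.0),
]

MICROCODE_STORE = 6 * 1024        # 6kbytes of available microcode store

selected, used_bytes, covered_pct = [], 0, 0.0
for routine, nbytes, cpu_pct in sorted(profile, key=lambda r: r[2], reverse=True):
    if used_bytes + nbytes <= MICROCODE_STORE:
        selected.append(routine)
        used_bytes += nbytes
        covered_pct += cpu_pct

print(selected)
print(used_bytes, "bytes selected, covering about", covered_pct, "percent of kernel cpu")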

The low & mid-range 370s ... were vertical microcode machines ... and tended to avg. 10 microcode instructions for every 370 instruction. ECPS basically defined a new hardware instruction for each block of kernel code migrated to microcode. The kernel software then effectively replaced each such sequence of kernel code with the corresponding new hardware instruction. In the ECPS scenario, the overall scheduler wasn't actually implemented in microcode ... but some specific pieces of the dispatching code were dropped into the machine microcode (just about on a byte-for-byte basis, achieving a 10-to-one speedup).

about the same time, I was also working on a 5-way SMP project ... also involving a microcoded machine.
https://www.garlic.com/~lynn/submain.html#bounce

for this, I migrated the management of virtual address space dispatching into the microcode. An interface was defined between the kernel software and the machine microcode that was basically an ordered list of dispatchable units. The kernel software put work units on the list ... the microcode of the different processors took work units off the list and executed them (using thread-safe compare&swap). This was akin to some of the stuff that the intel 432 was doing later on.
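a rough sketch of the shared dispatch-list idea (mine, not the actual microcode interface; python has no hardware compare&swap, so it is simulated with a lock here, and a simple LIFO list stands in for the real ordered list of dispatchable units):

import threading

class CASCell:
    # simulated single shared word with compare-and-swap; a real implementation
    # would use the hardware compare&swap instruction rather than a lock
    def __init__(self, value=None):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        with self._lock:
            if self._value is expected:
                self._value = new
                return True
            return False

class Node:
    def __init__(self, work):
        self.work = work
        self.next = None

head = CASCell(None)            # shared list of dispatchable units (LIFO here for brevity)

def push(work):                 # kernel software puts work units on the list
    node = Node(work)
    while True:
        old = head.load()
        node.next = old
        if head.compare_and_swap(old, node):    # retry if another processor got in first
            return

def pop():                      # each processor's "microcode" takes work units off the list
    while True:
        old = head.load()
        if old is None:
            return None
        if head.compare_and_swap(old, old.next):
            return old.work

push("address-space-17")
push("address-space-08")
print(pop(), pop(), pop())      # address-space-08 address-space-17 None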

all sorts of past posts related to microcode
https://www.garlic.com/~lynn/submain.html#mcode

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Penn Central RR computer system failure?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Penn Central RR computer system failure?
Newsgroups: alt.folklore.computers,misc.transport.rail.americas
Date: Sat, 27 Aug 2005 08:49:22 -0600
jmfbahciv writes:
Not in Massachusetts. People do not believe in honoring traffic lines painted on the road.

or any traffic reg.

i was told that a yellow light in boston means speed up. one of the first times i was driving in boston ... i stopped at an intersection just as the light turned red and the 3 cars behind me swerved around and went thru the intersection. it used to be that the lights turned green almost simultaneously with the lights in the other direction turning red ... then they started delaying the green light change ... however people started anticipating the delay in the green light ... so the delay was increased to compensate for the people who were going thru the red light anticipating that the cars in the other direction wouldn't have started.

massive traffic jams on the interstate would at least still allow first responders to travel on the shoulder (in cases involving accidents) ... until more and more people were also speeding by on the shoulder ... until the shoulder was also part of the traffic jam.

there was an article in the early 70s contrasting cal. & mass. the same week the cal. legislature passed a bill requiring (cal) state patrol have a junior col. degree ... mass. legislature defeated a bill requiring (mass) state patrol have a high school degree.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

What ever happened to Tandem and NonStop OS ?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What ever happened to Tandem and NonStop OS ?
Newsgroups: alt.folklore.computers
Date: Sun, 28 Aug 2005 10:14:51 -0600
Steve O'Hara-Smith writes:
There was a range of 88K UNIX boxes marketed by both Motorola and Philips, they went up to a quad processor box. At a PPOE we caused some surprise when it was realized that the 20 quad 88K boxes we were ordering was for an essentially single user application - we really hadn't thought of it that way - there was just a lot of data to process in a limited time :)

romp & rios were risc with no provisions for cache consistency ... i've commented that the 801 design was somewhat a reaction to hardware problems in the 70s
https://www.garlic.com/~lynn/subtopic.html#801

1) FS (future system) in the early 70s was on target to replace 360/370 and had a very, very complex instruction and machine architecture
https://www.garlic.com/~lynn/submain.html#futuresys

FS was eventually canceled w/o even being announced. in some sense, 801 was going to the opposite extreme from FS, very simple instruction and machine architecture.

2) 370 caches had very strong smp memory consistency and paid a high hardware overhead. 801 eliminated the cache consistency and high hardware overhead by not supporting smp and cache consistency

we were doing ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

... in part to get power/rios scale-up ... since 801 didn't support smp scale-up ... effectively the only approach left was doing cluster scale-up.

in any case, when somerset started (joint ibm, motorola, apple, etc effort) to produce power/pc, the executive we reported to, transferred over to head it up (he had previously come from motorola). one way of characterizing power/pc was taking the 801 core and marrying it with 88k cache & memory bus operation to support smp.

some number of past posts about (mostly 360 & 370) smp
https://www.garlic.com/~lynn/subtopic.html#smp
and a specific, heavily mcoded 370 smp project
https://www.garlic.com/~lynn/submain.html#bounce

slightly related recent post (although mostly m'code oriented)
https://www.garlic.com/~lynn/2005o.html#35 Implementing schedulers in processor????

some amount of the cluster scale-up work ... minor reference
https://www.garlic.com/~lynn/95.html#13

was based on my wife's earlier stint in POK in charge of loosely-coupled architecture (mainframe for cluster). she had come up with Peer-Coupled Shared Data architecture
https://www.garlic.com/~lynn/submain.html#shareddata

except for ims hot-standby, didn't see much (mainframe) uptake until parallel sysplex ... some misc. refs:
http://www-03.ibm.com/servers/eserver/zseries/pso/

the Peer-Coupled Shared Data was combining both availability and processing scale-up. a couple recent posts somewhat related to availability
https://www.garlic.com/~lynn/2005o.html#25 auto reIPL
https://www.garlic.com/~lynn/2005o.html#30 auto reIPL

for some slight tandem related lore. tandem moved to MIPS based chips (MIPS had been bought by SGI ... but still operated as an independent business). in the mid-90s, the executive (we had reported to him when we were doing ha/cmp, and he had moved over to head up somerset) then left to be president of MIPS.

compaq had bought both dec and tandem ... here's some past news about tandem moving from mips to alpha
http://www.answers.com/topic/tandem-1
http://www.answers.com/topic/tandem-computers
http://www.theregister.co.uk/1998/08/20/mips_chip_flip_flops_as/
http://www.theregister.co.uk/1999/01/26/a_year_ago_mips_had/

one of the people that I had worked with during system/r days (original relational/sql implementation)
https://www.garlic.com/~lynn/submain.html#systemr

had left sjr and joined tandem. some minor related refs:
https://www.garlic.com/~lynn/2001g.html#7 New IBM history book out
https://www.garlic.com/~lynn/2002k.html#39 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002o.html#73 They Got Mail: Not-So-Fond Farewells
https://www.garlic.com/~lynn/2002o.html#74 They Got Mail: Not-So-Fond Farewells
https://www.garlic.com/~lynn/2002o.html#75 They Got Mail: Not-So-Fond Farewells
https://www.garlic.com/~lynn/2004c.html#15 If there had been no MS-DOS
https://www.garlic.com/~lynn/2004l.html#28 Shipwrecks
https://www.garlic.com/~lynn/2004l.html#31 Shipwrecks
https://www.garlic.com/~lynn/2005c.html#50 [Lit.] Buffer overruns

for even more drift ... here is reference to a jan99 2day conference at tandem in cupertino, sponsored by the head of marketing
https://www.garlic.com/~lynn/aepay3.htm#riskm The Thread Between Risk Management and Information Security.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

SHARE reflections

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: SHARE reflections
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers,bit.listserv.vmesa-l
Date: Sun, 28 Aug 2005 13:20:16 -0600
Alan.C.Field writes:
SHARE was great. Often by about Thursday I'm ready to go home but not this time.

No one has mentioned the "best" give-away - an IBM Green Card.

This alone was worth the price of admission. I commented in one session "the reproduction Green card" to which someone replied "How do you know it's a reproduction?".

The cardboard is more flimsy than I recall my original was.


i still have 3-4 (360 reference) green cards (GX20-1703) as well as two 360/67 "blue" cards (229-3174)

One "blue" card, I borrowed it from "M" at the science center (it still has his name stampped on it) ... 4th floor, 545 tech sq:
https://www.garlic.com/~lynn/subtopic.html#545tech

besides doing virtual machines ... recent reference to Melinda's history & mentioning Bob Creasy's work:
https://www.garlic.com/~lynn/2005o.html#4 Robert Creasy, RIP

the internal network:
https://www.garlic.com/~lynn/subnetwork.html#internalnet

technology also used for bitnet and earn networks
https://www.garlic.com/~lynn/subnetwork.html#bitnet

Compare&Swap instruction ("CAS" is the initials of the person at the science center primarily responsible)
https://www.garlic.com/~lynn/subtopic.html#smp

GML was also invented at the science center by "G", "M" (one of my 360/67 blue cards has his name stamped on it), and "L", which then morphed into sgml, html, xml, etc (all the "markup" languages):
https://www.garlic.com/~lynn/submain.html#sgml

there are also a bunch of other "cards" ... several 370 reference "yellow cards", a number of "internal use only" 1980 REX yellow cards (before it was released as a product and renamed REXX), and an orange 1980s VMSHARE card. vmshare archives can be found at:
http://vm.marist.edu/~vmshare/

which was originally provided by tymshare starting in the 70s ... some collected posts about cp67/vm370 commercial time-sharing services
https://www.garlic.com/~lynn/submain.html#timeshare

There is also an "internal use only" AIDS Reference Summary yellow card, ZZ20-2341-4, Jan, 1973). There were growing number of APL-based applications supporting field, sales, and marketing. They were available on internal timesharing service HONE ... starting with CMS\APL on CP67 (evolving into vm370 along with APL\CMS, APL\SV, VS\APL, etc). from the card
IBM Aid Programs are proprietary and are classified as either Marketing (M), Services (S), Marketing and Services (MS), Testing (TI) or Internal (I) usage as defined below. IBM Aid Programs and their documentation may not be made available to customers. Instructions regarding the usage permitted for a specific IBM Aid Program and its output are supplied with the program.

... snip ...

in 77, the US HONE datacenters were consolidated in Cal. (as it happens not very far from the tymshare datacenter) ... and clones of the US HONE operation were springing up all over the world.
https://www.garlic.com/~lynn/subtopic.html#hone

in the late 70s, the consolidated US HONE datacenter had possibly the largest single system image cluster in the world. In part because of earthquake concerns, the consolidated US HONE datacenter was then replicated in 1980, first with a subset in Dallas, and then a 2nd replication in Boulder (with fall-over and load balancing working between the three datacenters).

Part of our ha/cmp product
https://www.garlic.com/~lynn/subtopic.html#hacmp

grew out of my wife's stint in POK in charge of loosely-coupled architecture ... where she came up with Peer-Coupled Shared Data architecture
https://www.garlic.com/~lynn/submain.html#shareddata
semi-related recent post
https://www.garlic.com/~lynn/2005o.html#32 What ever happened to Tandem and NonStop OS?
https://www.garlic.com/~lynn/2005o.html#37 What ever happened to Tandem and NonStop OS?

but also some of it grew out of the experience with clustering HONE and then having geographically replicated fall-over & load-balancing sites. misc posts on continuous availability, disaster survivability and geographic survivability
https://www.garlic.com/~lynn/submain.html#available

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

JES unification project

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: JES unification project
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 29 Aug 2005 11:12:27 -0600
bob.shannon@ibm-main.lst (Bob Shannon) writes:
There is no JES unification project at SHARE. IBM has given up on unifying JES. From the SHARE perspective it would appear to make sense to combine the JES2 and JES3 Projects, but one would wind up with a nominal JES Project Manager and two committees, one for JES2 and one for JES3. Since the JES3 Project is smaller, some of the common interest stuff e.g., print, has been moved there. The JES3 project (under Ed Jaffe) and the JES2 project (Jack Schudel) work well together. Come to Seattle and observe first hand.

before getting con'ed into going to POK to be in charge of loosely-coupled architecture ... my wife worked in the JES group and was one of the "catchers" for ASP ... having to turn out type-1 documentation and other odds & ends for the JES3 transformation.

misc past posts regarding her POK stint
https://www.garlic.com/~lynn/submain.html#shareddata

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Certificate Authority of a secured P2P network

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Certificate Authority of a secured P2P network
Newsgroups: netscape.public.mozilla.crypto
Date: Tue, 30 Aug 2005 13:42:09 -0600
Ram A Moskovitz writes:
It depends. Do you need third party identity verification? What is the value of protecting the root key (do you have a hardened key storage device if you need one)? Is privacy a concern?

an issue is what does the digital certificate represent ... if it just has some character string representing some information, a public key, and a valid digital signature from some 3rd party certification authority .... and that all digital certificates with valid digital signatures from that certification authority are treated as valid for the secured P2P network ... then possibly unanticipated digital certificates from that same 3rd party certification authority will be treated as valid (what discriminates a digital certificate for that specific secured P2P network from all digital certificates that may have been issued by that certification authority?).
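one way to make that discrimination problem concrete (my sketch, hypothetical names and placeholder values): if membership in the P2P network is really defined by a specific set of enrolled certificates, the relying code has to check something network-specific (for example, pin the fingerprints of the enrolled certificates) rather than accept "any certificate this certification authority ever signed":

import hashlib

# hypothetical: fingerprints of the certificates actually enrolled in *this* P2P network,
# maintained by whatever enrollment process the network uses (values are placeholders)
enrolled_fingerprints = {
    "9f2b...placeholder...",
}

def accept_peer(peer_cert_der: bytes, ca_signature_ok: bool) -> bool:
    # a valid CA signature alone admits every certificate that CA ever issued;
    # the extra membership check is what scopes trust to this particular network
    fingerprint = hashlib.sha256(peer_cert_der).hexdigest()
    return ca_signature_ok and fingerprint in enrolled_fingerprints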

as an aside ... certificate authority is short-hand for certification authority .... the digital certificate is a representation of the certification process performed by the certification authority ... somewhat analogous to diplomas that some people might hang on their wall. Except for some institutions called *diploma mills* ... the thing on the wall isn't a thing unto itself ... it is a representation of a specific process. It is intended for simple and/or low-value operations where the relying party has no other recourse to directly access the real information. For high value/integrity operations ... instead of relying on the representation of the process, the relying party will tend to directly access the real information.

one original design point for certification authorities and digital certificates ... was that the certification authority would certify that the entity has a valid login to the system and the permissions the entity would have while logged onto the system ... and also certify the public key that the relying party/system should use for authenticating the entity.

somebody could present a digital certificate from the correct certification authority and the relying system would allow them the corresponding access ... w/o having to maintain a list of valid logins and/or their permissions ... since the digital certificate would already carry that certified information.

the public key design point for more real-time systems would have the infrastructure registering a public key in lieu of a pin or password (for authentication) ... w/o requiring a digital certificate
https://www.garlic.com/~lynn/subpubkey.html#certless

like a radius or kerberos authentication infrastructure simply upgraded for digital signature and public key operation w/o requiring any sort of independent certification authority
https://www.garlic.com/~lynn/subpubkey.html#radius
https://www.garlic.com/~lynn/subpubkey.html#kerberos

the authentication and permissions are built into the basic system w/o requiring independent/outside certification.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Certificate Authority of a secured P2P network

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Certificate Authority of a secured P2P network
Newsgroups: netscape.public.mozilla.crypto
Date: Tue, 30 Aug 2005 16:32:56 -0600
Ram A Moskovitz writes:
I wonder why Mastercard/VISA didn't and don't choose to run everything themselves in house.

Mastercard/VISA are brand associations. also, when they started, there were possibly 30k banks, each possibly issuing and acquiring credit cards ... with their own processing.

basically there are two variables for four possible conditions:

• physical and offline
• physical and online
• electronic and offline
• electronic and online

when things first started, the operation was physical (plastic card) and offline (paper processing). there were invalidation booklets that were mailed to all merchants monthly containing all invalid/revoked/etc account numbers.

To some extent the CRLs defined for PKIs were modeled after the physical and offline paradigm invalidation booklets from the 60s. The CRLs also somewhat differentiate a PKI operation from a purely certificate manufacturing infrastructure ... a term we coined in the mid-90s when we started working with this small client/server startup that wanted to do payments
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

The problem with the 50s & 60s paradigm was the inability of the CRL approach to scale up ... the number of invalid accounts was increasing dramatically (and therefore the aggregate risk from invalid accounts), and the number of merchants was increasing dramatically. The result was that you had to send out much larger booklets to a much larger number of merchants on a much more frequent interval (i.e. risk is in part proportional to the distribution interval). Pretty soon you were looking at distributing booklets with thousands of pages to millions of merchants every couple of hrs.

Now the niche that the Certification Authorities, PKIs, and digital certificates fit into is the "electronic and offline" paradigm. The payment industry bypassed that archaic step and went directly to electronic and online; a magnetic stripe was put on the same plastic card and infrastructure was deployed to perform real-time, online transactions. Not only did they get real-time authentication and authorization but they were also able to get operations dependent on data aggregation (like credit limit, and patterns of usage that indicate fraud). The PKI model with a stale, static digital certificate would have had them imprinting the physical card ... with the card's credit limit ... but the stale, static digital certificate PKI model has no mechanism for indicating outstanding charges and/or whether the aggregate of past transactions exceeds the credit limit.

Somewhat after we had worked on this thing that has since come to be called electronic commerce ... there were a number of people suggesting that credit card transactions could be moved into the modern world by having card owners digitally sign the transaction and attach a digital certificate. Our observation was that the digital certificate attachment would have actually regressed payment transactions by 20-30 years, back to the 50s ... as opposed to moving them into the modern environment.

The x.509 identity certification business of the early 90s was starting to wonder what information a business or relying party might actually need about the person ... not being able to accurately predict what information might be needed, there was some direction to grossly overload identity certificates with enormous amounts of personal information ... in hopes that a relying party or business might find some information in the certificate of some use.

By the mid-90s, some number of institutions were starting to realize that x.509 identity certificates, grossly overloaded with enormous amounts of personal information, represented significant privacy and liability issues. As a result, you started to see some appearance of relying-party-only certificates
https://www.garlic.com/~lynn/subpubkey.html#rpo

which basically only contained some sort of repository lookup value (the repository containing all the real information) and a public key. However, it is trivial to show that such an implementation both violates the original design point justifying PKIs and digital certificates and is also redundant and superfluous.

Now with regard to the association brands. As the payment card industry transitioned to a realtime, online processing model (as opposed to the archaic offline processing model represented by PKIs, certification authorities and digital certificates), an emerging requirement was that all of the independent bank processing operations needed to be interconnected. So one of the things that you started seeing in the 70s and 80s was the associations deploying value added networks (in general, value added networks saw a big increase in the 80s ... but have since somewhat died off, having been obsoleted by the internet). The brand associations have maintained their value added network infrastructure ... even in the face of the internet having generally obsoleted most other value added network operations.

The current processing outsourcing is not by the brand associations ... which still operate their value added networks to tie together the multitude of bank datacenters (getting realtime transactions from the merchant bank to the consumer bank and back) ... but by the banks ... where they have outsourced their whole processing operation. This situation is different from outsourcing certification and trust separated from the actual transaction processing (which simple security theory indicates can lead to fraud) ... here all aspects of the online, realtime processing have been outsourced.

So looking at a digital certificate related scenario ... the ssl certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcert

this small client/server startup that wanted to do payment transactions (which is now frequently referred to as electronic commerce) had this stuff called SSL that was to be used in the payment processing. So as part of the deployment we had to go around and audit the major institutions called certification authorities (this is when we coined the term *certificate manufacturing* to distinguish the operations from the thing that is commonly called PKI in the literature).

so the justification for SSL certificates was the perceived integrity weaknesses in the domain name infrastructure and the possibility that a client might be talking to a webserver different than the webserver they thought they were talking to.

So a merchant applies for an SSL digital certificate for their webserver internet name. Then in the SSL protocol ... the browser validates the webserver's SSL digital certificate and then compares the webserver name that the client typed in against the name in the digital certificate.

It turned out there are a couple problems.

A lot of merchants found out that using SSL cut their webserver performance by 80-90 percent. As a result they started only using SSL for the payment/checkout phase of shopping. The problem is that the browser no longer checks what the client typed in against an SSL certificate. When the client gets around to payment/checkout, they click on a button ... that does the URL stuff automatically. Now if the client had been visiting a fraudulent/incorrect website (which SSL is designed to prevent), such a fraudulent website is likely to have its payment/checkout button specify a URL for a website name for which it has a valid certificate (defeating the original purpose for having SSL).

Another issue is that most certification authorities aren't actually the authoritative agency for the information being certified (the digital certificate, in turn, being a representation of that certification business process). In the SSL scenario, the SSL certification authorities have to check with the authoritative agency for domain name ownership. An issue is that the authoritative agency for domain name ownership is the domain name infrastructure ... which has the original integrity issues justifying the use of SSL.

So somewhat backed by the SSL certification authority industry, there has been a proposal that domain name owners register a public key for their domain name ... and all future correspondence is digitally signed by the corresponding private key. The domain name infrastructure then can retrieve the onfile public key to verify the digital signature (as countermeasure against various fraudulent manipulation of the domain name infrastructure) ... note this is a certificate-less operation:
https://www.garlic.com/~lynn/subpubkey.html#certless

There is also some additional benefit to the SSL certification authority industry; they can replace the time-consuming, expensive and error-prone process of acquiring identification information from the SSL certificate applicant and attempting to match it against the identification information on file in the domain name infrastructure. Instead they can have a much simpler, more reliable and less expensive authentication process ... where the SSL certificate applicant digitally signs their application and the SSL certification authority retrieves the onfile public key from the domain name infrastructure and validates the digital signature:
https://www.garlic.com/~lynn/2005o.html#40 Certificate Authority of a secured P2P network

Unfortunately this represents something of a catch-22 for the SSL certification authority industry

1) increasing the integrity of the domain name infrastructure lessens the original justification for having SSL digital certificates

2) if the SSL certification authority industry can base their whole trust root on using onfile public keys for digital signature verification (totally certificate-less), it is possible that the rest of the world could also start doing real-time, online retrieval of onfile public keys (certificate-less operation).

One could even imagine a highly optimized, certificate-less SSL protocol ... where the response to the hostname lookup not only carried the ip-address but also piggybacked any available public key and any other applicable SSL protocol information (in the same transaction).

Then in the initial client session setup with the webserver ... the client can include the (public key) encrypted symmetric session key and the actual request. The webserver then can both do session setup and process the request in a single operation.
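a rough sketch of that piggybacked setup (my illustration only; hypothetical message shapes and names, assuming the pyca "cryptography" package, with RSA-OAEP standing in for whatever key wrapping such a protocol would actually use):

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# webserver's key pair; the *public* key is what the name-lookup response would piggyback
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def lookup(hostname):
    # stand-in for the name service: returns the ip address plus the onfile public key
    return ("192.0.2.10", server_key.public_key())

# client: one message carries the wrapped session key and the actual request
ip, server_pub = lookup("www.example.com")
session_key = AESGCM.generate_key(bit_length=128)
wrapped_key = server_pub.encrypt(
    session_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))
nonce = os.urandom(12)
request = AESGCM(session_key).encrypt(nonce, b"GET /checkout HTTP/1.1", None)
first_message = (wrapped_key, nonce, request)

# webserver: session setup and request processing in a single round trip
wrapped_key, nonce, request = first_message
key = server_key.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))
print(AESGCM(key).decrypt(nonce, request, None))   # b'GET /checkout HTTP/1.1'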

Here is a scenario where improved integrity can bring any outside certification operation intended for an offline environment back in house ... drastically simplifying all aspects of the operation, moving to an online, real-time paradigm, and improving overall integrity.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Catch22. If you cannot legally be forced to sign a document etc - Tax Declaration etc etc etc

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: <lynn@garlic.com>
Newsgroups: aus.legal,aus.politics,aus.tv,misc.survivalism
Subject: Re: Catch22. If you cannot legally be forced to sign a document etc - Tax Declaration etc etc etc...
Date: Tue, 30 Aug 2005 21:35:44 -0700
Sylvia Else wrote:
How so? When you obtain a certificate for your public key from Verisign, you never disclose your private key to them - a certificate request does not include it. Since they never know what it is, they cannot forge your digital signature.

there is more than one way to skin a cat.

partially related recent post in cryptography mailing list
https://www.garlic.com/~lynn/aadsm20.htm

the basic technology is asymmetric key cryptography: basically, one key (of a key-pair) can decode what the other key (of the same key-pair) encodes; as opposed to symmetric key cryptography where the same key encrypts and decrypts the information.

there is a business process called public key where one key is identified as *public* and made available (can be registered in lieu of a password as in the radius scenario mentioned in the above referenced posting URL), and the other key is identified as *private*, kept confidential and never divulged.

there is a business process called digital signature where a hash of a message/document is calculated and then encoded with the private key. the message/document can be transmitted along with the digital signature. the recipient recalculates the hash of the message/document, decodes the digital signature with the corresponding public key and compares the two hashes (a small illustrative sketch follows the two points below). if the two hashes are the same, then the recipient assumes

1) that the message/document hasn't been modified since the digital signature

2) something you have authentication, aka the sender has access to and use of the corresponding private key.
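a minimal sketch of that sign/verify flow (mine, not from the post; assuming the pyca "cryptography" package, which performs the hash-and-encode step internally):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

key_pair = ec.generate_private_key(ec.SECP256R1())
public_key = key_pair.public_key()        # registered / made available
# the private key is kept confidential and never divulged

document = b"some message/document"
signature = key_pair.sign(document, ec.ECDSA(hashes.SHA256()))   # hash, then encode with private key

# recipient: recompute the hash and check it against the decoded signature
try:
    public_key.verify(signature, document, ec.ECDSA(hashes.SHA256()))
    print("unmodified since signing, and the sender has access to the private key")
except InvalidSignature:
    print("signature check failed")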

PKIs, certification authorities, and digital certificates address the first time communication with a total stranger scenario ... something of the "letters of credit" scenario from the sailing ship days; a couple recent postings on the subject:
https://www.garlic.com/~lynn/2005o.html#2 X509 digital certificate for offline solution
https://www.garlic.com/~lynn/2005o.html#3 The Chinese MD5 attack
https://www.garlic.com/~lynn/2005o.html#6 X509 digital certificate for offline solution
https://www.garlic.com/~lynn/2005o.html#7 X509 digital certificate for offline solution
https://www.garlic.com/~lynn/2005o.html#9 Need a HOW TO create a client certificate for partner access
https://www.garlic.com/~lynn/2005o.html#17 Smart Cards?
https://www.garlic.com/~lynn/2005o.html#31 Is symmetric key distribution equivalent to symmetric key generation?
https://www.garlic.com/~lynn/2005o.html#33 Is symmetric key distribution equivalent to symmetric key generation?

in fact, it is possible to have a perfectly valid certificate-less digital signature authentication infrastructure that has absolutely no need to resort to certification authorities
https://www.garlic.com/~lynn/subpubkey.html#certless
https://www.garlic.com/~lynn/subpubkey.html#radius
https://www.garlic.com/~lynn/subpubkey.html#kerberos

nominally, digital signatures were developed for authentication purposes. However, there have been some proposals to extend their use for purposes similar to human signatures. Human signatures carry the implication of read, understood, agreed, approved and/or authorized. However, there is nothing within the standard digital signature process that would imply any of those conditions are satisfied.

there may also be some semantic confusion because the term human signature and the term digital signature both contain the word signature

there is also a dual-use vulnerability here.

a common digital signature authentication process has the server sending some random data for the client to digitally sign and return (part of a countermeasure against replay attacks on the server). the client takes the random data and digitally signs it, returning the digital signature (for authentication purposes) w/o ever examining the random data.

if the fundamental digital signature authentication infrastructure has been compromised with the possibility of also applying digital signatures in the human signature sense ... an attacker substitutes a valid transaction or contract for random data transmitted to the client. The client then digitally signs the data w/o ever examining it, believing it to be random data as part of an authentication protocol. The result would appear to be a perfectly valid digital signature applied to a perfectly valid transaction/contract.
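one common countermeasure (my sketch, not something proposed in the original post; hypothetical names, pyca "cryptography" package) is to never sign the raw challenge bytes: bind authentication signatures to a context string that can never appear in a real transaction/contract, so an authentication response can't double as a "human signature":

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

AUTH_CONTEXT = b"authentication-challenge-v1:"   # hypothetical fixed prefix, never used for documents

def sign_challenge(private_key, server_challenge: bytes) -> bytes:
    # the client only ever signs context||challenge for authentication, so even if an
    # attacker substitutes a transaction for the "random" challenge, the resulting
    # signature is over context||transaction rather than a bare, valid transaction signature
    return private_key.sign(AUTH_CONTEXT + server_challenge, ec.ECDSA(hashes.SHA256()))

def verify_challenge(public_key, server_challenge: bytes, signature: bytes) -> None:
    public_key.verify(signature, AUTH_CONTEXT + server_challenge, ec.ECDSA(hashes.SHA256()))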

some past posts mentioning digital signature dual-use vulnerability:
https://www.garlic.com/~lynn/2005b.html#56 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#51 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005e.html#31 Public/Private key pair protection on Windows
https://www.garlic.com/~lynn/2005e.html#42 xml-security vs. native security
https://www.garlic.com/~lynn/2005g.html#46 Maximum RAM and ROM for smartcards
https://www.garlic.com/~lynn/2005.html#14 Using smart cards for signing and authorization in applets
https://www.garlic.com/~lynn/2005j.html#64 More on garbage
https://www.garlic.com/~lynn/2005k.html#56 Encryption Everywhere? (Was: Re: Ho boy! Another big one!)
https://www.garlic.com/~lynn/2005l.html#20 Newsgroups (Was Another OS/390 to z/OS 1.4 migration
https://www.garlic.com/~lynn/2005m.html#1 Creating certs for others (without their private keys)
https://www.garlic.com/~lynn/2005m.html#11 Question about authentication protocols
https://www.garlic.com/~lynn/2005o.html#3 The Chinese MD5 attack
https://www.garlic.com/~lynn/2005o.html#9 Need a HOW TO create a client certificate for partner access

another attack on PKI infrastructure related to confusing digital signatures and human signatures was the non-repudiation bit defined for x.509 digital certificates. now the nominal PKI digital certificate scenario is that you have some sort of message/document, the digital signature and the attached digital certificate. Now, since there is no protocol proving which digital certificate a sender actually attached to any specific message ... there was a non-repudiation attack.

the proposal was that clients would pay $100 per annum to a certification authority and get back a digital certificate possibly with the non-repudiation bit set. then whenever a merchant had a digitally signed transaction and could produce some corresponding digital certificate with the non-repudiation bit set, the whole transaction was taken as being equivalent to a human signature and, in any dispute, the burden of proof would shift from the merchant to the customer. part of the issue is that there is nothing in the standard PKI digital signature process that provides proof and integrity as to the actual digital certificate attached to any specific transaction ... so if a merchant could find any valid digital certificate (for the specific public key) with the non-repudiation bit set ... they could produce it as indicating the burden of proof had shifted to the consumer.

as more and more people realized that there is very little cross-over between a *digital signature* and a human signature (implying read, understood, agrees, approves, and/or authorizes), the digital certificate non-repudiation bit has fallen into disfavor.

Now a frequently occurring digital certificate is the SSL domain name digital certificate ... this supposedly allows the browser to check whether the webserver the client believes it is talking to is actually the webserver it is talking to
https://www.garlic.com/~lynn/subpubkey.html#sslcert

basically the ssl domain name digital certificate has the webserver's domain name ... and the browser checks that the domain name in the certificate is the same as in the URL that was typed in by the consumer.

there is this old adage that security is only as strong as its weakest link.

now, typically certification authorities aren't the authoritative agency for the information they are certifying (and the digital certificate is a stale, static representation of the certification business process). When a certificate request is presented, the certification process typically involves checking with the authoritative agency responsible for the validity of the information being certified.

so to get a ssl domain name digital certificate ... the certification authority has to validate that you are the correct owner of that domain ... they do this by contacting the domain name infrastructure to find out who is the correct owner of that domain. so one attack/vulnerability is called domain name hijacking ... where the domain name infrastructure is convinced to update their database with the name of some front company that has been specifically created for this purpose. Then a new ssl domain name digital certificate is applied for with the name of the front company. The PKI certification authority checks with the domain name infrastructure as to the correct owner and then generates a digital certificate for the hijacking organization for that domain name.

Neither the public key nor the private key of the rightful domain name owners have been compromised ... just that a perfectly valid digital certificate has been issued to the hijacking organization for their public/private key.

So the domain name infrastructure is possibly the weakest link in the whole ssl domain name digital certificate infrastructure ... because of various vulnerabilities like domain name hijacking. Somewhat with the backing of the ssl domain name certification authority industry, there is a proposal that domain name owners also register their public key when they obtain a domain name. Then all future communication with the domain name infrastructure is digitally signed ... and the domain name infrastructure can verify the digital signature with the onfile public key (authenticating all communication somewhat as a countermeasure to vulnerabilities like domain name hijacking) ... note that this is a certificate-less operation and requires no digital certificate:
https://www.garlic.com/~lynn/subpubkey.html#certless
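a toy sketch of that countermeasure (mine; hypothetical names, assuming the pyca "cryptography" package): the registry stores the public key at registration time, and later update requests are only honored if they verify against the onfile key:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

registry = {}   # domain -> {"owner": ..., "public_key": ...}

def register_domain(domain, owner, public_key):
    registry[domain] = {"owner": owner, "public_key": public_key}

def update_owner(domain, new_owner, signature):
    # a hijacking attempt can't produce a signature that verifies with the onfile key
    entry = registry[domain]
    try:
        entry["public_key"].verify(signature, new_owner.encode(), ec.ECDSA(hashes.SHA256()))
    except InvalidSignature:
        return False
    entry["owner"] = new_owner
    return True

owner_key = ec.generate_private_key(ec.SECP256R1())
register_domain("example.com", "rightful owner", owner_key.public_key())
good_sig = owner_key.sign(b"rightful owner llc", ec.ECDSA(hashes.SHA256()))
print(update_owner("example.com", "rightful owner llc", good_sig))   # True

attacker_key = ec.generate_private_key(ec.SECP256R1())
bad_sig = attacker_key.sign(b"front company", ec.ECDSA(hashes.SHA256()))
print(update_owner("example.com", "front company", bad_sig))         # False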

This also creates an opportunity for the certification authority industry; they can also require that all SSL domain name certificate applications be digitally signed. Then the certification authority can replace an expensive, time-consuming, and error-prone *identification* process with a much simpler, straightforward and less expensive *authentication* process (by retrieving the onfile public key from the domain name infrastructure and validating the digital signature).

Note that this does create something of a catch-22 for the certification authority industry. First it highlights that the real trust root for SSL domain name certificates is the domain name infrastructure, not the certification authority. Second, it opens up the possibility that others might find the real-time, online retrieval of onfile public keys to be useful and eliminate the need for ssl domain name certificates totally.

some misc. past posts mentioning non-repudiation:
https://www.garlic.com/~lynn/aadsm11.htm#5 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#6 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#7 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#8 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#9 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#11 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#12 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm16.htm#14 Non-repudiation (was RE: The PAIN mnemonic)
https://www.garlic.com/~lynn/aadsm16.htm#17 Non-repudiation (was RE: The PAIN mnemonic)
https://www.garlic.com/~lynn/aadsm16.htm#18 Non-repudiation (was RE: The PAIN mnemonic)
https://www.garlic.com/~lynn/aadsm16.htm#23 Non-repudiation (was RE: The PAIN mnemonic)
https://www.garlic.com/~lynn/aadsm17.htm#3 Non-repudiation (was RE: The PAIN mnemonic)
https://www.garlic.com/~lynn/aadsm17.htm#5 Non-repudiation (was RE: The PAIN mnemonic)
https://www.garlic.com/~lynn/2001c.html#30 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#34 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#39 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#40 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#41 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#42 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#43 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#44 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#45 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#46 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#47 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#50 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#51 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#52 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#54 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#56 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#57 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#58 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#59 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#60 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#72 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#73 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2005l.html#35 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005l.html#36 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005m.html#1 Creating certs for others (without their private keys)
https://www.garlic.com/~lynn/2005m.html#6 Creating certs for others (without their private keys)
https://www.garlic.com/~lynn/2005m.html#11 Question about authentication protocols
https://www.garlic.com/~lynn/2005m.html#42 public key authentication
https://www.garlic.com/~lynn/2005m.html#53 Barcode Email

What ever happened to Tandem and NonStop OS ?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What ever happened to Tandem and NonStop OS ?
Newsgroups: alt.folklore.computers
Date: Wed, 31 Aug 2005 11:09:41 -0600
Greg Menke <gregm-xyzpdq@toadmail.com> writes:
NT employs all the usual virtual memory infrastructure (not well tuned- it will be paging things around for no apparent reason even when no memory competition is present), and exploits the system MMU to protect programs from each other.

Perhaps I don't understand what "all memory available to one" means. NT doesn't implement quotas, so programs can consume all available memory - long before which point the OS slowly but surely grinds to a halt. But there's nothing stopping the user from running multiple programs - it's just difficult to manage them from an interface perspective.


when i was doing page replacement algorithms as an undergraduate in the 60s
https://www.garlic.com/~lynn/subtopic.html#wsclock

one of the principles was to perform overhead directly related to some activity and/or resource constraint.

at the time, i noticed that the version of tss/360 ... when activating a task characterized as "interactive" ... would first attempt to move all the related virtual pages from 2311 (moveable arm) disk to the 2301 (fixed head) drum before actually starting the task running ... and then when the task suspended ... would move all the related pages off the 2301 drum back to the 2311 ... even when there was no contention for 2301 space.

part of this was that i was well into dynamic adaptive algorithms at the time ... not only for scheduling virtual memory resources but also for dispatching and managing other resources. this is, in part, where the "wheeler" and "fair-share" scheduler originated
https://www.garlic.com/~lynn/subtopic.html#fairshare

... although the scheduler wasn't specifically fair-share ... it had resource policies ... where a possible default policy was fair-share.

in the very late 70s ... I got contacted by somebody from POK who had made a major change in MVS to not arbitrarily flush pages from real memory back to disk when a task quiesced/suspended, in cases when there wasn't contention for real memory. He had gotten a large corporate award for the change ... and wanted to get another award for doing the same thing to vm370. I commented that I had always done it that way ... dating back to when I first started ten years earlier. Furthermore, I had tried to convince the people responsible for the SVS (and then MVS) implementation not to do it that way back in the early 70s ... and they insisted on doing it anyway.

i may have even commented that in addition to giving an award to the person who fixed it ... possibly at least the amount of the award should be deducted from the compensation of the people who had insisted on doing it wrong in the first place.
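
purely as an illustration of the principle (this is not the MVS fix and not any vm370 code), a toy python sketch of contention-gated page-out ... the attribute names and the low-water threshold are made up:

# illustrative sketch only, not MVS/VM370 code: flush a suspended task's
# resident pages back to disk only when real storage is actually contended
def on_task_suspend(task, free_frames, total_frames, low_water=0.05):
    """contention-gated page-out: do nothing unless free real memory
    has dropped below a (made-up) low-water mark."""
    if free_frames / total_frames >= low_water:
        return []                          # no contention: leave pages resident
    # under contention: hand the task's resident pages to the replacement
    # algorithm as candidates, rather than unconditionally flushing them
    return list(task.resident_frames)

i.e. the overhead of moving a suspended task's pages is only incurred when there actually is contention for real storage.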

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Intel engineer discusses their dual-core design

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Intel engineer discusses their dual-core design
Newsgroups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch,alt.folklore.computers
Date: Wed, 31 Aug 2005 12:30:44 -0600
Bill Davidsen writes:
They sure have sold you that "you need 64 bit" hype, haven't they? That's why Intel couldn't use P-M, AMD convinced people it was necessary to have 64 bits. HT is a free way to get 15-30% more performance out of a CPU, if that's a bust I wish someone would bust my gas mileage.

however, kernel smp support has been notorious for adding 15-30% overhead ... which can result in a wash.
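
back-of-envelope with made-up numbers (not measurements): if HT buys 25% more raw thruput and the smp kernel path costs 20%, the net is about even:

ht_gain = 1.25        # assume HT delivers 25% more raw thruput
smp_overhead = 0.20   # assume the smp kernel burns 20% in locking etc
print(ht_gain * (1 - smp_overhead))   # ~1.0 ... i.e. roughly a wash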

from 30+ years ago ... there was a project to add a second i-stream to the 370/195. the issue was that the 195 drained the pipeline on branches, and most codes kept the pipeline only about half full, running at around half of peak thruput. the hope was that a 2nd i-stream could get close to doubling hardware thruput (for a wide range of codes) ... more than enough to compensate for any incremental kernel overhead of going to an smp kernel.

it was never produced. originally the 370/195 was targeted at the national labs and numerical-intensive supercomputer type stuff. however, it started to see some uptake in the TPF market (transaction processing facility ... the renamed ACP ... airline control program ... which, in addition to being used in large airline res. systems, was starting to see some deployment in large financial transaction networks, hence the name change). the codes in this market segment were much more commercially oriented ... so a 2nd thread/i-stream might benefit the customers' workloads.

the high-end TPF financial transaction market was seeing some uptake of the 195 as a growth path from the 370/168 (a 195, even running commercial codes at half peak, could still be around twice the thruput of a 168). a big stumbling block was that TPF (the operating system) didn't have SMP support and didn't get it until after the 3081 time-frame in the 80s (the 3081 product was smp-only ... but eventually they were forced to produce a reduced-price, single-cpu 3083 for the TPF market).

the other issue was that the 3033 eventually showed up on the scene, with nearly the thruput of a half-peak 195 (about even on commercial codes).

for some folklore drift, sjr was still running a 370/195 and made it available to internal shops for numerical-intensive operations. however, the batch backlog could be extremely long. Somebody at PASC claimed that they were getting 3-month turn-arounds (they eventually set up a background and checkpoint process on their own 370/145 that would absorb spare cycles off-shift ... and started getting slightly better than 3-month turn-around for the same job).

one of the applications being run by the disk division on research's 195 was air-bearing simulation ... working out the details for floating disk heads. they were also getting relatively poor turn around.

the product test lab across the street in bldg. 15 got an early 3033 engineering machine for use in disk/processor/channel product test.

the disk engineering labs (bldg 14) and the product test labs (bldg 15) had been running all their testing "stand-alone" ... scheduled machine time for a single "testcell" at a time. the problem was that standard operating systems tended to fail quickly in an environment with a lot of engineering devices being tested (MVS had an MTBF of 15 minutes in this environment).

I had undertaken to rewrite the i/o supervisor, making the operating system bullet-proof, so that they could concurrently test multiple testcells w/o requiring dedicated, scheduled stand-alone machine time (and w/o failing)
https://www.garlic.com/~lynn/subtopic.html#disk

when the 3033 went into bldg. 15, this operating system was up and running, and heavy concurrent product test was consuming something under 4-5 percent of the processor. so we set up an environment to make use of the spare cpu cycles. one of the applications we set up, to provide thousands of cpu hrs of processing, was air-bearing simulation (in support of working out details for floating disk heads).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Article: The True Value of Mainframe Security

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Article: The True Value of Mainframe Security
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 31 Aug 2005 12:42:22 -0600
jbaker314 writes:
"the first centralized security system, Customer Information Control System (CICS), was introduced on the S/390."

This is news to me. I always thought CICS was an online transaction system which was introduced on the S/370 or even earlier. JC, feel free to chime in.


I got to shoot bugs in a beta-test version of cics on a 360 in 1969.

the univ. library had an ONR grant for doing an automated library project ... and got selected to be an early cics beta-test site for the implementation.

i specifically remember shooting a bdam open bug in cics. the original code had been developed at a customer site ... and the library was using some different bdam options than had been used in the original environment.

random past bdam &/or cics posts
https://www.garlic.com/~lynn/submain.html#bdam

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Article: The True Value of Mainframe Security

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Article: The True Value of Mainframe Security
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers,bit.listserv.vmesa-l
Date: Wed, 31 Aug 2005 13:21:28 -0600
jbaker314 writes:
"the first centralized security system, Customer Information Control System (CICS), was introduced on the S/390."

This is news to me. I always thought CICS was an online transaction system which was introduced on the S/370 or even earlier. JC, feel free to chime in.


there were some high-integrity mainframe systems deployed in the 60s and 70s. among other things, they were being used for commercial timesharing services ... some number of these commercial timesharing services were selling into the financial market ... where there is an implicit requirement to safeguard customers from each other.
https://www.garlic.com/~lynn/submain.html#timeshare

note also that there was some amount of use in various gov. agencies based on similar criteria; a couple of recent posts related to the subject:
https://www.garlic.com/~lynn/2005k.html#30 Public disclosure of discovered vulnerabilities
https://www.garlic.com/~lynn/2005k.html#35 Determining processor status without IPIs

one scenario was at the cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech

which was offering cms\apl services to business people in armonk corporate hdqtrs who had loaded the most sensitive of corporate secrets and were running various business modeling applications ... while concurrently there was access to the same system by students from various colleges and univs. in the boston area.

we had a few cases of denial of service mounted by students that were quickly handled ... but i know of no data breaches.

some of the DOS attacks that were attempted were based on resource consumption. however, i had done a dynamic adaptive resource scheduler as an undergraduate in the 60s ... with the default policy being fair-share ... which tended to contain such efforts (and when it didn't, i had to find and fix any anomalies in the resource control implementation). the other DOS attacks exploited some structural characteristics that had to be quickly corrected.
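
as a toy illustration only (not the actual scheduler), a python fair-share dispatch sketch ... favor whichever runnable user is furthest below an equal share of consumed cpu, so a cpu-hogging DOS attempt is automatically throttled back toward 1/N of the machine; the attribute names are made up:

# toy fair-share dispatch sketch, not the actual cp67/vm370 scheduler:
# each user accumulates consumed cpu; the dispatcher favors whoever is
# furthest below an equal (fair) share of the total
def pick_next(users):
    """users: list of objects with .consumed (cpu seconds) and .runnable."""
    runnable = [u for u in users if u.runnable]
    if not runnable:
        return None
    total = sum(u.consumed for u in users) or 1.0
    fair = total / len(users)
    # lowest consumption relative to the fair share == furthest below fair share
    return min(runnable, key=lambda u: u.consumed / fair)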

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Article: The True Value of Mainframe Security

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Article: The True Value of Mainframe Security
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 31 Aug 2005 13:32:53 -0600
Ted MacNEIL writes:
S/360 -- 1969 (or was it '67?)

I always wonder about the reliability of an article that has such glaring errors in it.


this reference also tends to give the pre-product history
http://www.yelavich.com
note: the above cics references may eventually be removed ... archived copies:
https://web.archive.org/web/*/http://www.yelavich.com/
https://web.archive.org/web/20060325095507/http://www.yelavich.com/history/ev197003.htm
https://web.archive.org/web/20040203041346/www.yelavich.com/history/toc.htm

but as an undergraduate in '69, i was shooting bugs in the cics product beta-test that the univ. library project was involved in; previous post:
https://www.garlic.com/~lynn/2005o.html#45 Article: The True Value of Mainframe Security

misc. past posts mentioning bdam and/or cics
https://www.garlic.com/~lynn/submain.html#bdam

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
