List of Archived Posts

2001 Newsgroup Postings (03/21 - 04/20)

A future supercomputer
SSL question
"Bootstrap"
Invalid certificate on 'security' site.
A future supercomputer
Unix hard links
bunch of old RFCs recently went online
Invalid certificate on 'security' site.
Invalid certificate on 'security' site.
Invalid certificate on 'security' site.
Simpler technology
WCs Payment Processing
database (or b-tree) page sizes
on-card key generation for smart card
on-card key generation for smart card
on-card key generation for smart card
Verisign and Microsoft - oops
"Bootstrap"
Drawing entities
[Newbie] Authentication vs. Authorisation?
What is PKI?
What is PKI?
why the machine word size is in radix 8??
why the machine word size is in radix 8??
April Fools Day
Economic Factors on Automation
why the machine word size is in radix 8??
Imitation...
Very CISC Instuctions (Was: why the machine word size ...)
Economic Factors on Automation
Very CISC Instuctions (Was: why the machine word size ...)
Very CISC Instuctions (Was: why the machine word size ...)
Imitation...
Very CISC Instuctions (Was: why the machine word size ...)
Very CISC Instuctions (Was: why the machine word size ...)
Imitation...
solicit advice on purchase of digital certificate
Economic Factors on Automation
Flash and Content address memory
Economic Factors on Automation
Flash and Content address memory
solicit advice on purchase of digital certificate
IBM was/is: Imitation...
Economic Factors on Automation
IBM was/is: Imitation...
A beautiful morning in AFM.
anyone have digital certificates sample code
Just a guick remembrance of where we all came from
VTOC position
VTOC position
SSL certificate question...
OT Re: A beautiful morning in AFM.
OT Re: A beautiful morning in AFM.
April Fools Day
VM & VSE news
VM & VSE news
Pentium 4 Prefetch engine?
Impact of Internet
Very CISC Instuctions (Was: why the machine word size ...)
Pentium 4 Prefetch engine?
VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)
VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position )
OT Re: A beautiful morning in AFM.
Pentium 4 Prefetch engine?
VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position )
Pentium 4 Prefetch engine?
Pentium 4 Prefetch engine?
Pentium 4 Prefetch engine?
I/O contention
Block oriented I/O over IP
Pentium 4 Prefetch engine?
Pentium 4 Prefetch engine?
Pentium 4 Prefetch engine?
Rational basis for password policy?
Pentium 4 Prefetch engine?
April Fools Day
Changing IP addresses, was: Carrying authentication information between Web applications
Pentium 4 Prefetch engine?

A future supercomputer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A future supercomputer
Newsgroups: sci.crypt
Date: Wed, 21 Mar 2001 21:01:21 GMT
"JCA" <Jose_Castejon-Amenedo@hp.com> writes:
Let me turn your analogy upside down - in order to actually launch a rocket one must be able to build a fuselage first. But just having this skill without knowing the physical principles on which rockets are based will take one nowhere fast.

The same with raw computing power and the human brain. Humongous horsepower is probably a relatively minor part of the solution, and hence my belief that ASCI and Blue Gene are not likely to change things at all in this respect.


the counter argument is that huge amounts of excess, disposable resources result in all sorts of new innovation.

lots of innovation has gone on with computers in the past ten years that wouldn't have happened in the 60s .... in large part because of the lack of computer resources. It isn't just a single supercomputer ... it is having lots & lots of them (i.e. the processing power of most PCs today is orders of magnitude larger than what was available in the 60s, and the PCs are a whole lot more numerous).

however, rate of innovation isn't necessarily linearly proportional to the huge amount of excess disposable resources ... there is still a whole lot of brownian motion going on.

the ASCI stuff is just a lot of normal processors all ganged together

random refs:
http://www.garlic.com/~lynn/2001c.html#86
http://www.garlic.com/~lynn/2000d.html#2
http://www.garlic.com/~lynn/2000d.html#3
http://www.garlic.com/~lynn/95.html#13

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

SSL question

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: SSL question
Newsgroups: sci.crypt
Date: Wed, 21 Mar 2001 22:06:41 GMT
Paul Rubin <phr-n2001@nightsong.com> writes:
Normally the definition of a client and a server is that the client initiates sessions. If you're saying you want a remote site to initiate a session on a user's PC, no problem. You install a server program on the user's PC and a client program on the remote site.

If you're just trying to periodically update data in a browser, then the usual way is to use the html meta tag to refresh the data once a minute, or else do something similar with javascript. That's completely independent of SSL and can be done with or without SSL.


note however, most server software (i.e. software that accepts connections from remote sources) is typically cleansed from personal machines since it is frequently an avenue for exploits ... and most users aren't nominally sophisticated enuf to securely manage platforms containing software that accepts connections from remote clients. There are frequently also questions about client software that initiates sessions from a user's machine without direct end-user action.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

"Bootstrap"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "Bootstrap"
Newsgroups: alt.folklore.computers
Date: Wed, 21 Mar 2001 22:09:58 GMT
dscheidt@tumbolia.com (David Scheidt) writes:
Boots still have them. They're the things on the back that you pull on when you're trying to get them on. There's a well know phrase about picking yourself up by the boot straps, which is probably hte immediate orgin of the computer usage.

when i was a kid ... i had a number of boots with the strap in the back. now all the ones i have ... have pairs of straps on the (inside) sides.

I remember when my kids were younger ... literally lifting them off the ground trying to pull their boots on by the bootstraps. Typically I would lean over and they would put their arm around my neck so they didn't fall over when they left the ground.

the problem i've got with some of the cowboy boots with the straps on the (inside) sides ... is some of them are sewn with very stiff nylon thread that abraids the skin ... unless you have particularly heavy socks. I always have to check the socks before putting on some of the boots.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Invalid certificate on 'security' site.

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Invalid certificate on 'security' site.
Newsgroups: alt.computer.security,comp.security,comp.security.misc
Date: Wed, 21 Mar 2001 23:05:45 GMT
"Spock" writes:
Most of the worlds big companies and governments have their own public key infrastructure (PKI) with a self-signed certificate at the root. You can generally trust certificates issued by the same certificate authority (CA) as your own, and sometimes CAs can be cross-certified to extend the range of certifiable trust. In this model you can actually follow a complete chain of certificates to prove beyond a doubt that the other end of your connection is who they say they are. These systems tend to support revocation, so you can also check that there has been no change in the trust model since the certificate was issued (i.e. the employee left the company and took their keys and certificates with them).

A weaker but more widespread trust model is implemented by storing the other party's CA certificate in your software and doing a partial verification. This is how the standard browsers work. The biggest problem with this model is lack of a full and current certificate chain. It is tricky to verify that the certificate you are storing in your browser is the right one and that it hasn't been altered since you stored it... but its an easy model to implement (and the browser comes ready to support it) so nobody pays much attention to the details.


but the whole original point of doing certificates at all was that it wasn't possible to directly contact the online authority to authenticate the information; certificates were invented in order to be able to do various kinds of authentication when running offline w/o recourse to direct connections (i.e. analogous to letters of credit credentials in sailing ship days).

the reason the weaker/browser model was implemented that way is that it is what certificates were designed for ... being able to do authentication w/o having to resort to an online operation.

having direct online access for authentication information makes the use of certificates redundant and superfluous ... aka

1) it is a weaker trust model to use certificates and be offline ...

2) it is a much weaker trust model to be offline and not have anything

3) it is a stronger trust model to be able to authenticate the information online

4) but the whole point of having a certificate was being able to do offline authentication when there wasn't online access for doing authentication

5) certificates and online may not quite be an oxymoron ... but it is definitely redundant and superfluous.
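the offline case in (4) can be sketched concretely. This is a toy illustration only, NOT real crypto: textbook RSA with tiny primes, and all the names (the CA, "alice", the certificate layout) are invented. The point is just that a relying party holding nothing but the CA's public key can verify both the credential and a signed message with no online access at all:

```python
# toy illustration only -- NOT real crypto; textbook RSA with tiny primes
import hashlib, json

def keygen(p, q, e):
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))
    return (e, n), (d, n)                        # (public, private)

def sign(msg, priv):
    d, n = priv
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(h, d, n)

def verify(msg, sig, pub):
    e, n = pub
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(sig, e, n) == h

ca_pub, ca_priv = keygen(61, 53, 17)             # the certificate authority
subj_pub, subj_priv = keygen(67, 71, 13)         # a subject, e.g. a mail sender

# the CA manufactures a certificate at some time in the past:
# subject identity + subject public key, signed with the CA private key
cert_body = json.dumps({"subject": "alice", "pub": subj_pub}).encode()
cert_sig = sign(cert_body, ca_priv)

# ... later, fully OFFLINE, a relying party holding only ca_pub:
assert verify(cert_body, cert_sig, ca_pub)       # 1. the credential is genuine
embedded_pub = tuple(json.loads(cert_body)["pub"])
msg = b"greetings from alice"
msg_sig = sign(msg, subj_priv)                   # produced by the sender
assert verify(msg, msg_sig, embedded_pub)        # 2. the message authenticates
```

note the relying party never contacts anybody; it also has no way of knowing whether the certified information has since gone stale, which is exactly the trade-off in (1) vs (3).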

random refs:
http://www.garlic.com/~lynn/2001c.html#57

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

A future supercomputer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A future supercomputer
Newsgroups: sci.crypt
Date: Wed, 21 Mar 2001 23:16:45 GMT
"JCA" <Jose_Castejon-Amenedo@hp.com> writes:
Innovations that, by and large, people already knew how to do, but lacked the necessary resources. Which is not the case when it comes to artificially reproducing the capabilities of a human brain - not only we probably don't have the minimum resources for it yet but, far more crucially, we also don't have a clue how to begin to do it. The big huge amounts of computing power looming in the horizon are not likely to give us such clue on their own.

and the counter example is a lot of people having aha moments after viewing various digital visualization processes that weren't generally available (or available at all) 20 years ago ... not purely limited to how computational operations occur ... but also to how lots of other things in the world happen. The other area is correlation and regression processing of huge amounts of data ... uncovering non-intuitive relationships between various causes and effects.

And while both of the above ... digital visualization and correlation & regression processing ... have been applied to a large number of different areas of discovery ... they've also been used specifically in the area of brain research and activity (i.e. lots & lots of digital recording of brain physical operation ... and then being able to do various sorts of analytical studies as well as digital visualization of the information).

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Unix hard links

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Unix hard links
Newsgroups: comp.arch,comp.arch.storage
Date: Thu, 22 Mar 2001 00:46:59 GMT
Paul Repacholi writes:
But you can call a foofind_file(....) on any system with a suitable function underneath.

How would unix handle say a IBM Partitioned Data Set exported into its file system? What do you do with all the programs that 'know' what is or is not a legal file spec when the rules are changed. Run time, or compile time...

-- Paul Repacholi 1 Crescent Rd.,
+61 (08) 9257-1001 Kalamunda.
West Australia 6076
Raw, Cooked or Well-done, it's all half baked.


as an aside ... somebody in the early '70s wrote a vtoc, pds, & bdam emulator for CMS that allowed CMS to mount OS disks and access/operate on the data. If it can be done for CMS, it should be do'able also on other systems.

as to bdam ... a couple years ago ... we visited what is (was?) probably the largest online managed information service. It was originally designed and implemented using bdam in the late '60s and continues to run production today serving customers all over the world. there are supposedly something like 40,000 trained professionals around the world adept in looking up the information (with the service also available to a large number of other people).

the interesting thing is that they keep trying to figure out an implementation more modern and efficient than their late '60s bdam implementation ... and have yet to do it.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

bunch of old RFCs recently went online

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: bunch of old RFCs recently went online
Newsgroups: alt.folklore.computers
Date: Thu, 22 Mar 2001 15:49:36 GMT
bunch of old RFCs have just gone online in the past couple days

rfc22.txt rfc44.txt rfc91.txt rfc121.txt rfc128.txt rfc138.txt rfc160.txt rfc161.txt rfc162.txt rfc164.txt rfc166.txt rfc171.txt rfc184.txt rfc188.txt rfc189.txt rfc195.txt rfc225.txt rfc252.txt rfc255.txt rfc298.txt rfc300.txt rfc325.txt rfc343.txt rfc351.txt rfc353.txt rfc357.txt rfc367.txt rfc369.txt rfc378.txt rfc384.txt rfc392.txt

rfc384 is (aug. 1972)

OFFICIAL SITE IDENTS FOR ORGANIZATIONS IN THE ARPA NETWORK

http://tools.ietf.org/html/rfc384.txt

I know the LL-67 ... but not sure about the AMES-67. Lockheed (NASA?) had a special triplex 360/67 for a manned orbital lab project that was located in the sunnyvale area.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Invalid certificate on 'security' site.

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Invalid certificate on 'security' site.
Newsgroups: alt.computer.security,comp.security,comp.security.misc
Date: Thu, 22 Mar 2001 17:00:21 GMT
"Spock" writes:
I agree with everything you said except this last point. Certificates are valuable tools for identifying yourself and others, even when you are online. Being online gives you access to extra information used during verification, but the other benefits of using certificates remain equally valid.

when you are online all you need is a public/private key pair ... the private key signs something (like an account transaction) ... it is sent off and the relying party looks up your account from the transaction ... and then verifies that your digital signature authenticates.

the purpose of a certificate was so that an offline relying party could take the embedded public key, verify the digital signature offline and then use the credential information in the certificate as being trusted.

revocation was via certificate revocation lists ... the original idea was that they would be distributed monthly.

the target design point was for offline email, but the paradigm was somewhat the offline credit card operation from the 50s, 60s ... etc. before it went online. The CRLs were the monthly paper booklets of invalid credit card numbers.

The credit card infrastructure went online, where the information from the magstripe is used to look up the real account information and then, with access to the account information, the standard business process is performed.

By comparison, the certificate contains (effectively) a (possibly stale) copy of the online information that was built into a manufactured certificate and certified at some time in the past. The purpose of the certificate ... is being able to rely on a (stale) certified copy of the online information ... when operating in an offline mode.

The public/private key pair is used to authenticate ... the certificate is used for distribution of certified stale, static copies of some online data that can be used in an offline mode when there isn't access to online information.
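the online flow described above can be sketched as follows. This is a toy illustration only, NOT real crypto: textbook RSA with tiny primes, and the account table, account number, and transaction format are all invented. The point is that the relying party's own online record supplies the public key, so no certificate is appended:

```python
# toy illustration only -- NOT real crypto; textbook RSA with tiny primes
import hashlib

p, q = 61, 53
n = p * q                              # 3233
e = 17                                 # public exponent
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent

def toy_sign(message, priv_d, mod):
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % mod
    return pow(h, priv_d, mod)         # "sign" the (reduced) hash

def toy_verify(message, sig, pub_e, mod):
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % mod
    return pow(sig, pub_e, mod) == h   # recover the hash with the public key

# relying party's online account records: account number -> public key
accounts = {"acct-1234": (e, n)}

# client side: compose and sign a transaction -- no certificate appended
txn = b"acct-1234:debit:100.00"
sig = toy_sign(txn, d, n)

# relying party: read the account number out of the transaction, look up
# the registered public key online, verify the digital signature
acct = txn.split(b":")[0].decode()
pub_e, pub_n = accounts[acct]
assert toy_verify(txn, sig, pub_e, pub_n)
```

everything the relying party needs comes from the transaction itself plus the online account record ... which is the sense in which an appended certificate would be redundant and superfluous.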

The most trivial flavor of such a certificate ... is a relying-party-only certificate that only contains some sort of domain-specific ID ... like an account number or employee number. These are typically used because of either liability (allowing others to rely on the certified information opens an organization to liability), privacy (an identity certificate can represent serious exposure of unnecessary privacy information), and/or trust (the types of things that a business may be concerned about may not be something that some other organization can certify).

In any case, with relying-party-only certificates carrying only an id/account number, transactions related to the certificate typically have to access the related online record to obtain the up-to-date information of interest ... (in a financial situation, the current, real-time credit-limit and/or account balance, something that would get stale fast if placed in a certificate manufactured & certified at some time in the past). However, it is trivial to show that accessing the online record also makes carrying a read-only, stale copy of (possibly a subset of) that information in a certificate redundant and superfluous.

The issue in a certificate is that a copy of some (possibly subset) information from some account/ID record has been placed in a manufactured certificate at some time in the past and certified by some trusted party. The purpose of creating that certificate is so that relying parties can achieve some level of comfort when they don't have online access to the original, current & up-to-date account/ID record.

For all intents and purposes, a certificate is an implementation of a trusted distributed R/O caching database system. There is a master of the information someplace (analogous to the internet Domain Name System that is used for mapping things like www.abc.com domain name to an internet IP address of the form xx.xx.xx.xx). In the case of certificates, the Certificate Authority has the original master information and it manufactures certified R/O copies of that information for distribution ... typically at some time in the past ... which means that the information can easily become stale and/or out-of-date.

The issue is that if the information changes very infrequently and has a relatively low value ... some entity can rely on the "local" copy w/o having to resort to the original online copy (especially compared to not having any information at all when offline).
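the caching analogy can be sketched in a few lines. The account record, field names, and certificate layout here are all invented for illustration; the point is only how a certified R/O copy goes stale while the master record stays timely:

```python
# toy illustration -- account record, field names & layout are invented
import time

master = {"acct-1234": {"credit_limit": 5000}}       # the online master record

def manufacture_cert(acct):
    # certified R/O copy of the master record, stamped at manufacture time
    return {"acct": acct, "copy": dict(master[acct]), "issued": time.time()}

cert = manufacture_cert("acct-1234")                 # some time in the past
master["acct-1234"]["credit_limit"] = 100            # record changes later

# offline relying party: has to trust the (now stale) cached copy
assert cert["copy"]["credit_limit"] == 5000
# online relying party: reads the timely master record directly, which
# makes the cached copy carried in the certificate redundant
assert master["acct-1234"]["credit_limit"] == 100
```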

Various practical business problems for certificates have cropped up.

Identity certificates ... i.e. name & address ... represent a privacy exposure.

Access control certificates ... putting actual access control information in a certificate can represent a security exposure.

3rd party certificates ... the certifying party may not have access to any of the information that a business unit is interested in having certified.

General certificates ... business units may not be interested in certifying information that may be used by an unknown number of relying-parties, which opens them to an unknown amount of liability.

Stale information ... business units typically are interested in the timely, online information contained in the original record.

So businesses have tended to migrate towards relying-party-only certificates (privacy, liability, trust, availability of the information, etc). But, in effect, such instruments basically only carry an index to the original online record (rather than carrying the information in the certificate, it just contains a pointer to where the information actually exists).

Now for the redundant and superfluous part. Unless somebody is doing authentication totally independent of any business process (i.e. doing authentication just for the sake of doing an authentication operation with no associated business purpose &/or reason), the operation consists of some sort of transaction (financial, session initiation, request for access, etc). The transaction contains some amount of information, including things like an account number, employee id, userid, etc. That transaction is then digitally signed. The business unit then looks up the master record based on information in the transaction and, once it has the master copy of the data, verifies the digital signature. Also having a stale, static copy of the public key and some sort of master record identifier in an appended certificate that has also been digitally signed (which also has to be verified ... along with all the other things in the trust model) is redundant and superfluous.

The public/private key digital signature is sufficient for providing authentication. An appended certificate credential is required for providing authentication & certified information in an offline environment when it is not possible to access the original, timely, up-to-date information/record.

Certificates are perfectly fine when the business operation doesn't need online access to original, timely, & up-to-date information. Business operations that need access to original, timely, & up-to-date information ... typically have online protocols that give them such access. Online protocols that access the original, timely & up-to-date information only for the purpose of determining if stale, static copies from the past can be trusted are somewhat contrived.

Further contrived are the relying-party-only certificates that force an access to the original, timely, & up-to-date information ... it is possible for a business unit to access the original, timely & up-to-date information w/o including a relying-party-only certificate as part of the protocol.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Invalid certificate on 'security' site.

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Invalid certificate on 'security' site.
Newsgroups: alt.computer.security,comp.security,comp.security.misc
Date: Thu, 22 Mar 2001 17:14:17 GMT
alun+un@texis.com (Alun Jones) writes:
Of course, if your sole use for the certificate is to provide a public key in a portable manner, so that SSL connections can take place in an encrypted fashion, then there is not much difference between a self-signed certificate and one issued by any of the rather over-priced organisations that will offer to sign your certificates for you.

So, the question, then, comes back to whether you are using the certificate to identify the bearer, or merely to protect communications between you and the bearer.


and of course identity certificates are a huge privacy issue ... what set of information in an identity certificate is necessary to justify the cost of an identity certificate PKI, versus what information shouldn't be included in an identity certificate (like name & address) because it creates a significant & unnecessary privacy exposure for a significant percentage of business & financial applications.

And of course, one of my favorite scenarios is the server SSL domain name certificates. One of the justifications for server SSL domain name certificates (which I claim represent 99.99999999% of the current world-wide certificate authentication events) is that the domain name infrastructure has various integrity weaknesses.

However, what authoritative agency do the CAs have to go to in order to authenticate a domain name as part of manufacturing a server SSL domain name certificate? The very same domain name infrastructure.

So the proposal for improving the integrity of the domain name infrastructure (so that the CAs can rely on it for validating domain name information so they can issue a server SSL domain name certificate) is to have people register their public key when they register their domain name.

Now, that opens up two issues

1) if the domain name infrastructure integrity is improved so that the CAs can trust it, then it is likely that that level of integrity is also sufficient for everybody else (mitigating the issue of why people think there is a need for SSL domain name certificates).

2) if the domain name infrastructure has a registered copy of the public key, then the domain name infrastructure has the option of distributing a real-time copy of the public key in the same process that does the hostname/domainname resolution to ip-address (i.e. rather than the client going thru the whole SSL certificate process to obtain stale information, the public key is obtained in real time in the same process that obtains the ip-address). SSL can then be modified so that, rather than being certificate based (with stale, static information), it is real-time public key based.

i.e. SSL has two parts ... 1) domain name authentication ... which can be done with a trusted domain name infrastructure, and 2) session key exchange ... which can be done in a number of ways, including using the trusted public key supplied by the trusted domain name infrastructure.
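a toy sketch of part (2) under the proposal above. The resolver table, the names, and the toy RSA key-wrapping are all invented for illustration; real DNS and SSL use different formats and real key sizes. The point is that a single trusted resolution step hands back both the ip-address and a real-time public key, with no certificate in the exchange:

```python
# toy illustration only -- NOT the real DNS or SSL protocols; resolver
# table and textbook-RSA key-wrapping are invented for the sketch
import secrets

# server's toy RSA pair, registered when the domain name was registered
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

# the domain name infrastructure: name -> (ip-address, public key),
# both returned in the same real-time resolution step
dns = {"www.abc.com": ("10.0.0.1", (e, n))}

def resolve(name):
    return dns[name]

# part 1: one trusted lookup yields the ip AND a real-time public key
ip, (srv_e, srv_n) = resolve("www.abc.com")

# part 2: session key exchange using that key -- no certificate involved
session_key = secrets.randbelow(srv_n - 2) + 2       # toy symmetric key
wrapped = pow(session_key, srv_e, srv_n)             # sent to the server

# server side: unwrap with the private key; both ends now share the key
assert pow(wrapped, d, n) == session_key
```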

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Invalid certificate on 'security' site.

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Invalid certificate on 'security' site.
Newsgroups: alt.computer.security,comp.security,comp.security.misc
Date: Thu, 22 Mar 2001 17:40:07 GMT
"Spock" writes:
Most of the worlds big companies and governments have their own public key infrastructure (PKI) with a self-signed certificate at the root. You can generally trust certificates issued by the same certificate authority (CA)

also some discussion

GAO: Government faces obstacles in PKI security adoption

at:
http://www.garlic.com/~lynn/aepay6.htm#gaopki
http://www.garlic.com/~lynn/aepay6.htm#gaopki2
http://www.garlic.com/~lynn/aepay6.htm#gaopki3
http://www.garlic.com/~lynn/aepay6.htm#gaopki4

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Simpler technology

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Simpler technology
Newsgroups: alt.folklore.computers
Date: Fri, 23 Mar 2001 13:46:26 GMT
eugene@cse.ucsc.edu (Eugene Miya) writes:
In article <995rle$4s0@nfs0.sdrc.com>, Larry Jones wrote: > I've also heard tell that when Apple announced that they had used a Cray
> to design their latest computer (the Lisa, if memory serves), Seymour
> remarked that that was interesting as he had used an Apple (II, I think)
> to design the latest Cray.

c.s.s. FAQ

%A Marcelo A. Gumucio
%T CRI Corporate Report
%J Cray User Group 1988 Spring Proceedings
%C Minneapolis, MN
%D 1988
%P 23-28
%K 21st Meeting
%X Seymour has 6 Apple Macs (Macintosh) used to design Crays (not just one).
Q&A section.

[Gordon Bell {See the IBM panel} admits he designs his computers on Macs, too.]
[Edward Teller designs thermonuclear devices on a Mac.]

Lisa: wrong.
II: wrong, Mac.


I had a friend that did much of the programming of the cray for the human interface for the mac ... a lot of what he was doing was (over) driving the I/O to the frame buffer ... investigating a lot of human factors thresholds. Being able to operate the human interface 10x or more faster than it would nominally be ... allowed them to instrument a lot of things and vary a number of factors to see if any made a significant difference in the human performance.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

WCs Payment Processing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: WCs Payment Processing
Newsgroups: ibm.software.paytech
Date: Fri, 23 Mar 2001 16:55:37 GMT
Lance D Bader writes:
There are other options, but in this news group, we are pretty much committed to the WebSphere Payment Manager so we don't track them.

I know that there are off-line options and CyberCard options that come with WebSphere Commerce Suite. I also know that special engagement teams in the IBM Global Services division have developed other options. Of course, you could always develop an overridable function for the DoPayment task yourself.

Good luck,


is anybody in ibm looking at implementing support for the recently passed X9.59 retail payment protocol standard (it was designed for all electronic retail payments). standard is now available at the ANSI online publication store:
http://web.archive.org/web/20011215145141/http://webstore.ansi.org/ansidocstore/product.asp?sku=DSTU+X9.59-2000

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

database (or b-tree) page sizes

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: database (or b-tree) page sizes
Newsgroups: comp.arch
Date: Fri, 23 Mar 2001 18:15:07 GMT
handleym@ricochet.net (Maynard Handley) writes:
Now what can be a problem is that the 256MB that a segment addresses may be too little for the amount of sharing one wants, and depending on exactly what one does in the VM/malloc interaction one may be limited to malloc()'d blocks of memory that are smaller than 256MB. Both of these are restrictions, but rather closer to the sorts of restictions one has as a consequence of being 32-bit than restrictions derived from not having enough segments.

the issue with the 801 ROMP/RIOS 16 segment registers of 256MB segments was that it was designed for a totally different operating system environment/paradigm than the existing systems that currently use it (i.e. there was no run time separation between user & kernel space, and inline code could arbitrarily change segment register values as easily as address values in general registers).

random URL
http://www.garlic.com/~lynn/95.html#5
http://www.garlic.com/~lynn/2001c.html#84

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

on-card key generation for smart card

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: on-card key generation for smart card
Newsgroups: sci.crypt
Date: Fri, 23 Mar 2001 19:36:24 GMT
Chenghuai Lu writes:
Could anybody tell me the average time of on-card 1024-bit RSA key generation for the best smartcard application.

Thanks.

-------------


for standard 3.?mhz 7816 chips ... i've seen times of 8 minutes for 1024bit key generation.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

on-card key generation for smart card

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: on-card key generation for smart card
Newsgroups: sci.crypt
Date: Fri, 23 Mar 2001 22:20:42 GMT
Paul Rubin <phr-n2001@nightsong.com> writes:
Chenghuai Lu writes: Could anybody tell me the average time of on-card 1024-bit RSA key generation for the best smartcard application.

Thanks.

The cards I've been using can do it in under a minute, and I doubt those are the fastest. 8 minutes is ridiculous.


crypto accelerators are supposed to speed things up by a factor of 10 ... so that may be about right. there is also a big difference between 8bit chips and 16bit chips ... and what kind of random number generator is available in the card (I've heard of tests done on a lot of the 8bit cards where they are power-cycled several thousand times and the operation performed again and the results recorded ... and possibly 30% of the results are the same).

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

on-card key generation for smart card

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: on-card key generation for smart card
Newsgroups: sci.crypt
Date: Sat, 24 Mar 2001 16:23:32 GMT
Daniel James writes:
I have done APDU-level work with some of GemPlus's RSA smartcards. Their GPK4000 card generates a 1024-bit keyset in 160 seconds 90% of the time - the remaining 10% of the time you get an "operation not complete" error code and have to start again. Their newer GPK8000 cards - which are said to perform the keygen on-card - typically generate a keyset in less than 10 seconds using GemPKCS (I've not had occasion to perform a keygen operation at APDU level, but I have examined the access control attributes on the key files and I don't think this is "faked").

in general, key gen is characteristic of the chip used ... and frequently it is hard to get the chip specifications from the card vendor ... sometimes because they may source chips from a number of different chip vendors for the same card ... and the chips may have different characteristics.

typically the issues are 8bit chip or 16bit chip ... or in some cases newer 32bit chips, the speed the chip is running at (frequently 3.?MHz, although newer chips are sometimes 10-15MHz), whether there is a crypto accelerator and what kind, and the quality of the random number generator.

the vast majority of smartcards in the market are 8bit chips, 3.?mhz, no crypto-accelerator, very poor random number quality and 8mins for 1024bit key-pair.

the circuit size of a 1024bit rsa crypto accelerator giving a 10x speedup has been on the order of, or larger than, many of the 8bit chips in the market.

I don't believe i've seen any such accelerator in 8bit chips ... so it is a higher end, more expensive chip. Furthermore, for keygen it doesn't do much good unless there is a relatively decent random number generator ... which also makes the chip more expensive.
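Most of that keygen time goes into the random search for two large primes (every candidate must be drawn from the random number generator and tested for primality), which is why chip speed and random number quality dominate. A toy-sized sketch of the process (illustration only; the key size and function names here are mine, and real cards use 512-bit primes and hardware RNGs):

```python
import secrets

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def gen_prime(bits):
    """Search random odd candidates until one passes the primality test."""
    while True:
        cand = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(cand):
            return cand

def rsa_keygen(bits=128, e=65537):
    """Toy RSA key generation: two random primes, modulus, private exponent."""
    while True:
        p, q = gen_prime(bits // 2), gen_prime(bits // 2)
        phi = (p - 1) * (q - 1)
        if p != q and phi % e != 0:   # e is prime, so this ensures gcd(e, phi) == 1
            break
    d = pow(e, -1, phi)               # modular inverse (Python 3.8+)
    return (p * q, e), d
```

On a desktop this runs in well under a second; on a 3.?MHz 8bit core with no accelerator, the same modular arithmetic is what stretches into minutes.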

Now, one of the interesting things in the arena of authentication with public key digital signatures is the trade-off of RSA digital signatures vis-a-vis DSS digital signatures.

Effectively, RSA digital signatures have relied on a random nonce in the data being signed. Smartcards have tended towards RSA digital signature implementations because the PC or other unit creating the message can be relied on to have a much better random number generator ... so that the random nonce is done as part of composing the message (rather than in the card as part of generating the signature).

One of the reasons that you tend to see fewer DSS-based smartcard implementations is that DSS requires the random number as part of the digital signature process (in much the same way that an oncard quality random number is needed for oncard keygen, DSS also requires an oncard quality random number for every signature ... aka ... rather than relying on an outside agency to insert a random number in the body of the message, DSS incorporates the random number into the actual digital signing process). DSS signed messages can be 20bytes shorter (no random nonce) but the resulting signature is 20bytes longer.
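The per-signature random requirement shows up directly in the DSS equations. A toy sketch over deliberately tiny parameters (p=607, q=101, g=64 are mine, for illustration; real DSS uses large primes), showing where the fresh random k enters every signature:

```python
import hashlib
import secrets

# Toy DSA group: 607 = 6*101 + 1, and 64 = 2**6 has order 101 mod 607.
P, Q, G = 607, 101, 64

def h(msg):
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % Q

def keygen():
    x = secrets.randbelow(Q - 1) + 1      # private key
    return x, pow(G, x, P)                # (private, public)

def sign(msg, x):
    # DSS needs a FRESH random k for every signature -- this is the
    # on-card random-number requirement discussed above.
    while True:
        k = secrets.randbelow(Q - 1) + 1
        r = pow(G, k, P) % Q
        if r == 0:
            continue
        s = (pow(k, -1, Q) * (h(msg) + x * r)) % Q
        if s != 0:
            return r, s

def verify(msg, sig, y):
    r, s = sig
    if not (0 < r < Q and 0 < s < Q):
        return False
    w = pow(s, -1, Q)
    u1, u2 = (h(msg) * w) % Q, (r * w) % Q
    return (pow(G, u1, P) * pow(y, u2, P) % P) % Q == r
```

An RSA signature over a fixed message needs no such per-signature randomness; here the random k is inside `sign` itself, which is why the card needs its own quality random number source.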

The 8bit chips with external keygen, no crypto accelerator, and poor quality random numbers could implement digital signature authentication functions ... relying on an external agency to reliably provide the random number in the body of the message (and to reliably do the original keygen offcard).

Given a quality random number source on the card (needed in any case for oncard keygen), DSS becomes much more practical and also reduces a possible attack where a card is fed messages that don't have the requisite random nonce.

Also, having a chip with a quality random number (sufficient for doing on-card keygen) could also be used to shift from a RSA-based signature to a DSS-based signature (minimizing card's integrity dependency on external sources).

And finally, EC-DSS with elliptic curve keys doesn't need the huge circuit area needed for the 1024bit crypto-accelerator function. i.e. if you have a quality on-card random number generator sufficient for on-card keygen, that also makes the card practical for DSS (& possibly minimizes the infrastructure dependency on having an external source provide the card with messages incorporating a random nonce); having a quality random number for DSS also enables EC-DSS ... which can eliminate the requirement for the large circuit area of the 1024bit crypto accelerator.

random url:
http://www.garlic.com/~lynn/aadsm2.htm#straw
http://www.garlic.com/~lynn/99.html#224
http://lists.commerce.net/archives/ansi-epay/199912/jpg00000.jpg
http://web.archive.org/web/20020228233550/http://lists.commerce.net/archives/ansi-epay/199912/jpg00000.jpg

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Verisign and Microsoft - oops

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Verisign and Microsoft - oops
Newsgroups: sci.crypt
Date: Sun, 25 Mar 2001 16:09:39 GMT
vjs@calcite.rhyolite.com (Vernon Schryver) writes:
It takes more than $200 to ensure that I speak for rhyolite.com and that I am me. Consider the cases where someone would pay that cost plus the costs to operate the servers plus a profit to justify a $33/share price for a stock that lost $19/share and has a mysterious book value (at least to http://www.quicken.com/investments/stats/?symbol=VRSN ). Don't all of those cases have cheaper and more secure alternatives, such as exchanging keys in person?

one of the problems with TTP certificate manufacturing (a term i coined several years ago to highlight the fact that a lot of the references to PKIs were really talking about just certificate manufacturing ... not about real infrastructure) is that the TTP has to cover the costs of an independent, stand-alone, complex, robust data processing complex and services purely out of fees charged for trust. Many business operations dealing in trust do it in conjunction with other types of operation (a lot of cost-sharing across the infrastructure).

random refs:
http://www.garlic.com/~lynn/aepay2.htm#fed
http://www.garlic.com/~lynn/aadsm3.htm#kiss10
http://www.garlic.com/~lynn/aepay3.htm#openclose

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

"Bootstrap"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "Bootstrap"
Newsgroups: alt.folklore.computers
Date: Sun, 25 Mar 2001 19:26:26 GMT
Anne & Lynn Wheeler writes:
the problem i've got with some of the cowboy boots with the straps on the (inside) sides ... is some of them are sewn with very stiff nylon thread that abrades the skin ... unless you have particularly heavy socks. I always have to check the socks before putting on some of the boots.

ok, a little OT ....

how many out there have boot-cut jeans?

how many have a boot-cut tux?

how many were married in a boot-cut tux (& boots)?

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Drawing entities

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Drawing entities
Newsgroups: comp.arch,comp.arch.storage
Date: Sun, 25 Mar 2001 20:48:28 GMT
John Bayko <"jbayko "@sk.sympatico.ca> writes:
Jeff Epler wrote: On Wed, 21 Mar 2001 14:48:22 +0300, Maxim S. Shatskih wrote:

I despise the idea of having 3 entities (Display, Drawable and GC) to describe the single thing - the device I'm drawing on. GDI uses a _single_ HDC thing for this. Much more sane.

Surely they're separate entities.

If you want to display a crayon drawing on some refrigerator somewhere,

Display     Refrigerator
Drawable    Sheet of paper
GC          Crayon

If the refrigerator is running Java, does it have automatic garbage collection?


you probably also need refrigerator magnets ... unless you eliminate the paper and use the crayon on the refrigerator ... part of magnetic storage.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

[Newbie] Authentication vs. Authorisation?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Newbie] Authentication vs. Authorisation?
Newsgroups: comp.security.misc
Date: Thu, 29 Mar 2001 16:21:16 GMT
"Ian Graham" <egertona-a-a-remove_for_ real_address-a-a-a-agraham@sympatico.ca> writes:
You are right the card does not individually identify the user. But it does in a sense identify that you are a user that has paid to use the system. This may be false, in that the user may have found or stolen the card. But as a phone company you do not care, seeing as someone paid for the card. In this case identification and authorization are rolled into one inseparable package.

You can not have a useful authorisation scheme without some form of identification. Think of the example: The first person comes: you have no idea who this person is. How do you decide what access to give? Lets say you give them access to all the files. What happens when the second person comes? Either you give them the same access or you randomly select some other access level to be provided. There probably are a few niches where truly anonymous systems have their use, but in the vast majority of systems you have to have some way of identifying the user, whether it be at the individual level or the group level.


a lot of times authentication is verifying that it is the entity that is authorized to perform a specific operation ... like somebody that is using a valid phone card is presumably somebody that has paid for the phone card (& service), and therefore authorized to use the service.

identity ... like in identity certificates ... tends to involve some set of characteristics that are independent of context ... like name, address, etc (and tends to be independent of whether or not the entity is entitled to the service or function). especially in retail situations this represents serious privacy issues (i.e. the push to remove names from payment cards ... so that electronic transactions are as anonymous as cash).

Because name/address/etc (identity) tend to be independent of whether or not the entity is actually entitled/authorized for the service/function (especially in retail and other settings) ... and there are technologies available for authenticating w/o having to identify, there are bigger & bigger pushes to eliminate such unnecessary compromises of privacy.

Conversely, part of the reason that identity theft is such an issue ... is the use of identity related information in making the implicit jump to assumed authorization ... w/o using technologies that more directly authenticate whether the entity is entitled to the service/function (harvesting identity information is sufficient for being able to fraudulently obtain access to services/functions).

Some of this goes back to various issues associated with 3-factor authentication: 1) something you have, 2) something you know, and 3) something you are. Identity theft is possible by harvesting identity related information and then being able to demonstrate it in something you know situations (a PIN may be a unique something you know for accessing a specific service, but frequently "mother's maiden name" may suffice also ... which represents some generic identity-related information that can be more easily harvested).

Eliminating identity-related information for authentication 1) improves privacy and 2) minimizes the fraudulent benefits of harvesting such information.

--
Anne & Lynn Wheeler | lynn@garlic.com, http://www.garlic.com/~lynn/

What is PKI?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What is PKI?
Newsgroups: comp.security.misc
Date: Tue, 27 Mar 2001 16:16:04 GMT
Peterson2 writes:
Dear all clever people,

Currently, I have a project to implement a simple client authentication mechanism using digital certificates. And the digital certificate must be stored in an external storage medium to make it portable, e.g. floppy disk (to make it simple first). Some mechanism (e.g. timestamp) must be used to prevent replay attacks. However, I am not very familiar with what happens during the authentication process. I have read several related articles and am confused by the large amount of security-related technologies such as one-way hash function, digital certificate, digital signature, timestamp, encryption/decryption, private key.


Simple client authentication using public key typically has a client with a public/private key pair. The client composes a message ... with some information like userid and possibly something else ... and then digitally signs the message. The purpose of the digital signature is to determine whether or not the message was modified in transit and who originated the message. The message and the appended digital signature are transmitted to the relying party or server for authentication.

The issue in a PKI is how to manage distributed public keys ... the mechanism by which the server/relying party reliably obtains the client's public key, which will be used in authenticating the client's digital signature (and therefore authenticating the client's transmitted message).

One of the ways the server/relying party reliably obtains the client's public key is via a digital certificate created by a trusted third party. The trusted third party manufactures a digital certificate containing the client's public key along with some other information that is relevant to the particular situation and signs the digital certificate with the TTP's private key. The server/relying party has the TTP's public key on file in an account record someplace ... where the TTP's public key was obtained by some reliable process.

In this TTP/certificate authority mode of digital signatures, the client composes the message with some relevant information (like userid, account number, date/time, etc), digitally signs the message with their private key, and then sends the message appended with the digital signature and the relevant digital certificate.

The server/relying party receives the combined message, verifies the digital certificate with the public key of the TTP/CA on file, extracts the public key from the certificate, verifies the signed message, and then compares something in the signed message with something in the certificate as well as looking up some account record at the server having to do with the client (i.e. anybody in the world could send a client message to your server, correctly signed, with a valid digital signature ... and it still might not be a valid client, it might just be some random person someplace in the world; aka even after all the digital signature, TTP, certificate stuff ... there still has to be some indication that the request corresponds to a valid client).
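The certificate flow just described can be sketched end-to-end. The toy RSA keys, helper names, and the "client1" subject below are hypothetical, chosen only so the whole exchange is runnable in a few lines:

```python
import hashlib
import json

def h(data):                      # hash a byte string to an integer
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def keygen(p, q, e):              # toy RSA pair from fixed small primes
    n, phi = p * q, (p - 1) * (q - 1)
    return (n, e), pow(e, -1, phi)

def sign(data, priv, pub):
    n, _ = pub
    return pow(h(data) % n, priv, n)

def check(data, sig, pub):
    n, e = pub
    return pow(sig, e, n) == h(data) % n

# hypothetical TTP/CA and client keys (toy-sized, illustration only)
ttp_pub, ttp_priv = keygen(61, 53, 17)
cli_pub, cli_priv = keygen(89, 97, 5)

# the TTP manufactures a certificate binding the client's public key
cert_body = json.dumps({"subject": "client1", "pubkey": cli_pub}).encode()
cert_sig = sign(cert_body, ttp_priv, ttp_pub)

# the client composes and signs a message, then sends
# message + signature + certificate to the server
msg = b"userid=client1 date=2001-03-27 request=login"
msg_sig = sign(msg, cli_priv, cli_pub)

# server/relying party: verify the cert with the TTP key on file,
# extract the client key from the cert, verify the message signature
def server_accepts(msg, msg_sig, cert_body, cert_sig):
    if not check(cert_body, cert_sig, ttp_pub):
        return False
    key = tuple(json.loads(cert_body)["pubkey"])
    return check(msg, msg_sig, key)
```

The remaining step in the text (checking the request against a valid-client account record) is deliberately left out of the sketch; that is exactly the part the certificate alone does not give you.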

To make it a real PKI, the public keys still have to be managed ... i.e. whether the public key of the client is still acceptable, the public key of the certificate authority is still acceptable, the particular client is still acceptable, etc. The majority of the CAs actually aren't PKIs ... in the sense they actually don't provide for management of the public keys in the infrastructure ... they purely perform the role of certificate manufacturing.

A simpler PKI for managing public keys is something that I refer to as account authority digital signature or AADS (as opposed to CADS or certification authority digital signature). This can be implemented in conjunction with something like RADIUS (there was a demo of an AADS RADIUS at PC/EXPO a couple years ago in NYC).

Possibly 99.99999% of client authentication that goes on around the world today involves RADIUS ... usually in userid/password form. However, standard RADIUS supports other forms of authentication and it is relatively straight-forward to modify RADIUS to support account authority digital signatures.

In a typical RADIUS scenario, your ISP registers your userid and your selected password for valid clients. They manage the authentication material as to valid clients, valid passwords, etc. In the AADS scenario, an account may be flagged as having a public key registered instead of a password. The same administrative interface for managing valid userids and valid passwords is then available for managing clients, userids, and public keys (aka a real PKI in that it has real administrative support for management of public keys as opposed to simple certificate manufacturing).

In this AADS RADIUS scenario, the client creates a message with userid, date/time, etc, digitally signs the message with the client's private key, and transmits the message with the appended digital signature (and no certificate) to the server. The server pulls the userid out of the message, requests the corresponding information from RADIUS and, using the client's public key returned from RADIUS, validates the client's digital signature ... and it includes real key management support for a PKI.
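A minimal sketch of that AADS flow, with a plain dict standing in for the RADIUS account database and a toy-sized RSA pair (all names, account entries, and key sizes here are mine, for illustration only):

```python
import hashlib

def h(data, n):
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

# toy RSA pair for the client (hypothetical, illustration-sized)
N, E, D = 61 * 53, 17, pow(17, -1, 60 * 52)

# the account database that RADIUS would manage: per-account
# authentication material -- a password for one account, a
# registered public key (flagged as such) for another
accounts = {
    "alice":   {"auth": "password", "secret": "hunter2"},
    "client1": {"auth": "pubkey",   "pubkey": (N, E)},
}

def radius_authenticate(userid, msg, sig):
    """Server side: pull the userid's record, select the registered
    mechanism, and (for pubkey accounts) verify the digital signature
    with the public key on file -- no certificate involved."""
    rec = accounts.get(userid)
    if rec is None:
        return False
    if rec["auth"] == "pubkey":
        n, e = rec["pubkey"]
        return pow(sig, e, n) == h(msg, n)
    return False          # password path elided in this sketch

# client: compose message, sign with private key, send (no certificate)
msg = b"userid=client1 date=2001-03-28 request=connect"
sig = pow(h(msg, N), D, N)
```

The administrative point is in the `accounts` table: password accounts and public-key accounts co-exist side-by-side under one management interface, which is what makes the key management a real PKI rather than certificate manufacturing.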

--
Anne & Lynn Wheeler | lynn@garlic.com, http://www.garlic.com/~lynn/

What is PKI?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What is PKI?
Newsgroups: comp.security.misc
Date: Wed, 28 Mar 2001 00:07:15 GMT
Peterson2 writes:
Don't suggest that I use some existing authentication products as this project is to practice the use of different security-related technologies (mentioned above).

note that the advantage of the RADIUS infrastructure is that it already has support for doing different kinds of authentication on an account by account basis. Adding digital signature authentication (and whatever other authentication mechanisms you are interested in) would allow them all to co-exist within the same administrative infrastructure ... and specifically select the mechanism on an account by account basis.

While RADIUS has been primarily used by ISPs for initial connection authentication ... it is a generalized IETF standard and could also be supported by webservers and any number of other infrastructures for managing authentication.

Typically web servers have stub interface for implementing client authentication. Frequently this has been done with local RYO software that accesses a "flat" userid/password (or account/password) file. A much better exercise would be to implement a webserver client authentication using the RADIUS protocol and then an operation could manage all of their authentication requirements within the same general framework ... allowing password, digital signature and other forms of authentication all to co-exist simultaneously side-by-side (within the same administrative infrastructure).

--
Anne & Lynn Wheeler | lynn@garlic.com, http://www.garlic.com/~lynn/

why the machine word size is in radix 8??

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: why the machine word size is in radix 8??
Newsgroups: alt.folklore.computers
Date: Sat, 31 Mar 2001 15:04:59 GMT
ab528@FreeNet.Carleton.CA (Heinz W. Wiggeshoff) writes:
Yes, I know, I have some 370 PoP manuals too. When I posted that, the thought was - did the EDMK actually make it into microcode for the lower-end 360's? Or was it simulated (like the extended precision divide on the model 85, IIRC)? I once studied ED and EDMK closely and found it hard to believe that they were programmed at the gate level. Of course, when a company has a workforce of sufficient size ...

it was in at least 360/30, 360/50, and 360/65/67 that I used (& I have no reason to believe it wasn't on the 360/40).

65/67 & below, all the machines were microcoded and that made it relatively easy to implement such instructions ... it was the high-end machines ... 75 and above ... that tended not to have the full complement of instructions and could require software trap/simulation (modulo the 360/44).

the lower end machines tended to be more commercial, where cobol, ED, EDMK, decimal instructions, etc ... were more significant. The higher end machines ... 75 and above ... tended to be more numerically intensive and tended to short-change some of the decimal & related instructions.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

why the machine word size is in radix 8??

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: why the machine word size is in radix 8??
Newsgroups: alt.folklore.computers
Date: Sat, 31 Mar 2001 15:24:09 GMT
Anne & Lynn Wheeler writes:
it was in at least 360/30, 360/50, and 360/65/67 that I used (& I have no reason to believe it wasn't on the 360/40).

this included using ED and TRT and various decimal instructions in a "monitor" that I wrote as an undergraduate and ran on 360/30 (there was no software running in the machine other than what was booted in my monitor).

For another undergraduate activity, I had available all the source of an IBM operating system for the 360/67 and I extensively modified and rewrote major sections.

The only low-level trap software in that system that took a "PROG1" exception from the kernel (i.e. undefined instruction interrupt) and performed simulation was for the SLT RPQ instruction defined by Lincoln Labs. Lincoln Labs had defined a search list hardware instruction that was available as a special RPQ ... some versions of the operating system kernel were modified to use the instruction and a simulator was provided for machines that hadn't installed the RPQ.

random refs:
http://www.garlic.com/~lynn/94.html#2
http://www.garlic.com/~lynn/2000d.html#47
http://www.garlic.com/~lynn/2001c.html#15

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

April Fools Day

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: April Fools Day
Newsgroups: alt.folklore.computers
Date: Sat, 31 Mar 2001 15:37:01 GMT
jones@cs.uiowa.edu (Douglas W. Jones,201H MLH,3193350740,3193382879) writes:
Excellent point. I have saved usenet postings that I made back in the 1950's, however.

(The time and date clock battery in my machine died, and when I replaced it, the date and time of day came up rather far in the past. So, I immediately posted some messages to Usenet, then set about correcting the problem. I'm a bit surprised that UNIX is quite willing to believe that it's running that long ago.)


approx. 1980, i once accidentally specified a master nickname file as a mailing list and sent an email to something like 27,000 people.

random refs:
http://www.garlic.com/~lynn/internet.htm#0
http://www.garlic.com/~lynn/internet.htm#22
http://www.garlic.com/~lynn/2000c.html#46

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Economic Factors on Automation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Economic Factors on Automation
Newsgroups: comp.robotics.misc,comp.ai.philosophy,sci.econ,alt.folklore.computers
Date: Sat, 31 Mar 2001 22:30:09 GMT
Edward Flaherty writes:
More automation almost always creates more jobs than it destroys, at least in the aggregate. The US economy is more automated today compared to 1900 by a factor of -- I can't count that high -- yet there are clearly lots and lots of jobs. The microeconomics works something like this:

i think that it used to be that something like 6 out of 7 people were involved in (barely enuf) food production ... then 1 in 7 and now possibly 1 in 49 ... mostly due to mechanical stuff.

In that sense a lot of people have lost their jobs in the most basic forms of food production ... and at the same time there is significantly more food production.

I can't say that fewer people starve to death ... there have been some numbers that indicate that the human tendency is to always produce more people than there is food ... if there is significantly more food ... it just takes longer for there to be more people than there is food supply (resulting eventually in a much larger number of people starving to death). The analogy is computer programs evolving to consume all available (hardware) resources.

In any case, most of the nearly 40+ out of 49 people that used to be involved in basic day-to-day food production seem to now be doing something else.

More recently, there have been more (relatively) short-term dislocations. The 80%-99% of the population that were dedicated to food-production for the past hundreds/thousands of years ... obviously had to learn some other occupation. Some of the more recent (industrial) occupations that might have only spanned tens of years (rather than thousands) would have necessitated retraining programs within generations (rather than across generations) ... aka employment obsolescence and corresponding retraining has more individual impact if it is occurring to the same individuals within generations rather than different individuals across generations.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

why the machine word size is in radix 8??

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: why the machine word size is in radix 8??
Newsgroups: alt.folklore.computers
Date: Sun, 01 Apr 2001 00:25:30 GMT
Charles Richmond writes:
Also, IMHO, there were some mighty good designers that created the 360/370 instruction set. Sure, there may have been some decisions that you might not like (i.e., literal table, no immediate instructions), but the instruction set fit together pretty well. In hindsight, I am amazed that anyone would build a computer without hardware stack support for subroutine linkage, but go figure... Business seemed to eat up the 360/370 family and like it.

lot of 360 programs were non-reentrant and just set aside a static area internal to the program as the "stack" (save area) for use by called subroutines. The stack was then the thread thru these internally allocated dataspaces. re-entrant routines would incur the additional overhead of dynamically allocating space for the "stack-space use" of routines that they called (i.e. they would allocate the space at entry and would utilize the same space across all routines that they might happen to call).
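That save-area threading can be sketched as follows (the class and field names are mine, not the actual IBM linkage-convention layout): each routine owns a save area, the callee records the caller's save-area address at entry, and the resulting backward chain plays the role of a call stack.

```python
class SaveArea:
    """One routine's save area -- statically allocated for a
    non-reentrant routine."""
    def __init__(self, owner):
        self.owner = owner        # routine the area belongs to
        self.back = None          # chain to the caller's save area
        self.registers = None     # caller's registers saved here

def call(callee_area, caller_area, registers):
    """What the callee's entry linkage does: save the caller's state
    and chain the save areas together."""
    callee_area.registers = registers
    callee_area.back = caller_area

# non-reentrant routines: one STATIC save area per routine
main_sa = SaveArea("MAIN")
suba_sa = SaveArea("SUBA")
subb_sa = SaveArea("SUBB")
call(suba_sa, main_sa, {"r14": "return addr in MAIN"})
call(subb_sa, suba_sa, {"r14": "return addr in SUBA"})

def backtrace(area):
    """Walk the 'stack': the thread through the internally
    allocated save areas."""
    chain = []
    while area is not None:
        chain.append(area.owner)
        area = area.back
    return chain
```

A reentrant routine would allocate its save area dynamically at entry instead of using the static one, which is the extra overhead mentioned above.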

the "owners" of the 360/370 instruction architecture enforced a very strong discipline across the company with regard to consistency, applicability, usability, and justification.

for instance, in order to get compare&swap into the architecture they required that a paradigm be invented that made compare&swap applicable to uniprocessor operation as well as multiprocessor operation. That resulted in the compare&swap paradigm definition for multi-threaded/tasking critical code sections (even when running in single processor configurations).
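The uniprocessor-applicable paradigm is the fetch/compute/compare&swap/retry loop. A sketch in Python, where a lock simulates the atomicity guarantee the hardware instruction provides (names are mine, for illustration):

```python
import threading

class Word:
    """A storage word with a simulated atomic compare-and-swap.
    The hardware instruction is atomic; here a lock stands in for
    that hardware guarantee."""
    def __init__(self, value=0):
        self.value = value
        self._hw = threading.Lock()

    def compare_and_swap(self, expected, new):
        with self._hw:            # models the instruction's atomicity
            if self.value == expected:
                self.value = new
                return True
            return False

counter = Word(0)

def add_one():
    # the compare&swap usage paradigm: fetch the old value, compute
    # the new one, attempt the swap, retry if another thread (or
    # processor) changed the word in the meantime
    for _ in range(1000):
        while True:
            old = counter.value
            if counter.compare_and_swap(old, old + 1):
                break

threads = [threading.Thread(target=add_one) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The same loop is correct with one thread or many, which is exactly the uniprocessor-plus-multiprocessor applicability the architecture owners demanded.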

random refs (compare&swap was chosen because the mnemonic, CAS, matches the initials of the person primarily responsible for the instruction):

http://www.garlic.com/~lynn/93.html#0 360/67, was Re: IBM's Project F/S ?
http://www.garlic.com/~lynn/93.html#14 S/360 addressing
http://www.garlic.com/~lynn/93.html#22 Assembly language program for RS600 for mutual exclusion
http://www.garlic.com/~lynn/94.html#02 Register to Memory Swap
http://www.garlic.com/~lynn/94.html#28 370 ECPS VM microcode assist
http://www.garlic.com/~lynn/94.html#45 SMP, Spin Locks and Serialized Access
http://www.garlic.com/~lynn/95.html#8a atomic load/store, esp. multi-CPU
http://www.garlic.com/~lynn/97.html#10 HELP! Chronology of word-processing
http://www.garlic.com/~lynn/97.html#19 Why Mainframes?
http://www.garlic.com/~lynn/98.html#16 S/360 operating systems geneaology
http://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP?
http://www.garlic.com/~lynn/98.html#8 Old Vintage Operating Systems
http://www.garlic.com/~lynn/99.html#176 S/360 history
http://www.garlic.com/~lynn/99.html#203 Non-blocking synch
http://www.garlic.com/~lynn/99.html#88 FIne-grained locking
http://www.garlic.com/~lynn/99.html#89 FIne-grained locking
http://www.garlic.com/~lynn/2000.html#80 Atomic operations ?
http://www.garlic.com/~lynn/2000e.html#4 Ridiculous
http://www.garlic.com/~lynn/2000e.html#22 Is a VAX a mainframe?
http://www.garlic.com/~lynn/2000e.html#25 Test and Set: Which architectures have indivisible instructions?
http://www.garlic.com/~lynn/2000g.html#16 360/370 instruction cycle time
http://www.garlic.com/~lynn/2000g.html#32 Multitasking and resource sharing
http://www.garlic.com/~lynn/2001b.html#33 John Mashey's greatest hits
http://www.garlic.com/~lynn/2001b.html#35 John Mashey's greatest hits
http://www.garlic.com/~lynn/2001b.html#40 John Mashey's greatest hits

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Imitation...

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Imitation...
Newsgroups: alt.folklore.computers
Date: Sun, 01 Apr 2001 00:33:08 GMT
jeffreyb@gwu.edu (Jeffrey Boulier) writes:
Motorola and Apple both sold clones of IBM's RS/6000 systems.

as well as wang and bull.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Very CISC Instuctions (Was: why the machine word size ...)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Very CISC Instuctions (Was: why the machine word size ...)
Newsgroups: alt.folklore.computers
Date: Sun, 01 Apr 2001 17:24:44 GMT
Charles Richmond writes:
The IBM 370 instructions EDMK, ED, TR, TRT, and the memory move instructions were all very "CISC-ky" in nature... One could also argue that the pack decimal arithmetic instructions for the IBM 370 were very "CISC-ky". In my limited experience, these are the more "CISC-ky" instructions that I have found...

So are there even more "CISC-ky" instructions around??? I have heard that the VAX architecture had some...are they the best examples of CISC???

What is the most "CISC" instruction that you have found???


the original 360/370 start input/output instruction (which was defined to initiate asynchronous operation of a complex sequence of processor activity)

woodrum's tree instructions for 360/370 descendants

sorting instructions
http://publibz.boulder.ibm.com:80/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/A.7
A.7.1 Tree Format

Two instructions, COMPARE AND FORM CODEWORD and UPDATE TREE, refer to a tree -- a data structure with a specific format. A tree consists of some number (always odd) of consecutively numbered nodes. Node 1 is the root of the tree. Every node except the root has one parent node in the same tree. Every parent node has two son nodes. Every even-numbered node is the leftson of its parent node, and every odd-numbered node (except node 1) is the rightson of its parent node. Division by two (ignoring remainder) of the node number gives the parent node number. Nodes with sons are also called internal nodes, and nodes without sons are called terminal nodes. Figure A-5 illustrates schematically a 21-node tree with arrows drawn from each parent node to each son node.
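The numbering rules quoted above reduce to the arithmetic of a 1-origin implicit binary tree (the same layout as a binary heap); a small sketch:

```python
def parent(n):
    return n // 2                 # division by two, remainder ignored

def leftson(n):
    return 2 * n                  # even-numbered son

def rightson(n):
    return 2 * n + 1              # odd-numbered son

def is_internal(n, total_nodes):
    # a node is internal (has sons) iff its leftson exists;
    # the node count is always odd, so sons come in pairs
    return leftson(n) <= total_nodes
```

In the 21-node example from the manual, nodes 1-10 come out internal and nodes 11-21 terminal.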


the whole set of authorization related instructions in the descendants of 360/370.

The original 360/370 had SVC ... a supervisor call instruction with a numeric parameter that interrupted into the kernel and branched to some service based on the numeric parameter. This required a huge processing overhead ... but eventually evolved into a strong domain separation between non-privileged (aka "problem" state) mode and privileged (aka "supervisor" state) mode.

A lot of 360/370 operating system services were provided by library routines that the application would call with a simple branch&link.

The authorization infrastructure was to allow some granular privilege levels for system services, defined such that application programs could call library system services with nearly the overhead of a simple branch&link call while still providing transitions to/from different privilege levels. Things include authorized access to multiple address spaces (i.e. possibly at least a different address space for the system services and the application program).
5.7.1 Summary

These major functions are provided:

A maximum of 16 address spaces, including the instruction space, for immediate and simultaneous use by a semiprivileged program; the address spaces are specified by 16 new registers called access registers.

Instructions for examining and changing the contents of the access registers.

In addition, control and authority mechanisms are incorporated to control these functions.


access registers:
http://publibz.boulder.ibm.com:80/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/5.5

misc. related
http://publibz.boulder.ibm.com:80/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/5.4
http://publibz.boulder.ibm.com:80/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/5.5

the relatively recent introduction of the linkage-stack to 360/370 descendants

http://publibz.boulder.ibm.com:80/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/5.10
http://publibz.boulder.ibm.com:80/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/5.11
http://publibz.boulder.ibm.com:80/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/5.12

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Economic Factors on Automation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Economic Factors on Automation
Newsgroups: comp.robotics.misc,comp.ai.philosophy,sci.econ,alt.folklore.computers
Date: Sun, 01 Apr 2001 19:14:40 GMT
jeffreyb@gwu.edu (Jeffrey Boulier) writes:
While still beloved by some in the green crowd, just about everyone in economics seems to have given up on Malthus.

Consider this counter example: In much of the first world the number of children produced per woman is not too far above the replacement rate, and IIRC in Japan and parts of Europe it has fallen below replacement. In the developing world, population growth is similarly falling while the amount of food available is rising.


apparently they forgot to tell the rest of the world, and while people weren't looking it went from 3 billion to over 6.1 billion last year.

It is possible that the societies with stable replacement rates account for not much more than 10-15% of the total population.

united nations site on world population trends:


http://www.undp.org/popin/wdtrends/wdtrends.htm
http://web.archive.org/web/20010801215639/http://www.undp.org/popin/wdtrends/wdtrends.htm

revised 2000 year report

http://www.un.org/esa/population/wpp2000h.pdf

...
world population reached 6.1 billion in mid-2000 and is currently growing at an annual rate of 1.2%, or 77 million people per year. Six countries account for half the annual growth.
...

basically predicting something like 10 billion people.
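
As a back-of-envelope check on the figures quoted above (assuming, unrealistically, a constant 1.2% rate -- the UN projections actually assume the growth rate itself declines):

```python
import math

# starting point from the UN report quoted above
pop_2000 = 6.1e9        # mid-2000 population
rate = 0.012            # 1.2% annual growth

# sanity check: 1.2% of 6.1 billion is ~73 million/year, the same
# ballpark as the quoted 77 million/year figure
print(round(pop_2000 * rate / 1e6))        # -> 73 (million/year)

# years to reach 10 billion at a constant 1.2%/year
years = math.log(10e9 / pop_2000) / math.log(1 + rate)
print(round(years))                        # -> 41, i.e. the 2040s
```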

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Very CISC Instuctions (Was: why the machine word size ...)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Very CISC Instuctions (Was: why the machine word size ...)
Newsgroups: alt.folklore.computers
Date: Mon, 02 Apr 2001 01:35:26 GMT
nospam@nowhere.com (Steve Myers) writes:
I never thought of the SVC instruction as being very RISCy. Really, what did it do in S/360?

- It stored the PSW, including the 8-bit operand of the instruction, in the defined location for an SVC interrupt.
- It loaded a new PSW from the defined location for the SVC new PSW.


almost all loads of a new 360/370 PSW were a very lengthy process, because serialization and synchronization of various things under control of PSW bits were involved throughout the machine (both for machine interrupts involving a new PSW load and for the load PSW instruction): pending interrupts & conditions, the switch from virtual address mode to real address mode, etc.

however, the amount of information SVC communicated to the kernel as to the requested service was minimal ... so a lot of kernel processing was also involved.

I believe that work on the program call & access register architecture stuff first started in the late '70s ... in part to get library & misc. subsystem stuff out of the address space of the application while nearly preserving the efficiencies of a branch & link subroutine call ... as well as providing some additional levels of privilege control (w/o having to do various/full switches thru the kernel).

Some amount of the Unix-like stuff with different applications in different address spaces chained together with pipes and message passing ... is doable with the program call & access register stuff (i.e. subroutine linkage/call to an application in another address space w/o having to go thru the kernel).
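
A rough sketch of the idea -- a system-defined entry table, an authority check, then direct transfer into a service address space. All the names and the authority scheme here are invented for illustration; the real ESA program-call/access-register machinery is far more involved:

```python
class AddressSpace:
    """Stand-in for a separate address space holding service routines."""
    def __init__(self, name):
        self.name = name
        self.services = {}

# control-program-defined entry table: index -> (space, entry, min authority)
entry_table = {}

def define_entry(index, space, entry_name, min_authority):
    entry_table[index] = (space, entry_name, min_authority)

def program_call(index, caller_authority, *args):
    """Transfer directly to a routine in another address space --
    with an authority check, but no trip through a kernel dispatcher."""
    space, entry_name, min_auth = entry_table[index]
    if caller_authority < min_auth:
        raise PermissionError("insufficient authority for PC %d" % index)
    return space.services[entry_name](*args)

subsystem = AddressSpace("library-subsystem")
subsystem.services["upcase"] = str.upper
define_entry(1, subsystem, "upcase", min_authority=2)

print(program_call(1, 2, "hello"))      # -> HELLO
```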

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Very CISC Instuctions (Was: why the machine word size ...)

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Very CISC Instuctions (Was: why the machine word size ...)
Newsgroups: alt.folklore.computers
Date: Mon, 02 Apr 2001 03:51:42 GMT
Charles Richmond writes:
I recall another very CISC instruction:

In the 1980's there was a computer company that built machines tailored for UNIX and C. The company was called HSC, and it was bought out by Harris Computer before Harris started making their own UNIX boxen called Nighthawks. (Anyone know what the initials HSC stood for???)

Anyways, the HSC computers had a machine instruction that did what strlen() did for you in C. This instruction would search memory beginning at the address passed to it, until it found a zero byte. It would then return the length of the string (in a register, IIRC). Thus each call to strlen() compiled inline to a single instruction.


note the 360 TRT instruction ... you passed it a pointer to a string and a 256-byte table ... each byte in the string was used to index the table, and if the table entry for that byte was non-zero, the operation would stop pointing at that byte. It could be used to search for not just hex zero ... but any byte value.

http://publibz.boulder.ibm.com:80/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/7.5.97


TRT    D1(L,B1),D2(B2)         [SS]
 ________ ________ ____ _/__ ____ _/__
|  'DD'  |    L   | B1 | D1 | B2 | D2 |
|________|________|____|_/__|____|_/__|
0         8       16   20   32   36  47

The  bytes  of the first operand are used as eight-bit arguments to select
function bytes from a list designated by the second-operand address.   The
first  nonzero  function  byte  is inserted in general register 2, and the
related argument address in general register 1.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Imitation...

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Imitation...
Newsgroups: alt.folklore.computers
Date: Tue, 03 Apr 2001 14:50:50 GMT
eugene@cse.ucsc.edu (Eugene Miya) writes:
Scientific Computer Systems, Supertek (bought by CRI) and one other firm whose name escapes me, were all instruction set compatible.

report on superminis from '88 (but it doesn't list exact cray compatibility)

http://www.garlic.com/~lynn/2001b.html#56


Alliant        171
Celerity just shipping
Convex         200
ELXSI           80
FPS            365
Gould            6
Multiflow        5
Scientific      25
  Computing
Supertek    not shipping yet

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Very CISC Instuctions (Was: why the machine word size ...)

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Very CISC Instuctions (Was: why the machine word size ...)
Newsgroups: alt.folklore.computers
Date: Tue, 03 Apr 2001 14:54:46 GMT
cjt & trefoil writes:
My recollection is that the IBM 360 at the University of Michigan in the late '60's had a custom instruction (i.e. "bespoke" as those in the U.K. might say) for traversing linked lists.

Lincoln Labs. Search List (SLT) RPQ for the 360/67

random refs:
http://www.garlic.com/~lynn/2001d.html#23
http://www.garlic.com/~lynn/2001c.html#15

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Very CISC Instuctions (Was: why the machine word size ...)

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Very CISC Instuctions (Was: why the machine word size ...)
Newsgroups: alt.folklore.computers
Date: Tue, 03 Apr 2001 15:00:05 GMT
"GRIMBLE GRUMBLE" writes:
ISTR that Perkin Elmer's clone of the IBM instruction set had similar capabilities.

note that Perkin-Elmer's clone was the Interdata; Interdata had been bought by P-E

random refs:
http://www.garlic.com/~lynn/2000c.html#36
http://www.garlic.com/~lynn/2000c.html#37
http://www.garlic.com/~lynn/96.html#29
http://www.garlic.com/~lynn/96.html#30
http://www.garlic.com/~lynn/99.html#12

Interdata was one of the early non-PDP ports of unix.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Imitation...

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Imitation...
Newsgroups: alt.folklore.computers
Date: Tue, 03 Apr 2001 15:15:06 GMT
Charles Richmond writes:
Oh yeah, one more IBM 370 clone...the Amdahl machines. Well, maybe they would not count as clones, since Gene Amdahl was a designer of the IBM 370. Hmmm....that's an interesting question. Is it a clone if it was built for a different company, but by the original designer???

and I got to do the first 360 controller clone ... building a 360 channel attach board for an Interdata ... and programming the Interdata to emulate a 360 (I/O) control unit. It originated the 360 PCM (plug compatible manufacturer) business (for controllers) ... before Gene did PCM mainframes.

random ref:
http://www.garlic.com/~lynn/96.html#30

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

solicit advice on purchase of digital certificate

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: solicit advice on purchase of digital certificate
Newsgroups: comp.security.unix
Date: Tue, 03 Apr 2001 15:35:41 GMT
Christer Palm writes:
If the common name in the certificate does not match the host name, the browser _is_ going to alert you about this fact.

However, I'm not talking about "going to the wrong site". I'm talking about being able to trust that the organization stated in the certificate is in fact the organization I'm talking to, and about guarding against man-in-the-middle attacks.


note the issues include compensating for weaknesses associated with the domain name infrastructure ... thinking you are going to www.xyz.com and getting directed to someplace else entirely.

the problem is that when somebody goes to one of the certification authorities to get a domain name certificate ... the certification authorities have to contact some authoritative organization as to the validity of the owner of the domain name ... which is the same domain name infrastructure whose weaknesses everybody is getting certificates to compensate for.

in part, for the benefit of the certification authorities, there are some integrity proposals for the domain name infrastructure which involve a domain name owner registering a public key at the same time they register their domain name.

The issue for the certification authorities is that if the domain name infrastructure is strengthened for their purposes ... it actually gets strengthened for everybody's purposes (i.e. less chance that when you want to go to www.xyz.com you ever go any place else). A trusted domain name infrastructure for the use of certification authorities also pretty much negates the reasons that domain name certificates exist.

The other issue is that if the domain name owner registers their public key at the same time they register their domain name ... it is now possible for the domain name infrastructure to serve up the public key effectively using the same mechanism that is in place today for serving up ip-addresses (real time serving both trusted ip-address as well as trusted public keys ... w/o having to resort to certificates).

Also, an SSL/TLS implementation based on the domain name infrastructure serving up trusted public keys would be a lot more efficient than the current certificate-based mechanism for serving up public keys.
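
The suggestion sketches out roughly like this -- a dict stands in for the authoritative DNS database, and the domain, address, and key material are all made up for illustration:

```python
# hypothetical authoritative records: public key registered alongside
# the domain at registration time, served like any other DNS data
dns_records = {
    "www.xyz.com": {"ip": "10.1.2.3", "pubkey": "hypothetical-key-bytes"},
}

def resolve(domain):
    """One trusted real-time lookup returns the ip-address AND the
    public key -- no certificate chain to obtain or validate."""
    rec = dns_records[domain]
    return rec["ip"], rec["pubkey"]

ip, pubkey = resolve("www.xyz.com")
# the client can now connect to `ip` and run a key exchange against
# `pubkey` directly
print(ip)        # -> 10.1.2.3
```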

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Economic Factors on Automation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Economic Factors on Automation
Newsgroups: comp.robotics.misc,comp.ai.philosophy,sci.econ,alt.folklore.computers
Date: Tue, 03 Apr 2001 18:36:58 GMT
Ian Stirling writes:
Putting down biosphere II's edge to edge gets you 200 billion. Low trillions can be done using optimised plants. 15 trillion probably requires sunshade over the earth, to reduce heat input to keep it cool.

there seem to be two separate arguments used to support the point about not reaching limit(s) ... either 1) a belief that there is no (practical) limit and population can grow unchecked & as fast as possible (nothing wrong with increasing the current world population growth from the existing 77 million/annum to possibly 500 million/annum) ... or 2) a belief that there are some practical limits and govs. have encouraged zero population growth (references to "1st world" population growth situations).

while both points argue that world population isn't likely to exceed available resource limits ... they differ significantly with regard to whether there are practical, relatively near term, resource limits that could significantly affect avg. standard of living of the world wide population.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Flash and Content address memory

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Flash and Content address memory
Newsgroups: comp.arch
Date: Tue, 03 Apr 2001 18:58:42 GMT
egor writes:
I have to write a report at uni about the 2 memory types above covering applications and device manufacturing.

Applications for flash are obvious, but I can't find anything on the manufacturing, and as for CAM I can't find anything at all. Anthony


slightly related
The SNAP-1 Parallel AI Prototype, 1991 Proc. ACM
SIGARCH, DeMara, R.F. and Moldavan, D.J.

IXM2: A Parallel Associative Processor, 1991 Proc. ACM
SIGARCH, Higuchi T., Furuya T., Handa K., Takahashi N.,
Nishiyama N., and Kokubu A.


random refs (from alta vista):
http://www.pcs.cnu.edu/~rhodson/cam/camPage.html
http://www.ednmag.com/ednmag/reg/1996/050996/10df4.htm
http://www.openskytech.com/ContentAddressableMemory.htm
http://www.ieee.org/web/developers/webthes/00000493.htm
http://web.archive.org/web/20020220013230/http://www.ieee.org/web/developers/webthes/00000493.htm

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Economic Factors on Automation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Economic Factors on Automation
Newsgroups: comp.robotics.misc,comp.ai.philosophy,sci.econ,alt.folklore.computers
Date: Wed, 04 Apr 2001 11:37:43 GMT
Grinch writes:
All the silliness about "exponential population growth" overlooks the obvious fact that all the growth in population you talk about has accompanied declining birth rates and resulted entirely from the increased longevity that's accompanied the increase in wealth since the Industrial Revolution -- before which general life expectancy was about 25. But, alas for us all, growth in life expectancy seems to be capped.

the UN world population trends URL that i posted earlier in this thread says that the world population reached 6.1 billion last year, that current world growth is 77 million/year ... and that half that growth is occurring in 6 countries (with some predictions of 10+ billion people). as i mentioned previously, the total population of societies with stable population may represent only 15% of the current world population; i.e. zpg rates, regardless of the number of countries represented, may cover only 15% of the world population, so possibly 85% of the world population still has greater than zpg rates. Such a 15%/85% split may also not be all that different from past rates in sub-cultures within the same society.

--
Anne & Lynn Wheeler | lynn@garlic.com, http://www.garlic.com/~lynn/

Flash and Content address memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Flash and Content address memory
Newsgroups: comp.arch
Date: Thu, 05 Apr 2001 11:32:05 GMT
egor writes:
How do you find this stuff???

I looked here but found nothing useful, what did you search for??


http://www.altavista.com/

+content +addressable +memory

5900 pages found

--
Anne & Lynn Wheeler | lynn@garlic.com, http://www.garlic.com/~lynn/

solicit advice on purchase of digital certificate

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: solicit advice on purchase of digital certificate
Newsgroups: comp.security.unix
Date: Wed, 04 Apr 2001 12:08:33 GMT
Christer Palm writes:
> Right - the classic chicken-and-egg problem...
>
> The risk involved is a little dependent on which class of certificate.
> Only when issuing the lowest class certificates, CA's rely upon
> information from DNS alone.


however, the overall infrastructure has had incidents of domain name hijacking at the registration process ... the ultimate authority is the domain name registration process, regardless of how many other places you check, i.e. not just the domain name system software serving ip-addresses, but all the way back up into the registration process. Registering public keys along with the registration of domain names is one suggestion for preventing domain name hijacking.

Since the domain name infrastructure ... back up into the registration authority ... is the ultimate authoritative reference for domain name ownership ... it is possible to check with as many places as you want ... and they still all have to refer back to the authoritative agency.

random refs
http://www.garlic.com/~lynn/aepay4.htm#dnsinteg1
http://www.garlic.com/~lynn/aepay4.htm#dnsinteg2
http://www.garlic.com/~lynn/2000e.html#38
http://www.garlic.com/~lynn/aadsmore.htm#seecurevid

> I guess you are talking about DNSSEC?
> Yes that is a very interesting initiative.
>
> True, although most certificates used by e-business websites are not
> simple domain-name certificates, but also has the organization identity
> and address verified and filed.


however, there is no "protocol" that cross-checks any of the other information; given a domain name hijack, anything could be put in the rest of the fields (as far as requesting a certificate goes) ... and all of it could be perfectly valid and also useless.

> Right. This, however, effectively makes the DNS operators into "CA's"
> that need to be credible enough to be generally trusted if the purpose
> would be served.
> Many DNS operators may not be ready to mantle such a responsibility.


then merchants could choose to register with ones that are ... for at least the problem of domain name hijacking

> Yes, given that they will also store and make available verified
> information about the domain owners together with the keys.
>
> Unfortunately, I guess this will not happen overnight.
> Here in Sweden, as well in some other countries, the government are
> currently funding investigations on how they could implement a national
> DNSSEC infrastructure to meet these goals.

the basic problem is that the top of the domain name system hierarchy is the domain name registration ... which is the ultimate authoritative agency for domain name ownership ... if the domain name is hijacked there, a CA could check with thousands of other agencies ... but they would still all have to rely on the domain name authoritative agency as to the owner of the domain name.

however, fixing even the domain name hijacking problem by registering public keys with the domain name ... puts the public key in a real time database with the domain name ... enabling it to be served along with any of the other real time information supported by DNS ... including the ip-address.

the current merchant certificate stuff didn't happen overnight either ... see first two references below

random urls
http://www.garlic.com/~lynn/aadsm5.htm#asrn2 Assurance, e-commerce, and some x9.59 ... fyi
http://www.garlic.com/~lynn/aadsm5.htm#asrn3 Assurance, e-commerce, and some x9.59 ... fyi
http://www.garlic.com/~lynn/2001c.html#8 Server authentication
http://www.garlic.com/~lynn/2001c.html#9 Server authentication
http://www.garlic.com/~lynn/2000c.html#32 Request for review of "secure" storage scheme
http://www.garlic.com/~lynn/2000e.html#50 Why trust root CAs ?
http://www.garlic.com/~lynn/2001c.html#34 PKI and Non-repudiation practicalities
http://www.garlic.com/~lynn/2001c.html#62 SSL weaknesses
http://www.garlic.com/~lynn/2001d.html#8 Invalid certificate on 'security' site.
http://www.garlic.com/~lynn/aadsm3.htm#kiss5 Common misconceptions, was Re: KISS for PKIX. (Was: RE: ASN.1 vs XML (used to be RE: I-D ACTION :draft-ietf-pkix-scvp- 00.txt))
http://www.garlic.com/~lynn/aadsm3.htm#kiss7 KISS for PKIX. (Was: RE: ASN.1 vs XML (used to be RE: I-D ACTION :draft-ietf-pkix-scvp- 00.txt))
http://www.garlic.com/~lynn/aadsm4.htm#2 Public Key Infrastructure: An Artifact...
http://www.garlic.com/~lynn/aadsm4.htm#3 Public Key Infrastructure: An Artifact...
http://www.garlic.com/~lynn/aadsm4.htm#4 Public Key Infrastructure: An Artifact...
http://www.garlic.com/~lynn/aadsm4.htm#8 Public Key Infrastructure: An Artifact...
http://www.garlic.com/~lynn/aepay3.htm#openclose open CADS and closed AADS
http://www.garlic.com/~lynn/aepay3.htm#votec (my) long winded observations regarding X9.59 & XML, encryption and certificates
http://www.garlic.com/~lynn/aepay6.htm#gaopki4 GAO: Government faces obstacles in PKI security adoption

--
Anne & Lynn Wheeler | lynn@garlic.com, http://www.garlic.com/~lynn/

IBM was/is: Imitation...

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM was/is: Imitation...
Newsgroups: alt.folklore.computers
Date: Sun, 08 Apr 2001 15:36:55 GMT
eugene@cse.ucsc.edu (Eugene Miya) writes:
Well the problem is that the firm had realized to some degree that the PC market was a different thing for them. Lynn chime in here if you want. I like the way one of the ex-IBMers noted that it would take the company 2 years just to get an empty box out of the firm.

i do think that there was an awareness that the PC product competition was structured differently than the mainframe market. I remember in the mid '70s having a project canceled because it couldn't demonstrate $10b over 5 years (i.e. the minimum product revenue requirement was supposedly an avg. of $2b per year for 5 years). The standard process also included lots of review, administrative and business infrastructure, on the assumption that products with min. $10b revenues over a long period of time needed a certain min. amount of business process (there was some joke about a new product needing something like 470 executive sign-off signatures from around the company, and any one of the executives could non-concur).

A trivial example was some situation involving trade secrets, where the amount of security needed to be proportional to the perceived value ... something valued at >$10b had to have significantly more security than something valued at $10m (otherwise it fell into some category about swimming pools being an attractive nuisance). However, most times you couldn't really predict which would be $10m and which would be >$10b .... so the $10b+ security had to be applied to everything from the start.

Thus was born the idea of the IBU (independent business unit), which was supposed to be free from all the normal business processes. One downside was IBUs being hosted at existing plant facilities. I remember some argument between an IBU and a plant manager who asserted that all the members of the IBU had to observe a whole lot of business processes & practices. The counter claim was that this was an IBU and was free of all the standard business processes & practices. The plant manager's reply was that an IBU might be free of a lot of other business processes & practices, but not his (and it was difficult to find any business process owner who believed it was their processes that an IBU didn't have to follow).

There was also a case of a product using a different component from another plant. As part of getting interoperability, there was a desire to make that component available to outside companies. There was a rule of thumb about price markup (whether an external sale or internal transfer). In order to deliver this specific component to outside corporations, it had to pass through several business units, each one expecting to apply the markup guideline. Final component delivery was going to have over a 1000% markup.
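
To see how the markup guideline compounds (the 85% figure and the four-unit chain are invented for illustration; the post doesn't give the actual guideline or unit count):

```python
markup = 0.85      # hypothetical per-unit markup guideline
units = 4          # hypothetical number of business units in the chain
cost = 100.0       # component cost at the originating plant

# each unit in the chain applies the same markup, so they multiply
price = cost * (1 + markup) ** units
total_markup = (price / cost - 1) * 100
print(round(total_markup))     # -> 1071: over a 1000% total markup
```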

There was the joke about the NSF evaluation of the backbone Anne & I were running ... where there was something about what we had being at least five years ahead of bid proposals to build something new for NSFNET .... i.e. it takes at least five years (going to effectively infinity for some things) for new technology to make it through all the processes and out the door.

random ref:
http://www.garlic.com/~lynn/internet.htm#0

some thread-drift ... is it imitation or offspring

a pc networking company in provo
http://www.garlic.com/~lynn/2000g.html#40

.... the 229-3174 360/67 "blue card" that I found in boxes ... has the name "Edward J. Mosher" stamped across the top of the front (maybe someday i'll get a scanner and put it up on garlic).

cambridge had a habit of trying to have acronyms that were people's initials. I've mentioned before that compare and swap was Charlie's initials. I've also mentioned that GML were the initials of people at cambridge ... where GML begat SGML which begat HTML which begat XML, ECML, FSML, ... and some number of other MLs. Well, Mosher is the "M" in all of these MLs. To tie it back to the thread ... are all these MLs imitations of the original (or offspring)?

random ref:
http://www.garlic.com/~lynn/94.html#43 Bloat, elegance, simplicity and other irrelevant concepts
http://www.garlic.com/~lynn/94.html#55 How Do the Old Mainframes Compare to Today's Micros?
http://www.garlic.com/~lynn/96.html#24 old manuals
http://www.garlic.com/~lynn/97.html#9 HELP! Chronology of word-processing
http://www.garlic.com/~lynn/97.html#26 IA64 Self Virtualizable?
http://www.garlic.com/~lynn/98.html#16 S/360 operating systems geneaology
http://www.garlic.com/~lynn/98.html#21 Reviving the OS/360 thread (Questions about OS/360)
http://www.garlic.com/~lynn/99.html#42 Enter fonts (was Re: Unix case-sensitivity: how did it originate?
http://www.garlic.com/~lynn/99.html#43 Enter fonts (was Re: Unix case-sensitivity: how did it originate?
http://www.garlic.com/~lynn/99.html#91 Documentation query
http://www.garlic.com/~lynn/99.html#197 Computing As She Really Is. Was: Re: Life-Advancing Work of Timothy Berners-Lee
http://www.garlic.com/~lynn/2000.html#8 Computer of the century
http://www.garlic.com/~lynn/2000.html#34 IBM 360 Manuals on line ?
http://www.garlic.com/~lynn/2000e.html#23 Is Tim Berners-Lee the inventor of the web?
http://www.garlic.com/~lynn/2001c.html#15 OS/360 (was LINUS for S/390)

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Economic Factors on Automation

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Economic Factors on Automation
Newsgroups: comp.robotics.misc,comp.ai.philosophy,sci.econ,alt.folklore.computers
Date: Sun, 08 Apr 2001 16:43:52 GMT
Carlos Antunes writes:
In a true capitalist society there are always enough jobs for everybody. Note that the mere existence of one person creates demand for products and services to satisfy that person's needs. Therefore, by definition, that person has a job the moment he or she is born.

one of the problems is the effect of geographic distance, where products may have wide geographic exposure while labor may not (i.e. few people are willing to relocate to an arbitrary place in the world, nor can just anybody in the world compete for any job).

there is the whole thread of the US automobile industry and foreign competition.

an interesting side effect was written up (I believe i saw it in the washington post) when quotas were established for inexpensive foreign imports. the foreign companies then apparently realized that, given the quotas, they would sell out the quota almost regardless of the price of the car ... so they quickly changed their product offerings to be something like three times more expensive (and significantly more profitable).

One point of the article was that w/o the downward price pressure (of lots of cheap imports), american industry could significantly raise its prices w/o having to actually change its product ... and it raised the issue of whether the gov. shouldn't impose a 100% "unearned" profit tax on the american industry.

Another side-effect was that the (then) current industry standard was that it took seven years elapsed time to produce a new automobile. Effectively as part of the product re-organization, the foreign competition invented new procedures by which they could produce a new offering in three years elapsed time. This innovation resulted in the foreign competition being able to adapt to changing consumer demands better than twice as fast as the domestic industry. In effect, the us industry was on its way to obsoleting itself ... by the time it had come out with a new offering ... the offering might already be obsolete, the market having gone through two new generations.

this isn't simply a question of automation but also innovation.

for something priced strictly on commodity hourly labor ... if a new procedure cut in half the time to produce it, then a theory of commodity hourly labor would result in the producer only getting half as much.

Applying the theory of commodity hourly labor at the organization level would imply that the organization would only receive half as much for doing something in half the time.

the issue of innovation (in conjunction with automation) is having a significant effect on labor. Lots of labor has been associated with capital intensive manufacturing plants. Many of the manufacturing plants have had 20-100 year lifetimes ... resulting in little labor disruption over long periods of time.

Innovation is not only making specific products (and associated labor training) obsolete but has also started making the associated manufacturing plants obsolete. This is possibly more readily seen in chip fabrication plants, where capital costs are in the multi-billion dollar range and life-expectancy can be 2-3 years (initial plant capital costs have to be amortized over the chips produced in the life-time of the plant ... and these costs can dominate all other factors).

In more traditional manufacturing there has been more focus on automation ... but the issue of innovation can be as significant or more so. Much of cutting the elapsed time for a new product from 7 years to 3 was associated with business process innovation (as much as with anything to do with plant automation). The 3-year elapsed time for a new product involves as much labor as any final-assembly manufacturing process.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

IBM was/is: Imitation...

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM was/is: Imitation...
Newsgroups: alt.folklore.computers
Date: Sun, 08 Apr 2001 16:10:42 GMT
Anne & Lynn Wheeler writes:
some thread-drift ... is it imitation or offspring

another well-known example is relational and System/R ... by the time any relational product offerings made it out the door ... there were a number of RDBMS products by other vendors.

random refs:
http://www.garlic.com/~lynn/2000.html#9 Computer of the century
http://www.garlic.com/~lynn/2000b.html#55 Multics dual-page-size scheme
http://www.garlic.com/~lynn/2000e.html#49 How did Oracle get started?
http://www.garlic.com/~lynn/2000f.html#16 [OT] FS - IBM Future System

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

A beautiful morning in AFM.

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A beautiful morning in AFM.
Newsgroups: alt.folklore.military
Date: Sun, 08 Apr 2001 17:27:50 GMT
BobMac writes:
I've had the same go 'round in various places: "Don't tell the users anything more than the bare minimum that WE think they need to know." My own take was more the Vimy Ridge/Entebbe operational plan: Unless it's Top Secret, spread knowledge around, form working relationships even with those outside IT.... Of course, one place I worked, I used to go visit Accounting, just so I could talk to warm, pleasant, sociable human beings.....

one of boyd's briefings on the effect of WWII on US corporate life was that the german army had a large body of professional soldiers ... while by contrast the US eventually had to throw huge numbers of quickly trained people into the field. With the requirement for large numbers of quickly trained, inexperienced people in the field, an infrastructure of top-down, rigidly controlled operation was created.

at least by the mid-80s the individuals that had received their indoctrination into organizational management during WWII were starting to dominate executive positions (in both commercial and non-commercial worlds); and their organizational style reflected the rigidly controlled, top-down style needed to handle huge numbers of individuals with scant training and no experience (regardless of the actual composition of the organization).

& of course, random ref:
http://www.belisarius.com/
http://web.archive.org/web/20010722050327/http://www.belisarius.com/
http://www.garlic.com/~lynn/94.html#8

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

anyone have digital certificates sample code

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: anyone have digital certificates sample code
Newsgroups: sci.crypt
Date: Sun, 08 Apr 2001 20:51:06 GMT
"normang" writes:
Does anyone know of sample working code to create digital certs.

We are trying to write a system for user authentication using our own digital certificates for an internal user base (and so not have to shell out to Verisign every time!). We intend to use ebcrypt as the basis for the encryption requirements and transfer the packages using tcp/ip.

Thanks in advance.

Basically we want to issue x509 certs of our own and use a Kerberos type system


even simpler would be to take radius and implement digital signature authentication (i.e. public key recorded in an internal radius database) for user authentication. the radius protocol then allows a wide range of applications access to the real-time database.

aka do the registration authority part of registering a public key w/o having to do the certification authority piece (i.e. since the users are internal they presumably don't need 3rd party certification) ... and/or w/o having to implement offline trust propagation (which is the fundamental purpose of issuing a certificate .... i.e. trust propagation that has been certified into offline environments).
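As an illustration of the point (not code from the original post -- the names, keys, and the "database" are all made up, and textbook RSA with tiny primes is for demonstration only, never for production use), registering a public key in an internal database and then authenticating by digital signature, with no certificate and no certification authority, might look like:

```python
import hashlib

# Toy textbook-RSA keypair (p=61, q=53): modulus, public exp, private exp.
N, E, D = 3233, 17, 2753

def sign(message: bytes, d: int, n: int) -> int:
    # hash the message, reduce into the modulus, "encrypt" with private key
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, sig: int, e: int, n: int) -> bool:
    # recompute the hash and check it against the signature raised to e
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(sig, e, n) == h

# "registration authority" step: record the public key against the user id
# in an internal real-time database (here, just a dict)
key_db = {"alice": (N, E)}

# authentication step: server issues a challenge, user signs it with the
# private key, server verifies against the registered public key
challenge = b"nonce-12345"
signature = sign(challenge, D, N)
n, e = key_db["alice"]
authenticated = verify(challenge, signature, e, n)
```

No certificate ever exists; trust lives entirely in the online database lookup.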

random refs:
http://www.garlic.com/~lynn/aadsm2.htm#pkikrb PKI/KRB
http://www.garlic.com/~lynn/aadsm3.htm#kiss7 KISS for PKIX. (Was: RE: ASN.1 vs XML (used to be RE: I-D ACTION :draft-ietf-pkix-scvp- 00.txt))
http://www.garlic.com/~lynn/aadsm4.htm#7 Public Key Infrastructure: An Artifact...
http://www.garlic.com/~lynn/aadsm4.htm#9 Thin PKI won - You lost
http://www.garlic.com/~lynn/aadsm4.htm#10 Thin PKI won - You lost
http://www.garlic.com/~lynn/aadsm5.htm#shock2 revised Shocking Truth about Digital Signatures
http://www.garlic.com/~lynn/aadsmail.htm#complex AADS/CADS complexity issue
http://www.garlic.com/~lynn/aepay2.htm#cadis disaster recovery cross-posting

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Just a guick remembrance of where we all came from

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Just a guick remembrance of where we all came from
Newsgroups: comp.arch
Date: Mon, 09 Apr 2001 01:30:14 GMT
Jim Purcell writes:
I have never understood the constant reiteration of the idea that vacuum tubes were of poor quality, and that baloney about them failing within hours. Vacuum tubes were certainly not the ideal device once transistors had been invented, but tubes did not 'die' quickly in other applications, i.e. audio amplifiers and

i remember my dad taking all the tubes out of the TV ... marking their positions and taking them down to a serve-yourself tube tester in a nearby store ... maybe 10-15 tubes ... to try and figure out which one had died. Might have to do this a couple times a year ... individual tubes might have lifetimes of 5+ years ... but with 10-15 tubes in a five year old set .... there seemed to be a couple random failures per year.

I did a similar operation later ... but possibly with only five tubes (instead of 10-15 tubes).

I don't know the service/cycle time for the tubes ... but when i got to school they still had a 709 with thousands of tubes. TVs might only have a service time of an hr or two a day ... the 709 tended to be powered on constantly and there were at least a couple tubes a week that went bad.

since one 709 might have the equivalent number of tubes of 1000 or more audio amplifiers, a 709 problem was the equivalent of any one tube in any one of 1000 or more amplifiers having a problem.

take a 709 with maybe 20,000? tubes and, for argument's sake, assume each tube had a lifetime of five years. If it was straight MTBF with uniform distribution, then five years is about 44,000 hrs and you might expect some tube failure every two hrs. However, the distribution should be skewed towards higher failure rates later in the life cycle ... so a 709 that had been operating for more than five years would be expected to experience an even higher failure rate (some tube failing every hr or so).

Some tube (out of 20,000?), after more than five years of nearly continuous operation, failing only every day or so suggests very reliable tubes (i.e. less than 1/10th the calculated failure rate, so maybe more like an MTBF of 50+ years with uniform distribution).
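The back-of-envelope arithmetic above is easy to check (the 20,000-tube count and five-year MTBF are the post's own assumptions):

```python
# ~20,000 tubes, each with an assumed 5-year MTBF under continuous
# operation and (unrealistically) a uniform failure distribution
HOURS_PER_YEAR = 365 * 24            # 8760
tubes = 20_000
mtbf_hours = 5 * HOURS_PER_YEAR      # 43,800 hrs -- "about 44,000"

# expected time between failures somewhere in the machine
hours_between_failures = mtbf_hours / tubes        # ~2.2 hrs

# conversely: if the machine actually saw only one failure per day,
# the implied per-tube MTBF would be far longer
implied_mtbf_years = 24 * tubes / HOURS_PER_YEAR   # ~55 years
```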

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

VTOC position

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: VTOC position
Newsgroups: bit.listserv.ibm-main
Date: Tue, 10 Apr 2001 15:04:43 GMT
rush-main@21CN.COM (Rush Yan) writes:
Is it still necessary for the modern dasd to put the VTOC at the middle of the dasd ?

Rush Yan


While an undergraduate I had started doing hand-built sysgens with MFT11 ... i.e. taking the stage-II sysgen apart and re-ordering both job steps as well as move/copy statements .... in order to get both datasets and PDS members ordered for optimum arm-seek distance.

I presented some results at SHARE: for a sample job stream the elapsed time was reduced by 60%-70% compared to a normal sysgen. The problem was that normal PTF activity replacing PDS members could degrade system performance by a factor of 2 over a period of six months.

IBM introduced VTOC placement with MVT15/16 that provided some additional optimization for ordering arm seek distances.
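A small simulation sketches why frequency-ordered placement reduces average arm travel (the cylinder count, the Zipf-style access weights, and all names here are made up for illustration, not taken from the original sysgen work):

```python
import random

random.seed(1)
CYLS = 200                 # a 200-cylinder pack, dataset i lives on one cylinder

# Zipf-like access pattern: a few hot datasets get most of the references
weights = [1.0 / (i + 1) for i in range(CYLS)]

def avg_seek(placement):
    """Average arm movement over a weighted random reference stream,
    where placement[i] is the cylinder holding dataset i."""
    refs = random.choices(placement, weights=weights, k=50_000)
    return sum(abs(a - b) for a, b in zip(refs, refs[1:])) / (len(refs) - 1)

clustered = list(range(CYLS))    # ordered: hottest datasets on adjacent cylinders
scattered = clustered[:]         # same datasets, arbitrary placement
random.shuffle(scattered)

clustered_travel = avg_seek(clustered)
scattered_travel = avg_seek(scattered)
```

With the hot data clustered, consecutive references mostly stay within a few cylinders of each other; scattered placement makes nearly every reference a long seek -- which is also why PTF activity (re-placing members arbitrarily) eroded the benefit over time.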

random refs:
http://www.garlic.com/~lynn/2000d.html#50
http://www.garlic.com/~lynn/2001.html#26

--
Anne & Lynn Wheeler | lynn@garlic.com, http://www.garlic.com/~lynn/

VTOC position

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: VTOC position
Newsgroups: bit.listserv.ibm-main
Date: Tue, 10 Apr 2001 16:17:29 GMT
Anne & Lynn Wheeler writes:
IBM introduced VTOC placement with MVT15/16 that provided some additional optimization for ordering arm seek distances.

in theory, starting with the 3880-13 full-track caching controller in the early '80s, arm-seek optimization was somewhat mitigated for the most heavily used data, since it would be resident in the cache. there was still some arm-seek benefit in the placement of the residual data whose frequency of use didn't give it a high probability of being resident in the cache.

note that the original published numbers for the 3880-13 cache hit ratios were somewhat biased. they showed a 90% hit rate for certain types of activity. this activity was sequential access with 10 records per track ... and of course, the first reference to a record on the track would be a miss and bring in the whole track. the subsequent 9 sequential record reads were all "hits" ... resulting in 9 hits out of 10, or a 90% cache hit ratio. however, the same effect could have been obtained with 10-record buffering and chained i/o w/o even needing a cache.
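The biased hit-ratio arithmetic is easy to reproduce with a toy model (this is an illustration of the counting, not of actual 3880-13 behavior):

```python
# full-track staging, 10 records per track, purely sequential reads:
# first record on each track misses (stages the track), next 9 hit
RECORDS_PER_TRACK = 10
cache = set()          # set of track numbers currently staged
hits = misses = 0

for record in range(1000):              # 1000 sequential record reads
    track = record // RECORDS_PER_TRACK
    if track in cache:
        hits += 1
    else:
        misses += 1
        cache.add(track)                # full-track staging on a miss

hit_ratio = hits / (hits + misses)      # 900 / 1000 = 0.9
```

Exactly the same 90% falls out of simple 10-record read-ahead buffering, with no cache hardware at all.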

--
Anne & Lynn Wheeler | lynn@garlic.com, http://www.garlic.com/~lynn/

SSL certificate question...

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: SSL certificate question...
Newsgroups: comp.security.misc
Date: Mon, 09 Apr 2001 22:07:07 GMT
"The§eidh" writes:
We are running a secure server with an equifax signed 128bit certificate. This means that If people go to our site with a 40 or 56bit browser they get error 403.5 (SSL forbiden) if they try to go to a secure area.. Some companies, like paypal.com use 128 bit certs but you can still view them with an older or non us browser. Does anyone know how this is possible? also, if there are more appropriate groups to post to, please tell me.

SSL/TLS has server/client do protocol negotiation to establish a number of things ... including the size of the dynamically generated symmetric session key. a certificate might carry a specification regarding the minimum key size acceptable in the protocol negotiation ... but that doesn't mean a server implementation can't do something different.

--
Anne & Lynn Wheeler | lynn@garlic.com, http://www.garlic.com/~lynn/

OT Re: A beautiful morning in AFM.

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT Re: A beautiful morning in AFM.
Newsgroups: alt.folklore.military
Date: Fri, 13 Apr 2001 22:52:25 GMT
jimlillie@aol.com (JimLillie) writes:
Then there was the memo listing ALL the restrictions management had put on passwords; that when THE valid password was computed we would all be notified what to use.

i was sent a copy of that early and shared it with a couple of people .... it was dated 4/1 (which was a sunday that year). over the weekend somebody printed it on official letterhead and put it up on all the corporate bulletin boards at our site. Even tho it was clearly dated sunday, 4/1 a number of people took it seriously on monday. later they locked all corporate letterhead paper in cabinets.

what it actually said was that each person had to go to the site security officer to obtain the one & only valid password that met all the restrictions and conditions.

random ref:
http://www.garlic.com/~lynn/99.html#52

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

OT Re: A beautiful morning in AFM.

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT Re: A beautiful morning in AFM.
Newsgroups: alt.folklore.military
Date: Sat, 14 Apr 2001 01:43:12 GMT
the original ....

CORPORATE DIRECTIVE NUMBER 84-570471                    April 1, 1984

In order to increase the security of all xxx computing facilities, and to avoid
the possibility of unauthorized use of these facilities, new rules are being put
into effect concerning the selection of passwords.  All users of xxx computing
facilities  are  instructed to change their passwords to conform to these rules
immediately.

RULES FOR THE SELECTION OF PASSWORDS:

   1. A password must be at least six characters long, and must not contain two
      occurrences of a character in a row, or a sequence of two or more characters
      from the alphabet in forward or reverse order.
      Example:  HGQQXP is an invalid password.
               GFEDCB is an invalid password.

   2. A password may not contain two or more letters in the same position as any
      previous password.
      Example:  If a previous password was GKPWTZ, then NRPWHS would be invalid
               because PW occurs in the same position in both passwords.

   3. A  password may not contain the name of a month or an abbreviation for a
      month.
      Example:  MARCHBC is an invalid password.
               VWMARBC is an invalid password.

   4. A  password  may  not  contain  the  numeric  representation  of  a  month.
      Therefore, a password containing any number except zero is invalid.
      Example:  WKBH3LG is invalid because it contains the numeric representation
               for the month of March.

   5. A password may not contain any words from any language.  Thus, a password
      may not contain the letters A, or I, or sequences such as AT, ME, or TO
      because these are all words.

   6. A password may not contain sequences of two or more characters which are
      adjacent to each other on a keyboard in a horizontal, vertical or diagonal
      direction.
      Example:  QWERTY is an invalid password.
               GHNLWT is an invalid password because G and H are horizontally
               adjacent to each other.
               HUKWVM  is  an  invalid  password  because H and U are diagonally
               adjacent to each other.

   7. A password may not contain the name of a person, place or thing.
      Example:  JOHNBOY is an invalid password.

Because of the complexity of the password selection rules, there is actually only
one password which passes all the tests.  To make the selection of this password
simpler  for  the  user,  it will be distributed to all managers.  All users are
instructed  to obtain this password from his or her manager and begin using it
immediately.
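For amusement, several of the rules are mechanical enough to encode. The partial checker below is my own sketch (it implements only rules 1, 3, and 4) and shows how quickly the memo's own examples get rejected:

```python
MONTHS = ["JAN", "FEB", "MAR", "APR", "MAY", "JUN",
          "JUL", "AUG", "SEP", "OCT", "NOV", "DEC"]

def violates_rule1(pw: str) -> bool:
    # too short, a doubled character, or a 2+ character alphabetic
    # sequence in forward or reverse order
    if len(pw) < 6:
        return True
    for a, b in zip(pw, pw[1:]):
        if a == b or abs(ord(a) - ord(b)) == 1:
            return True
    return False

def violates_rule3(pw: str) -> bool:
    # contains a month name or three-letter abbreviation
    return any(m in pw for m in MONTHS)

def violates_rule4(pw: str) -> bool:
    # contains the "numeric representation of a month",
    # i.e. any digit other than zero
    return any(c.isdigit() and c != "0" for c in pw)

def acceptable(pw: str) -> bool:
    pw = pw.upper()
    return not (violates_rule1(pw) or violates_rule3(pw) or violates_rule4(pw))
```

Adding rules 5 through 7 (no words in any language, no keyboard-adjacent characters, no names of persons, places, or things) is left as the exercise the memo intends.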

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

April Fools Day

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: April Fools Day
Newsgroups: alt.folklore.computers
Date: Sat, 14 Apr 2001 02:07:24 GMT
"Donald Tees" writes:
Well, to-morrow is the first April Fools Day of the new millennium. I have a DOS Version 1.00 disk, Copyright IBM (no microsoft stuff here), that I will offer as the prize for the Internet message that cons the most people.

Does anybody remember some of the better ones during the first millenium?

Donald


a little late ... refs:
http://www.garlic.com/~lynn/99.html#52
http://www.garlic.com/~lynn/2001d.html#51

april 1st was a sunday ... somebody printed this on corporate letterhead and posted it on all the bulletin boards over the weekend. many people reading it on monday didn't catch on. later all corporate letterhead paper was locked in cabinets. the corporate name has been changed to "xxx" to protect the innocent.


CORPORATE DIRECTIVE NUMBER 84-570471                    April 1, 1984

In order to increase the security of all xxx computing facilities, and to avoid
the possibility of unauthorized use of these facilities, new rules are being put
into effect concerning the selection of passwords.  All users of xxx computing
facilities  are  instructed to change their passwords to conform to these rules
immediately.

RULES FOR THE SELECTION OF PASSWORDS:

   1. A password must be at least six characters long, and must not contain two
      occurrences of a character in a row, or a sequence of two or more characters
      from the alphabet in forward or reverse order.
      Example:  HGQQXP is an invalid password.
               GFEDCB is an invalid password.

   2. A password may not contain two or more letters in the same position as any
      previous password.
      Example:  If a previous password was GKPWTZ, then NRPWHS would be invalid
               because PW occurs in the same position in both passwords.

   3. A  password may not contain the name of a month or an abbreviation for a
      month.
      Example:  MARCHBC is an invalid password.
               VWMARBC is an invalid password.

   4. A  password  may  not  contain  the  numeric  representation  of  a  month.
      Therefore, a password containing any number except zero is invalid.
      Example:  WKBH3LG is invalid because it contains the numeric representation
               for the month of March.

   5. A password may not contain any words from any language.  Thus, a password
      may not contain the letters A, or I, or sequences such as AT, ME, or TO
      because these are all words.

   6. A password may not contain sequences of two or more characters which are
      adjacent to each other on a keyboard in a horizontal, vertical or diagonal
      direction.
      Example:  QWERTY is an invalid password.
               GHNLWT is an invalid password because G and H are horizontally
               adjacent to each other.
               HUKWVM  is  an  invalid  password  because H and U are diagonally
               adjacent to each other.

   7. A password may not contain the name of a person, place or thing.
      Example:  JOHNBOY is an invalid password.

Because of the complexity of the password selection rules, there is actually only
one password which passes all the tests.  To make the selection of this password
simpler  for  the  user,  it will be distributed to all managers.  All users are
instructed  to obtain this password from his or her manager and begin using it
immediately.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

VM & VSE news

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: VM & VSE news
Newsgroups: bit.listserv.vmesa-l
Date: Fri, 13 Apr 2001 21:52:26 -0600
"Tom Duebusch" writes:
I think you are right. The IBM 370s were the last systems that was totally hardware execution of the S/370 instruction set. Seems like microcode came out with the next series (303X or was 308X first?).

Now a days, I would consider a real S/390 machine as one manufactured by IBM. The rest, NUMA-Q, iFrame etc are PCMs. I'm not saying that the PCMs are not as good, just different.

Tom Duerbusch THD Consulting


random 370 m'code references:

http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
http://www.garlic.com/~lynn/94.html#27 370 ECPS VM microcode assist
http://www.garlic.com/~lynn/94.html#28 370 ECPS VM microcode assist
http://www.garlic.com/~lynn/94.html#51 Rethinking Virtual Memory
http://www.garlic.com/~lynn/95.html#3 What is an IBM 137/148 ???
http://www.garlic.com/~lynn/97.html#20 Why Mainframes?
http://www.garlic.com/~lynn/98.html#26 Merced &amp; compilers (was Re: Effect of speed ... )
http://www.garlic.com/~lynn/99.html#90 CPU's directly executing HLL's (was Which programming languages)
http://www.garlic.com/~lynn/99.html#116 IBM S/360 microcode (was Re: CPU taxonomy (misunderstood RISC))
http://www.garlic.com/~lynn/99.html#187 Merced Processor Support at it again . . .
http://www.garlic.com/~lynn/99.html#204 Core (word usage) was anti-equipment etc.
http://www.garlic.com/~lynn/99.html#209 Core (word usage) was anti-equipment etc.
http://www.garlic.com/~lynn/2000.html#8 Computer of the century
http://www.garlic.com/~lynn/2000.html#12 I'm overwhelmed
http://www.garlic.com/~lynn/2000.html#63 Mainframe operating systems
http://www.garlic.com/~lynn/2000.html#70 APL on PalmOS ???
http://www.garlic.com/~lynn/2000.html#78 Mainframe operating systems
http://www.garlic.com/~lynn/2000.html#86 Ux's good points.
http://www.garlic.com/~lynn/2000b.html#50 VM (not VMS or Virtual Machine, the IBM sort)
http://www.garlic.com/~lynn/2000b.html#51 VM (not VMS or Virtual Machine, the IBM sort)
http://www.garlic.com/~lynn/2000c.html#19 Hard disks, one year ago today
http://www.garlic.com/~lynn/2000c.html#50 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2000c.html#68 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2000c.html#75 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2000c.html#76 Is a VAX a mainframe?
http://www.garlic.com/~lynn/2000c.html#83 Is a VAX a mainframe?
http://www.garlic.com/~lynn/2000d.html#12 4341 was "Is a VAX a mainframe?"
http://www.garlic.com/~lynn/2000d.html#20 S/360 development burnout?
http://www.garlic.com/~lynn/2000d.html#60 "all-out" vs less aggressive designs (was: Re: 36 to 32 bit transition)
http://www.garlic.com/~lynn/2000d.html#82 "all-out" vs less aggressive designs (was: Re: 36 to 32 bit transition)
http://www.garlic.com/~lynn/2000e.html#6 Ridiculous
http://www.garlic.com/~lynn/2000e.html#54 VLIW at IBM Research
http://www.garlic.com/~lynn/2000e.html#56 Why not an IBM zSeries workstation?
http://www.garlic.com/~lynn/2000f.html#37 OT?
http://www.garlic.com/~lynn/2000f.html#55 X86 ultimate CISC? No. (was: Re: "all-out" vs less aggressive designs)
http://www.garlic.com/~lynn/2000f.html#57 X86 ultimate CISC? No. (was: Re: "all-out" vs less aggressive designs)
http://www.garlic.com/~lynn/2000f.html#59 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
http://www.garlic.com/~lynn/2000f.html#66 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
http://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)
http://www.garlic.com/~lynn/2000g.html#7 360/370 instruction cycle time
http://www.garlic.com/~lynn/2000g.html#8 360/370 instruction cycle time
http://www.garlic.com/~lynn/2000g.html#11 360/370 instruction cycle time
http://www.garlic.com/~lynn/2000g.html#21 360/370 instruction cycle time
http://www.garlic.com/~lynn/2001.html#38 Competitors to SABRE?
http://www.garlic.com/~lynn/2001b.html#29 z900 and Virtual Machine Theory
http://www.garlic.com/~lynn/2001b.html#40 John Mashey's greatest hits
http://www.garlic.com/~lynn/2001b.html#42 John Mashey's greatest hits
http://www.garlic.com/~lynn/2001b.html#49 PC Keyboard Relics
http://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
http://www.garlic.com/~lynn/2001c.html#1 Z/90, S/390, 370/ESA (slightly off topic)
http://www.garlic.com/~lynn/2001c.html#2 Z/90, S/390, 370/ESA (slightly off topic)
http://www.garlic.com/~lynn/2001c.html#3 Z/90, S/390, 370/ESA (slightly off topic)
http://www.garlic.com/~lynn/2001c.html#87 "Bootstrap"
http://www.garlic.com/~lynn/2001d.html#22 why the machine word size is in radix 8??
http://www.garlic.com/~lynn/2001d.html#26 why the machine word size is in radix 8??

VM & VSE news

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: VM & VSE news
Newsgroups: bit.listserv.vmesa-l
Date: Fri, 13 Apr 2001 21:57:21 -0600
"Tom Duebusch" writes:
I think you are right. The IBM 370s were the last systems that was totally hardware execution of the S/370 instruction set. Seems like microcode came out with the next series (303X or was 308X first?).

Now a days, I would consider a real S/390 machine as one manufactured by IBM. The rest, NUMA-Q, iFrame etc are PCMs. I'm not saying that the PCMs are not as good, just different.

Tom Duerbusch THD Consulting


and a random reference on PCM from the recent "imitation" thread in the alt.folklore.computers newsgroup

http://www.garlic.com/~lynn/2001d.html#35

Pentium 4 Prefetch engine?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Pentium 4 Prefetch engine?
Newsgroups: comp.arch
Date: Sat, 14 Apr 2001 15:49:52 GMT
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Only because POSIX has made such a pig's ear of asynchronous I/O. I am not going to claim that the MVT Access Methods were any better, because their user interface was unspeakable, but I used to be able to get 90% of peak hardware efficiency from unmodified Fortran. I am lucky to be able to get 30% under Unix.

the native application I/O interface was async, direct I/O (i.e. application initiated i/o that transferred directly between application memory and hardware) using an asynchronous protocol (i.e. EXCP/SVC0 .... "execute channel program"/supervisor call zero).

"supervisor services" then wrappered a bunch of library stuff around EXCP. These were supervisor I/O library services that would be dynamically loaded/specified at file-open time. supervisor services were complex ... not so much that any one service was complicated, but because there were so many different flavors.

one of the simplest was "move mode" get/put, where the supervisor library services handled all synchronizing issues and moved data between the program data buffer and internal i/o buffers. Then there was "locate mode" where pointers were passed back and forth between the application program and internal i/o buffers.

then you could get into read/write where the application program had control of WAIT/POST ECB synchronization services.

disk I/O could have a lot of additional flavors where the library services supported sequential and non-sequential access along with various forms of indexes.

In addition, library services supported both fixed-length and variable-length records. Instead of having implicit lengths (aka the existing C language conventions that are the root of a very large percentage of security/integrity exploits over the years), variable-length records had explicit lengths.

random refs:
http://www.garlic.com/~lynn/aadsm5.htm#asrn4
http://www.garlic.com/~lynn/aadsm5.htm#asrn1

in part because there was so much overhead at file-open time, a number of subsystem monitors sprang up ... which provided a restricted subset of services against files that had effectively been pre-opened. This predated the DBMS implementations, which effectively were a follow-on that added justifications like integrity and transaction semantics for implementing subsystem monitors.

A significant amount of business-critical commercial data processing continues to be deployed on some of these subsystem monitors that appeared in the late '60s or early '70s (like CICS and IMS).

random refs:
http://www.garlic.com/~lynn/99.html#71
http://www.garlic.com/~lynn/2001d.html#5

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Impact of Internet

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Impact of Internet
Newsgroups: comp.arch,comp.lang.java.advocacy,comp.object,comp.os.linux.advocacy,comp.os.ms-windows.nt.advocacy,comp.theory,misc.invest.stocks
Date: Sat, 14 Apr 2001 15:56:54 GMT
"2 + 2" <2-2@web.com> writes:
We have seen the rise of the internet, especially beginning with its web phase.

The tremendous web bubble, now collapsing at least partially, can be attributed to speculation, etc.

There is no question that, aside from a frenzy to build web sites, the computer industry as a whole ITSELF has converged on the web, making tremendous investments in web-related technologies.

Now is a time for sober second thoughts.

In particular, how should the value of the web be analyzed in terms of technology, societal impact, etc.?


random ref:
http://www.garlic.com/~lynn/aadsm5.htm#asrn2
http://www.garlic.com/~lynn/aadsm5.htm#asrn3
http://www.garlic.com/~lynn/internet.htm
http://www.garlic.com/~lynn/2001d.html#42
http://www.garlic.com/~lynn/2001b.html#50
http://www.garlic.com/~lynn/99.html#197
http://www.garlic.com/~lynn/2000e.html#0

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Very CISC Instuctions (Was: why the machine word size ...)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Very CISC Instuctions (Was: why the machine word size ...)
Newsgroups: alt.folklore.computers
Date: Mon, 16 Apr 2001 15:04:24 GMT
Stan Sieler writes:
sometimes called a "baby Burroughs" ... and a machine I'm still on!). Within a couple of months, after learning how the HP 3000 hardware security worked, I was able to design a successful attack on the Burroughs security (involving passing a too-short buffer to an intrinsic that I knew called RCCGETPRIVILEGED). I told RCC about it, and they patched the hole easily, but my point was that exposure to different ways of thinking can sometimes reveal holes that you might otherwise miss.

after having written an operating system, debugged an operating system, been 1st, 2nd, and 3rd level support for an operating system, and having developed a number of debugging and problem analysis tools .... there were a couple of frequent problems: dangling pointers and (frequently associated) asynchronous activities. i had some experience with string & buffer length problems, but not in significant numbers.

when doing analysis of unix & the C language in the '80s for a high availability product, the C language convention of implicit lengths was identified as possibly increasing buffer problems (& exploits) by a couple orders of magnitude (at least compared to what we were familiar with).

reference to assurance panel discussion i was on at Intel Developer's conference:
http://www.garlic.com/~lynn/aadsm5.htm#asrn4

random refs:
http://www.garlic.com/~lynn/ansiepay.htm#theory Security breach raises questions about Internet shopping
http://www.garlic.com/~lynn/99.html#44 Internet and/or ARPANET?
http://www.garlic.com/~lynn/99.html#219 Study says buffer overflow is most common security bug
http://www.garlic.com/~lynn/2000.html#25 Computer of the century
http://www.garlic.com/~lynn/2000.html#30 Computer of the century
http://www.garlic.com/~lynn/2000b.html#17 ooh, a real flamewar :)
http://www.garlic.com/~lynn/2001b.html#58 Checkpoint better than PIX or vice versa???
http://www.garlic.com/~lynn/2001c.html#32 How Commercial-Off-The-Shelf Systems make society vulnerable
http://www.garlic.com/~lynn/2001c.html#38 How Commercial-Off-The-Shelf Systems make society vulnerable
http://www.garlic.com/~lynn/2001c.html#66 KI-10 vs. IBM at Rutgers

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Pentium 4 Prefetch engine?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Pentium 4 Prefetch engine?
Newsgroups: comp.arch
Date: Mon, 16 Apr 2001 15:26:02 GMT
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
I know that the FreeBSD tuners out there run bucket loads of benchmark tests on every new change, and anything that gets in the way of disk IO performance is considered to be a serious flaw.

Doubtless. But, given their background, it is unlikely that they are aware of what was done before and why the basic approach needs changing to get better efficiencies. There is a VERY simple way to check:


Last time I checked, there were something like five data copies between application space thru things like the NFS/TCP protocol stack before hitting the wire (but I haven't looked at any current FreeBSD code).

For small buffer sizes, buffer copies can be lost in the rest of the protocol pathlength. For 8kbyte or larger buffer copies, the processor time for the data movement can start to dominate the pathlength. Some protocol stacks have been tuned to approach hardware thruput at the expense of dedicating the processor.

a simple check for high i/o efficiency with large amounts of data is 1) near 100 percent hardware transfer rates, 2) near zero processor utilization (i.e. leaving most of the processor available for actually executing application code that might deal with the data), and 3) application space code (either in the application or libraries) utilizing some serialization primitives.

somewhat at issue is that w/o some sort of buffer copy, there is either no concurrent application execution or there needs to be some application space synchronization code (i.e. application space code is either blocked during buffer transfers or has to utilize some sort of serialization primitives with multiple-buffering logic).

there are some hacks that can be done in this area manipulating virtual memory constructs .... application space still needs some amount of multiple-buffering logic ... but there are games that can be played with implicit buffer serialization by manipulating virtual memory (w/o needing the application space code to explicitly execute serialization operations).
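The copy-vs-no-copy distinction can be sketched in Python terms (an illustration of the general idea, not of the FreeBSD or mainframe code paths discussed above):

```python
# slicing bytes materializes a new copy (one more "data copy" on the
# path), while a memoryview aliases the same underlying buffer
payload = bytearray(b"packet-data-0123")

copied = bytes(payload[:6])       # a real data copy of the first 6 bytes
view = memoryview(payload)[:6]    # zero-copy: aliases payload's storage

payload[0] = ord("X")             # mutate the underlying buffer in place

copied_sees_change = copied[0] == ord("X")   # False: the copy is stale
view_sees_change = view[0] == ord("X")       # True: no copy was ever made
```

The stale copy is exactly the safe-but-expensive case: the application can keep working on its private copy while the buffer changes underneath. The aliasing view is the cheap case that forces the serialization logic described above.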

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)
Newsgroups: bit.listserv.ibm-main
Date: Mon, 16 Apr 2001 15:57:50 GMT
edgould@WORLDNET.ATT.NET (Edward Gould) writes:
Back in the 70's (yes 2314's and 3330's) I am pretty sure the number of entries in the VTOC made a significant difference, as the search time for a data-set on a volume that contained 1000's of small data-sets was substantial. That was (I think) one of the reasons why FDR's compaction program was such a huge success (there were other reasons to be sure).

slightly off topic ... PDS directory ... not VTOC.

I got to shoot a problem at a large retail chain in the early '80s .... all data processing for all regions, branches, and stores was done at hdqtrs, with multiple machines sharing a common library on a 3330 drive.

they were experiencing random slow downs across some or all the machines.

I came into a classroom that had foot-high printed output of MVS performance data completely covering a half dozen or so classroom tables.

cpu went up, cpu went down, some drive activity went up, some drive activity went down. After a couple hrs, I noticed a slight correlation between the periods identified as "slow" and a consistent utilization of 6 I/Os per second on a particular 3330 (out of maybe 50-60 drives).

Turns out that it was the common library drive; the library PDS had a three-cylinder directory. The "consistent" drive I/O rate of 6/sec was peak saturation ... 19 tracks, multi-track search, 3600RPM, 60 rotations per second; about 1/3 of a second of I/O for the search, plus a member read or two.

Nobody was looking for a totally saturated device with long queues when the peak i/o rate for the drive was 6.5/sec (aggregate, across all processors in the complex).
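the arithmetic behind the saturation figure can be worked out from the numbers above (the assumption that each member fetch counts as three I/Os -- the search op plus "a member read or two" -- is mine):

```python
# 3330 timing from the post: 19 tracks/cylinder, 3600 RPM (60 rev/sec),
# and a three-cylinder PDS directory
TRACKS_PER_CYL = 19
REV_PER_SEC = 60
DIR_CYLS = 3

# a multi-track search holds the drive (and string, controller, channel)
# for one revolution per track examined
one_cyl_search = TRACKS_PER_CYL / REV_PER_SEC     # ~0.32 sec, the "1/3 second"
avg_search = DIR_CYLS * one_cyl_search / 2        # ~0.48 sec for the average hit

# assumption: each member fetch = the search op plus two short member
# reads, i.e. ~3 counted I/Os per fetch
ios_per_fetch = 3
saturated_io_rate = ios_per_fetch / avg_search    # ~6.3 I/O per second

# so an apparently "idle" 6-6.5 I/Os per second was actually a device
# (and channel path) running at essentially 100 percent busy
```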

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)
Newsgroups: bit.listserv.ibm-main
Date: Mon, 16 Apr 2001 16:36:17 GMT
Anne & Lynn Wheeler writes:
I got to shoot a problem at a large retail chain in the early '80s .... all data processing for all regions, branches, stores, were done at hdqtrs with multiple machines sharing common library on 3330 drive.

Nobody was looking for totally saturated device with long queues when peak i/o rate for the drive was 6.5/sec (across all processors in the complex).

.... and nobody informed me beforehand about the common library until i started asking questions about the particular drive and why it seemed to have a consistent, uniform 6 i/os per second correlating with the periods of "slow-down" (i.e. the first couple passes thru the data didn't turn up anything .... so i had to ask them to specify what periods they were "subjectively" experiencing slow-down) ... while the rest of the time it had i/o rates in the 1-5 per second range.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

OT Re: A beautiful morning in AFM.

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT Re: A beautiful morning in AFM.
Newsgroups: alt.folklore.military
Date: Mon, 16 Apr 2001 16:50:05 GMT
jimlillie@aol.com (JimLillie) writes:
This probably was an April Fool at some sites. But at Boulder IBM it came around a day after a security lecture in which most of the password restrictions were explicitly given. We literally had a problem finding passwords. They were mostly dropped about a week later on the rumor grounds that they simplified password hacking by eliminating a large percent of the possible choices. Increased security = increased risk !

it is possible that somebody had it squirreled away and brought it out for the purpose.

there was also a write-up done on the frequency of changing passwords resulting in the post-it note scenarios .... more complex, less frequently changed passwords were more secure because of various human factors.

the rest of the write-up correlated password changing with changing combinations for combination locks and keys for keyed locks; i.e. security officers were required to change combinations in all combination locks for the site at least twice an hour ... and that keyed locks at the site had to be rekeyed and new keys issued twice a minute (desks, doors, cabinets, etc).

individuals had to always be present to receive the twice a minute new keying material and also had to later be able to account for all keying material received.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Pentium 4 Prefetch engine?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Pentium 4 Prefetch engine?
Newsgroups: comp.arch
Date: Mon, 16 Apr 2001 17:12:32 GMT
Anne & Lynn Wheeler writes:
a simple check for high i/o efficiencies with large amounts of data is 1) near 100 percent hardware transfer rates and 2) near zero processor utilization (i.e. which would leave most of the processor available for actually executing some application code that might deal with the data) and 3) application space code (either in application or libraries) utilizes some serialization primitives.

long ago, and far away ... i had done some product optimization for RFC1044 support so that an (aging) 370/4341 could drive a HYPERchannel connection at hardware thruput (in the tests, talking to a Cray at the other end) ... while using only very modest amounts of cpu (the 4341 only kept up with the Cray in the sense that they were both driving the link at hardware speeds; most of the cray processing was used for the application).

i thought it was pretty good for the 4341 since it was only about a 1mip processor.

random refs:
http://www.garlic.com/~lynn/2000.html#90
http://www.garlic.com/~lynn/2000d.html#0

so scale the hardware transport by a factor of 100 (say something like HIPPI or FCS) and the processor speed by a factor of a couple hundred; current CPU utilization would then be expected to be a couple percent for a trivial loop transmitting/receiving data.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)
Newsgroups: bit.listserv.ibm-main
Date: Mon, 16 Apr 2001 19:33:46 GMT
Anne & Lynn Wheeler writes:
three cylinder directory; the "consistent" drive I/O rate at 6/sec was peak saturation ... 19 tracks, multi-track search, 3600RPM, 60 rotations per second; about 1/3 of second I/O for search, plus a member read or two.

the egregious thing was that those 1/3-second i/o ops busied not only the drive but also the string, controller, and channel ... so while the search was in progress, a lot of other drives were also not accessible.

i got to work on a non-CKD solution for MVS ... but STL quoted something like a $26m cost to deploy ... it represented a significant system thruput improvement even if CKD drives were being used (but it would also work on fba-like devices). now, at least, there are indexed vtocs and pdse.

random refs:
http://www.garlic.com/~lynn/97.html#16 Why Mainframes?
http://www.garlic.com/~lynn/97.html#22 Pre S/360 IBM Operating Systems?
http://www.garlic.com/~lynn/97.html#28 IA64 Self Virtualizable?
http://www.garlic.com/~lynn/97.html#29 IA64 Self Virtualizable?
http://www.garlic.com/~lynn/98.html#21 Reviving the OS/360 thread (Questions about OS/360)
http://www.garlic.com/~lynn/99.html#74 Read if over 40 and have Mainframe background
http://www.garlic.com/~lynn/99.html#75 Read if over 40 and have Mainframe background
http://www.garlic.com/~lynn/2000.html#86 Ux's good points.
http://www.garlic.com/~lynn/2000b.html#71 "Database" term ok for plain files?
http://www.garlic.com/~lynn/2000d.html#42 360 CPU meters (was Re: Early IBM-PC sales proj..
http://www.garlic.com/~lynn/2000d.html#50 Navy orders supercomputer
http://www.garlic.com/~lynn/2000e.html#19 Is Al Gore The Father of the Internet?^
http://www.garlic.com/~lynn/2000e.html#22 Is a VAX a mainframe?
http://www.garlic.com/~lynn/2000f.html#18 OT?
http://www.garlic.com/~lynn/2000f.html#19 OT?
http://www.garlic.com/~lynn/2000g.html#51 > 512 byte disk blocks (was: 4M pages are a bad idea)
http://www.garlic.com/~lynn/2000g.html#52 > 512 byte disk blocks (was: 4M pages are a bad idea)
http://www.garlic.com/~lynn/2001.html#12 Small IBM shops
http://www.garlic.com/~lynn/2001.html#22 Disk caching and file systems. Disk history...people forget
http://www.garlic.com/~lynn/2001.html#54 FBA History Question (was: RE: What's the meaning of track overfl ow?)
http://www.garlic.com/~lynn/2001.html#55 FBA History Question (was: RE: What's the meaning of track overfl ow?)
http://www.garlic.com/~lynn/2001b.html#23 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
http://www.garlic.com/~lynn/2001c.html#17 database (or b-tree) page sizes
http://www.garlic.com/~lynn/2001d.html#5 Unix hard links
http://www.garlic.com/~lynn/2001d.html#48 VTOC position
http://www.garlic.com/~lynn/2001d.html#49 VTOC position

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Pentium 4 Prefetch engine?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Pentium 4 Prefetch engine?
Newsgroups: comp.arch
Date: Wed, 18 Apr 2001 02:47:48 GMT
Anne & Lynn Wheeler writes:
so scale the hardware transport by a factor of 100 (say something like HIPPI or FCS) and the processor speed by a factor of a couple hundred; current CPU utilization would then be expected to be a couple percent for trivial loop transmitting/receiving data.

ok, remember the 4341 was about a 1 mip processor.

so a benchmark on a 1ghz pentium, 1gbyte memory, redhat7/linux2.4, with a 36gbyte, 10k RPM scsi disk (160mbyte/sec scsi, 41mbyte-62mbyte/sec internal media speed).

i performed dd of a 1.3gbyte file to /dev/null; it took 64secs elapsed and 10secs total cpu

or about 21mbytes/sec elapsed (about 50% of the 41mbyte/sec internal drive transfer speed).

just for comparison, dd of the same file to another file took over 4mins elapsed (to be expected, since both files were on the same drive) and about 25secs cpu (indicating not a whole lot of overhead in writing to /dev/null).

now back to the 4341/rfc1044 implementation .... dd on the 1ghz pentium is getting about 40 times as many mbytes per cpu second as the 4341/rfc1044 implementation. Depending on which numbers you believe, the 1ghz pentium is somewhere between a 1500-2000 mip processor (compared to the 1mip 4341).

taking the ~2000mip rating, linux/dd is then executing 50 times as many instructions per mbyte transferred as the 4341/rfc1044 implementation; not 100 percent more, not ten times more, but fifty times as many instructions per mbyte transferred.
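the comparison can be redone as simple arithmetic from the numbers in the post (the 40x and ~2000mip figures are the post's; everything else follows from them):

```python
# dd benchmark numbers from the post
file_mb, cpu_sec = 1300, 10    # 1.3 gbyte to /dev/null, 10 sec total cpu
pentium_mips = 2000            # the ~2000mip rating
mips_4341 = 1                  # the 4341 was about a 1 mip processor

mb_per_cpu_sec = file_mb / cpu_sec               # 130 mbyte per cpu second

# the post observes dd moves ~40x the mbytes per cpu second of the
# 4341/rfc1044 case; with a ~2000x faster cpu, that works out to
# 2000/40 = 50x the instructions executed per mbyte transferred
ratio_mb_per_cpu_sec = 40
instr_per_mb_ratio = (pentium_mips / mips_4341) / ratio_mb_per_cpu_sec

print(mb_per_cpu_sec, instr_per_mb_ratio)        # -> 130.0 50.0
```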

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Pentium 4 Prefetch engine?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Pentium 4 Prefetch engine?
Newsgroups: comp.arch
Date: Wed, 18 Apr 2001 03:52:56 GMT
Anne & Lynn Wheeler writes:
so a benchmark on a 1ghz pentium, 1gbyte memory, redhat7/linux2.4, with a 36gbyte, 10k RPM scsi disk (160mbyte/sec scsi, 41mbyte-62mbyte/sec internal media speed).

so let's look at it slightly differently ...

the following is from something in the very early '80s, i.e. a comparison done 20 years ago, of the "mainframe" of 20 years ago (3081) with the "mainframe" of 30+ years ago (360/67) .... early posting

http://www.garlic.com/~lynn/93.html#31


system          3.1L    HPO             change
machine         360/67  3081            47* (mips)
pageable pages  105     7000            66*
users           80      320             4*
channels        6       24              4*
drums           12meg   72meg           6*
page I/O        150     600             4*
user I/O        100     300             3*
# disk arms     45      32              4*?perform.
bytes/arm       29meg   630meg          23*
avg. arm access 60mill  16mill          3.7*
transfer rate   .3meg   3meg            10*
total data      1.2gig  20.1gig         18*

so how would my little linux system compare to the "mainframe" of 30+ years ago?

  system          3.1L    Linux           change
  machine         360/67  1ghz(dual)      8000*
  pageable pages  105     1gbyte          2000*
  users           80      1
  channels        6       1
  drums           12meg   0
  page I/O        150/sec -
  user I/O        100/sec -
  # disk arms     45      1
  bytes/arm       29meg   36gbyte         1500*
  avg. arm access 60mill  4.5mill         13*
  transfer rate   .3meg   40-60mbyte      133-200*
  total data      1.2gig  36gbyte         30*

the '67 might sustain 100 "user" i/os per second for much of first shift, somewhere between .5mbyte and 1mbyte per second aggregate (reads and writes). In terms of random operations, the fast scsi disk is only about 13 times faster ... to achieve 130 times faster it has to do an awful large number of contiguous block transfers.

In the best case, assuming that the Linux system is keeping both processors 100 percent busy, the configuration is only capable of moving about 1/60th as many disk mbytes per mip executed (the optimal case with contiguous transfer); which might drop to 1/600th as many disk mbytes moved per mip executed if there is much arm movement.

aka, assuming the same number of bytes moved per arm motion: if the arm is 13 times faster and the processor is 8000 times faster, the relative speed of the disk subsystem (compared to the processor speed) has decreased by a factor of 600 times. however, note that the '67 configuration with 45 disk arms ... even being 13 times slower ... could still do an aggregate of three times as many arm movements as a single current arm. Taking the disk subsystem as a whole (1 arm versus 45 arms that are 13 times slower), the actual relative disk subsystem speed (compared to the processor speed) has decreased by a factor of nearly 1800 times.

In theory then, if both pentium processors are operating at 100% utilization ... and the single arm is operating at 100% utilization ... compared to the '67 system operating at 100% cpu utilization and all 45 arms operating at 100% utilization, the current number of disk operations per mip executed has decreased by a factor of 1800 times.

So if you take the aggregate number of mips when executing at 100% utilization (for both systems) and divide it by the total number of disk operations when the respective disk subsystems are operating at 100% utilization ... then mips/disk-op will have increased by 1800 times (comparing the linux/dual-1ghz system to the '67 system).
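the chain of factors above can be reproduced as arithmetic (the post rounds ~615 to ~600 and ~3.5 to 3, giving its ~1800x figure; the unrounded product is ~2100):

```python
# factors from the comparison table / discussion above
cpu_factor = 8000          # 360/67 -> dual 1ghz pentium, mips
arm_factor = 13            # random-access speedup of a single modern arm
old_arms, new_arms = 45, 1

# one arm vs one arm: disk speed relative to cpu speed has declined
per_arm_decline = cpu_factor / arm_factor        # ~615, post rounds to ~600

# 45 old arms, each 13x slower, still aggregate ~3x the arm movements
# of a single new arm
aggregate_arm_ratio = old_arms / (new_arms * arm_factor)   # ~3.5, post rounds to 3

# whole-subsystem relative decline; the post's rounded arithmetic is
# 600 * 3 = ~1800
subsystem_decline = per_arm_decline * aggregate_arm_ratio  # ~2130 unrounded
```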

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Pentium 4 Prefetch engine?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Pentium 4 Prefetch engine?
Newsgroups: comp.arch
Date: Wed, 18 Apr 2001 15:50:23 GMT
dsiebert@excisethis.khamsin.net (Douglas Siebert) writes:
Whoa, you can give the 1GHz Pentium III a 2000 mip rating if you want, but we all know it is far, far slower than that when using data that is all outside the cache. Ideally we'd run SPEC2000 on the 4341 and compare performance for memory bound jobs, but that's probably a bit impractical... Maybe we could compare Linpack figures (assuming the 4341 has FP hardware?) for array sizes that will be well outside cache on both machines (100x100 for the 4341, 1000x1000 for the Pentium III) as a better ballpark figure than MIPS (which in this case definitely lives up to the moniker Meaningless Information about Processor Speed)

here are rain/rain4 numbers for 4341, 6600, 168-3, 91

http://www.garlic.com/~lynn/2000d.html#0

I did find dhrystones for a pentium pro 200 that rated it at 453mips. I have some numbers for one of my own (non-FP) applications that has twice the thruput on a pii-400 as on the p-pro (900mips?) and 2.5 times the thruput on a 1ghz processor as on my pii-400. that puts the 1ghz processor at 5 times the thruput of the p-pro for a specific application that I have used over the past six years. 5*450mips = 2250 mips (for whatever reason, this particular application's thruput has scaled with the pentium clock rate).

The specific application is complex indexing, bit manipulation, pointer & storage management (and I keep a detailed history of operations performed and cpu used; p-pro to 1ghz isn't a direct comparison since the p-pro has a bad scsi disk at the moment and the implementation has changed between the time i last ran it on the p-pro and the current time). I use it for the rfc standards process; it generates the rfc ietf index (as well as various glossaries):

http://www.garlic.com/~lynn/rfcietff.htm
http://www.garlic.com/~lynn/

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

I/O contention

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: I/O contention
Newsgroups: bit.listserv.ibm-main
Date: Wed, 18 Apr 2001 17:13:41 GMT
smetz@NSF.GOV (Metz, Seymour) writes:
Disk. There was one disk drive per box. 8 exposures. You may be thinking of the maximum number of devices on the controller, which I believe was 2.

There have been other multiple exposure devices between the fixed-head disks and Shark, e.g., 3880-11.


the 3880-11 was controller support for a 4k page-record cache ... it still had to have 3380s backing it. the 3880-13 was full-track cache. the 3880-21 was an enhanced 3880-11 with larger cache and improved microcode ... as the 3880-23 was of the 3880-13.

i had once tried to get multiple exposure support for fixed head area on 3350s ... but it didn't make it out the door.

random refs:
http://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
http://www.garlic.com/~lynn/95.html#8 3330 Disk Drives
http://www.garlic.com/~lynn/95.html#12 slot chaining
http://www.garlic.com/~lynn/99.html#6 3330 Disk Drives
http://www.garlic.com/~lynn/99.html#8 IBM S/360
http://www.garlic.com/~lynn/99.html#104 Fixed Head Drive (Was: Re:Power distribution (Was: Re: A primeval C compiler)
http://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
http://www.garlic.com/~lynn/2000d.html#13 4341 was "Is a VAX a mainframe?"
http://www.garlic.com/~lynn/2000d.html#52 IBM 650 (was: Re: IBM--old computer manuals)
http://www.garlic.com/~lynn/2000d.html#53 IBM 650 (was: Re: IBM--old computer manuals)
http://www.garlic.com/~lynn/2000g.html#42 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
http://www.garlic.com/~lynn/2000g.html#45 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
http://www.garlic.com/~lynn/2001.html#18 Disk caching and file systems. Disk history...people forget
http://www.garlic.com/~lynn/2001b.html#61 Disks size growing while disk count shrinking = bad performance
http://www.garlic.com/~lynn/2001c.html#17 database (or b-tree) page sizes
http://www.garlic.com/~lynn/2001d.html#24 April Fools Day
http://www.garlic.com/~lynn/2001d.html#49 VTOC position

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Block oriented I/O over IP

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Block oriented I/O over IP
Newsgroups: comp.arch
Date: Wed, 18 Apr 2001 18:12:56 GMT
"Stephen Fuld" writes:
The typical network guy, when thinking about performance thinks bandwidth. The typical I/O guy (at least the older ones) typically talk about latency first and bandwidth a distant second. When the simplest programs (say airline reservation transactions) require tens of I/Os and many programs require orders of magnetude more, the latency adds up. Protocols designed for I/O care a lot about latency. Network protocols seem not to. Physical SCSI, whether over parallel wires or encapsulated over Fibre Channel typically provides zero copies in the CPU (Unless messed up by a poor file system) and assymptotic to zero CPU instructions per packet (after a start up to send the command). I have heard Intel people talking about

the best way is to think thruput. latency is aggravated by protocols that serialize ... regardless of bandwidth. 9333/SSA started doing asynchronous SCSI commands over serial copper in the late '80s. For multiple concurrent activity, SSA significantly outperformed vanilla SCSI even with all other factors being essentially the same (same bandwidth, and drives that were essentially identical except for the higher level protocol). The degree of improvement increased as the number of drives and the number of concurrent operations increased.

Various people working on TCP have worried about window size issues to help mask end-to-end latency (akin to multiple request queueing).
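one way to see the window-size/latency point is the bandwidth-delay product (illustrative numbers, not from the post):

```python
# bandwidth-delay product: the bytes a sender must keep outstanding
# (un-acked) to keep the pipe full despite end-to-end latency
def window_bytes(bits_per_sec, rtt_sec):
    return bits_per_sec * rtt_sec / 8

# e.g. a 100 mbit/sec link with a 50 ms round trip needs a ~625 kbyte window
w = window_bytes(100e6, 0.050)

# conversely, a fixed 64 kbyte window caps achievable thruput at
# window/rtt, regardless of how fat the link is -- serialization on the
# protocol, not the bandwidth, becomes the limit
capped_bits_per_sec = 65536 * 8 / 0.050      # ~10.5 mbit/sec
```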

buffer copies have always been somewhat of an issue .... but as the relative discrepancy between cache performance and memory performance has increased ... the impact of multiple large buffer copies can dominate processor utilization. Of course, poor coding implementations can also aggravate system thruput.

airline res systems can have serialization and latency thruput issues at several levels. several years ago, i had the opportunity to redesign and rewrite "routes" in one of the res systems (it represented about 25% of total overall activity). one of the things i got to do was collapse three separate human interactions into a single transaction. The elapsed time for the resulting transaction was about the same as the most trivial of the original three; with the agent only having to do a single transaction rather than three separate ones, the result was much more than a three-times improvement.

random refs:
http://www.garlic.com/~lynn/93.html#28 Log Structured filesystems -- think twice
http://www.garlic.com/~lynn/94.html#22 CP spooling & programming technology
http://www.garlic.com/~lynn/94.html#50 Rethinking Virtual Memory
http://www.garlic.com/~lynn/95.html#13 SSA
http://www.garlic.com/~lynn/96.html#14 mainframe tcp/ip
http://www.garlic.com/~lynn/96.html#15 tcp/ip
http://www.garlic.com/~lynn/96.html#16 middle layer
http://www.garlic.com/~lynn/96.html#17 middle layer
http://www.garlic.com/~lynn/96.html#29 Mainframes & Unix
http://www.garlic.com/~lynn/96.html#31 Mainframes & Unix
http://www.garlic.com/~lynn/98.html#34 ... cics ... from posting from another list
http://www.garlic.com/~lynn/98.html#49 Edsger Dijkstra: the blackest week of his professional life
http://www.garlic.com/~lynn/98.html#50 Edsger Dijkstra: the blackest week of his professional life
http://www.garlic.com/~lynn/98.html#59 Ok Computer
http://www.garlic.com/~lynn/99.html#1 Early tcp development?
http://www.garlic.com/~lynn/99.html#36 why is there an "@" key?
http://www.garlic.com/~lynn/99.html#40 [netz] History and vision for the future of Internet - Public Question
http://www.garlic.com/~lynn/99.html#70 Series/1 as NCP (was: Re: System/1 ?)
http://www.garlic.com/~lynn/99.html#123 Speaking of USB ( was Re: ASR 33 Typing Element)
http://www.garlic.com/~lynn/99.html#136a checks (was S/390 on PowerPC?)
http://www.garlic.com/~lynn/99.html#153 Uptime (was Re: Q: S/390 on PowerPC?)
http://www.garlic.com/~lynn/99.html#164 Uptime (was Re: Q: S/390 on PowerPC?)
http://www.garlic.com/~lynn/99.html#214 Ask about Certification-less Public Key
http://www.garlic.com/~lynn/2000.html#61 64 bit X86 ugliness (Re: Williamette trace cache (Re: First view of Willamette))
http://www.garlic.com/~lynn/2000.html#90 Ux's good points.
http://www.garlic.com/~lynn/2000.html#93 Predictions and reality: the I/O Bottleneck
http://www.garlic.com/~lynn/2000b.html#11 "Mainframe" Usage
http://www.garlic.com/~lynn/2000c.html#13 Gif images: Database or filesystem?
http://www.garlic.com/~lynn/2000c.html#23 optimal cpu : mem <-> 9:2 ?
http://www.garlic.com/~lynn/2000c.html#56 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2000c.html#59 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2000d.html#80 When the Internet went private
http://www.garlic.com/~lynn/2000e.html#17 X.25 lost out to the Internet - Why?
http://www.garlic.com/~lynn/2000e.html#19 Is Al Gore The Father of the Internet?^
http://www.garlic.com/~lynn/2000e.html#39 I'll Be! Al Gore DID Invent the Internet After All ! NOT
http://www.garlic.com/~lynn/2000e.html#45 IBM's Workplace OS (Was: .. Pink)
http://www.garlic.com/~lynn/2000f.html#10 Optimal replacement Algorithm
http://www.garlic.com/~lynn/2000f.html#18 OT?
http://www.garlic.com/~lynn/2000f.html#20 Competitors to SABRE?
http://www.garlic.com/~lynn/2000f.html#23 Why trust root CAs ?
http://www.garlic.com/~lynn/2000f.html#30 OT?
http://www.garlic.com/~lynn/2000f.html#50 Al Gore and the Internet (Part 2 of 2)
http://www.garlic.com/~lynn/2000f.html#51 Al Gore and the Internet (Part 2 of 2)
http://www.garlic.com/~lynn/2001.html#12 Small IBM shops
http://www.garlic.com/~lynn/2001.html#18 Disk caching and file systems. Disk history...people forget
http://www.garlic.com/~lynn/2001.html#46 Small IBM shops
http://www.garlic.com/~lynn/2001.html#73 how old are you guys
http://www.garlic.com/~lynn/2001b.html#14 IBM's announcement on RVAs
http://www.garlic.com/~lynn/2001b.html#36 [OT] Currency controls (was: First OS?)
http://www.garlic.com/~lynn/2001b.html#38 Why SMP at all anymore?
http://www.garlic.com/~lynn/2001b.html#61 Disks size growing while disk count shrinking = bad performance
http://www.garlic.com/~lynn/2001b.html#85 what makes a cpu fast
http://www.garlic.com/~lynn/2001c.html#16 database (or b-tree) page sizes
http://www.garlic.com/~lynn/2001c.html#30 PKI and Non-repudiation practicalities
http://www.garlic.com/~lynn/2001c.html#74 database (or b-tree) page sizes
http://www.garlic.com/~lynn/2001c.html#79 Q: ANSI X9.68 certificate format standard
http://www.garlic.com/~lynn/2001d.html#63 Pentium 4 Prefetch engine?
http://www.garlic.com/~lynn/2001d.html#65 Pentium 4 Prefetch engine?

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Pentium 4 Prefetch engine?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Pentium 4 Prefetch engine?
Newsgroups: comp.arch
Date: Wed, 18 Apr 2001 23:22:11 GMT
"Bill Todd" writes:
Since the context of the discussion was industrial-strength computing, the above observation seems to have limited relevance. And the immediate context was the alleged 'popularity' (or lack thereof), within the realm of presumably serious computer users, of a simplification move IBM made a long time ago, so the issue of popularity (among people who take their computing seriously) as a measure of worth was relevant.

note that the industrial strength computing platforms have tended to grow up from a batch orientation ... where the operating system services paradigm was that it wasn't interfacing to a human but to an application. an outgrowth of such a paradigm was an extensive set of traps, facilities and interfaces for applications to automagically handle everything (rather than assuming there was a human present to handle situations).

while these may not have been popular platforms from an end-user standpoint (the paradigm that the operating system interacts with an application rather than with a human) ... they are very popular for industrial strength applications like payroll.

A couple years ago, one of the big financial infrastructures commented that two of the things they attributed six preceding years of 100% availability to were:
• ims hot-standby (aka clustering)
• automated operator


batch systems tend to still have a (relatively) small number of interactions involving a (human) operator ... who could still make mistakes (the opportunity for mistakes is possibly orders of magnitude less than in an operating system evolving from a user-interaction paradigm, but still non-zero). automated operator was a methodology for trapping any remaining interactions that would involve humans and implementing programmatic solutions.

random refs:
http://www.garlic.com/~lynn/94.html#2 Schedulers
http://www.garlic.com/~lynn/98.html#18 Reviving the OS/360 thread (Questions about OS/360)
http://www.garlic.com/~lynn/98.html#35a Drive letters
http://www.garlic.com/~lynn/98.html#37 What is MVS/ESA?
http://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP?
http://www.garlic.com/~lynn/98.html#51 Mainframes suck? (was Re: Possibly OT: Disney Computing)
http://www.garlic.com/~lynn/99.html#71 High Availabilty on S/390
http://www.garlic.com/~lynn/99.html#128 Examples of non-relational databases
http://www.garlic.com/~lynn/99.html#136a checks (was S/390 on PowerPC?)
http://www.garlic.com/~lynn/2000.html#13 Computer of the century
http://www.garlic.com/~lynn/2000c.html#45 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2000c.html#47 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2000f.html#12 Amdahl Exits Mainframe Market
http://www.garlic.com/~lynn/2000f.html#30 OT?
http://www.garlic.com/~lynn/2000f.html#54 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
http://www.garlic.com/~lynn/2000f.html#58 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
http://www.garlic.com/~lynn/2001.html#43 Life as a programmer--1960, 1965?
http://www.garlic.com/~lynn/2001c.html#13 LINUS for S/390
http://www.garlic.com/~lynn/2001c.html#69 Wheeler and Wheeler
http://www.garlic.com/~lynn/subtopic.html#hacmp Cluster, High Availability and/or Loosely-Coupled
http://www.garlic.com/~lynn/subtopic.html#disk Disk Engineering

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Pentium 4 Prefetch engine?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Pentium 4 Prefetch engine?
Newsgroups: comp.arch
Date: Thu, 19 Apr 2001 15:41:04 GMT
"Bill Todd" writes:
Unix definitely has deficiencies in many areas - security, performance of its on-disk file structures, batch-oriented operation (as you point out), and more that (once again) it's getting too late to bother to try to gather and enumerate. But in the specific area of its management of file data once it obtains it from disk, it really does strike a good balance between efficiency and API simplicity - and I suspect that's one major reason for its popularity (at least I know that many people recoil in horror when they encounter something like VMS's RMS access-methods interface, and even if they instead use the VMS C RTL they need to explore optimizations related to configuring process-based buffers that simply aren't an issue with the Unix approach to file management).

well, my wife and I worked both on the big iron and on trying to deliver industrial strength computing on open platforms.

random refs:
http://www.garlic.com/~lynn/99.html#71

my wife did a spell in pok in charge of loosely-coupled architecture and originated peer-coupled shared data ... which was the basis of a number of things, including ims hot standby (although i kid her that while she was writing documents, i was helping deploy what was considered the largest single-system infrastructure anywhere up until that time) as well as doing some "bullet-proof" i/o subsystem for the guys over in the disk engineering lab.

random ref:
http://www.garlic.com/~lynn/97.html#14
http://www.garlic.com/~lynn/99.html#31

in the late '80s when we were running the skunk works responsible for ha/cmp, we got a lot of push back from pok, rochester, the vms crowd and some number of others (even some very prominent in the current "open" cluster genre) that it was not practical to improve reliability and availability for "open" platform systems.

random refs
http://www.garlic.com/~lynn/95.html#13
http://www.garlic.com/~lynn/2001c.html#69

although we did identify some issues .... like some C language library conventions and the vast differences between operating systems designed for interactive/end-users and operating systems designed for batch/applications (in part having worked on automated operator applications in the early to mid-70s).

can even claim the simplicity & ease of use of some of the C language library conventions (namely implicit lengths) has been one of its great deficiencies.

random refs
http://www.garlic.com/~lynn/aadsm5.htm#asrn4
http://www.garlic.com/~lynn/aadsm5.htm#asrn1

One of the things we noted in the mid-90s working on various infrastructure deployments was that while up-front costs for deploying a web-server in a closet were relatively low ... the scale-up costs increased much faster than for some of the industrial strength platforms .... and total costs definitely crossed over for the top-tier servers that represented possibly 70-80% of web traffic. i.e. server and client platforms don't have to be symmetrical/homogeneous, and many of the platforms that originated out of an "interactive" orientation also had significant human element care & feeding scaling issues (not just reliability and availability but also significant cost issues trying to scale up).

random ref:
http://www.garlic.com/~lynn/aadsm5.htm#asrn3

long ago and far away ... I claimed that the significant market penetration of unix was its relatively low cost for manufacturers entering the processor market. At least by 1980, the cost of computer hardware development had come down so significantly that it was starting to allow a number of hardware vendors to enter the mini & workstation market. However, the cost of developing a proprietary operating system remained prohibitively expensive. The demonstrated ease of porting UNIX to a new platform made it an attractive cost alternative for such enterprises (compared to developing a proprietary operating system from scratch) i.e. delivering an operating system for the platform didn't cost more than the whole rest of the platform delivery effort combined.

random ref:
http://www.garlic.com/~lynn/99.html#222
moved to
http://vm.marist.edu/~piper/party/jph-12.html#wheeler

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Pentium 4 Prefetch engine?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Pentium 4 Prefetch engine?
Newsgroups: comp.arch
Date: Thu, 19 Apr 2001 15:57:43 GMT
dsiebert@excisethis.khamsin.net (Douglas Siebert) writes:
That's why I was claiming that comparing the instructions per megabyte of I/O on an old mainframe (where memory was closer, measured in CPU cycles, than L1 cache is in today's processors) versus a current Pentium and claiming a value of 2000 mips based on 1GHz 2 IPC average is silly. If you were comparing something that takes place entirely within cache on a modern CPU, then I'd agree that the 2000 mips figure was justified for that case.

well, many vintage operating systems ran in memory configurations the size of today's caches.

i claimed that I had an application that scaled fairly linearly from p-pro 200 to 1ghz. It doesn't completely reside in cache, but it does have a relatively high cache hit ratio and there is no floating point ... which is representative of a large amount of data processing, file activity.

so for the hypothetical 1ghz 2000 mips number ... divide all the numbers by some value ... 2?, 4?, 5? 10? we may actually be in violent agreement ... in part because processor thruput has increased so much faster than i/o thruput has (a trend that has been going on since at least the late '60s) ... that the nature of what is done on the processors has had to change (human nature that attempts to fill a vacuum?)

aka ... if running an application that had the mips/mbyte-io ratios of the '60s ... the processors would only have trivial cpu utilization ... depending on the mip rate value used that might be 5 percent or .05 percent (two orders of magnitude difference but all still trivial).

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Rational basis for password policy?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Rational basis for password policy?
Newsgroups: comp.security.misc
Date: Thu, 19 Apr 2001 16:05:31 GMT
Gunther Schadow writes:
This doesn't answer my problem. I need to find a rational foundation for password guidelines (length, forbidden sequences, change frequency, etc.). I am beginning to suspect that there is no such rationale and all these guidelines are simply like grandma's cookbook recipes. Pretty weak basis for causing so much grief to our users. Anyone feel challenged? Please?

possibly OT refs:
http://www.garlic.com/~lynn/2001d.html#52
http://www.garlic.com/~lynn/2001d.html#51
http://www.garlic.com/~lynn/2001d.html#62

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Pentium 4 Prefetch engine?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Pentium 4 Prefetch engine?
Newsgroups: comp.arch
Date: Thu, 19 Apr 2001 16:22:24 GMT
Anne & Lynn Wheeler writes:
i claimed that I had an application that scaled fairly linearly from p-pro 200 to 1ghz. It doesn't completely reside in cache, but it does have a relatively high cache hit ratio and there is no floating point ... which is representative of a large amount of data processing, file activity.

a trivial example of thruput and cache effects (all integer)

in the description of the following "routes" for a res system recently posted to this ng in a different thread

http://www.garlic.com/~lynn/2001d.html#69

i used a fairly large in-memory data structure ... and I got a factor of 3 times difference in total cpu utilization by changing the data structure from an extremely straight-forward design to one that was tuned for storage/cache access patterns (things were still larger than cache ... but cache misses dropped significantly).

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

April Fools Day

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: April Fools Day
Newsgroups: alt.folklore.computers
Date: Thu, 19 Apr 2001 17:18:15 GMT
jchausler writes:
This reminds me of the old story about the guy crossing the border every day with a wheelbarrow full of sand. Every day the customs officers would carefully search through the sand to make sure he wasn't trying to sneak out anything hidden underneath. Of course, what he was smuggling was wheelbarrows....... (Of course, no one asks the question why anyone would want to smuggle wheelbarrows..........)

there is a tale from some toronto data center of somebody walking out with a (removable) mainframe disk pack (when removed they were held in a clear plastic container). the guard at the door asked him for a pass and the person held the pack up so the guard could look at it edgewise in the light and said "its ok, see its empty" (aka there was nothing between the platters).

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Changing IP addresses, was: Carrying authentication information between Web applications

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Changing IP addresses, was: Carrying authentication information  between Web applications
Newsgroups: comp.security.misc
Date: Thu, 19 Apr 2001 17:27:25 GMT
Lassi Hippeläinen writes:
DHCP is often used to configure a host, but then the IP address should remain the same while the dial-up session lasts. If the DHCP lease expires in the middle of a session, something has been configured wrong.
... a related ARP timeout question from a comp.security.firewalls thread:

http://www.garlic.com/~lynn/2001c.html

everything is relative ... if an ISP kicks people off after they've been on for a couple hrs and/or drops the line if inactive for more than 10 minutes ... then the avg connect time would likely be shorter than the IP address time-out interval.

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

Pentium 4 Prefetch engine?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Pentium 4 Prefetch engine?
Newsgroups: comp.arch
Date: Fri, 20 Apr 2001 16:03:21 GMT
Konrad Schwarz writes:
Historical note: accessing files via memory mapping was one of the central architectural design points in Multics (I don't have the reference handy, but it's Joel somebody's dissertation at MIT (now a professor there), Traffic Control in so and so).

I don't have a reference at all for the following, but I believe Thompson & Ritchie explicitly decided to do the open/seek/read/write/close model after Bell Labs withdrew from the Multics project (apart from the fact that a 16-bit address space is kind of small).


i had also done transparent memory-mapping for CMS when I was at 545 tech sq. in the early '70s. This was as much from TSS as it was Multics. The issue was that CMS had a basic I/O buffer paradigm, so some of the memory mapping was similar to SGI O_Direct ... i.e. (page) alignment restrictions (when the i/o address was page aligned and at least a page in length ... it would handle dangling page fragments with an intermediate buffer and move).

The issue with TSS having everything memory-mapped was the relatively large size of data compared to real storage sizes ... and its poor handling of sequential access patterns ... i.e. TSS would trivially page thrash with sequential access to memory-mapped locations. The CMS implementation could do full memory mapping with somewhat better page thrashing control ... but could also do large-buffer page-mapped sequences with a lot of performance advantage over traditional I/O. The trade-off with the explicit read/write paradigm in the application was the better explicit hints regarding things like sequential and/or "weak" access patterns (i.e. when read/write referred to a previously used virtual address it was easily understood that the application had moved out of the "previous" data and into the "new" data).

I got it installed on possibly several hundred "internal" production machines (much larger than the total Multics install base) but it didn't actually ship to customers except on something called the XT/370 & AT/370 platform.

random refs to some of that internal install base:
http://www.garlic.com/~lynn/99.html#112
http://www.garlic.com/~lynn/99.html#126

The general TSS install base (non-internal & non-AT&T) approached the size of the total Multics install base ... but the details are much less well known, in part because the other operating systems on those platforms so dominated the total industry. However, AT&T did do a "unix" port to TSS (i.e. unix running on top of TSS) and just the AT&T install base was probably larger than the total Multics install base.

random refs:
http://www.garlic.com/~lynn/2000.html#1
http://www.garlic.com/~lynn/2000.html#64

--
Anne & Lynn Wheeler | lynn@garlic.com - http://www.garlic.com/~lynn/

