List of Archived Posts

2004 Newsgroup Postings (10/04 - 10/18)

Specifying all biz rules in relational data
Tera
couple recent NIST drafts
Specifying all biz rules in relational data
REVIEW: "Biometrics for Network Security", Paul Reid
Tera
a history question
Whatever happened to IBM's VM PC software?
Whatever happened to IBM's VM PC software?
REVIEW: "Biometrics for Network Security", Paul Reid
Whatever happened to IBM's VM PC software?
Whatever happened to IBM's VM PC software?
How can I act as a Certificate Authority (CA) with openssl ??
Whatever happened to IBM's VM PC software?
computer industry scenairo before the invention of the PC?
computer industry scenairo before the invention of the PC?
computer industry scenairo before the invention of the PC?
mainframe and microprocessor
Whatever happened to IBM's VM PC software?
computer industry scenairo before the invention of the PC?
Whatever happened to IBM's VM PC software?
computer industry scenairo before the invention of the PC?
Lock-free algorithms
Help! I'm trying to understand PKI - especially CA's role
IBM Spells Out Mainframe Strategy
Shipwrecks
Shipwrecks
Shipwrecks
Shipwrecks
Shipwrecks
Shipwrecks
Shipwrecks
MS Corporate. Memory Loss is Corporrate Policy ?
Shipwrecks
tracking 64bit storage
Shipwrecks
Multi-processor timing issue
Multi-processor timing issue
Multi-processor timing issue
Multi-processor timing issue
Result of STCK instruction - GMT or local?
EAL5
Auditors and systems programmers
Multi-processor timing issue
Multi-processor timing issue
Multi-processor timing issue
Shipwrecks
IBM Open Sources Object Rexx
Shipwrecks
EAL5
EAL5
stop worrying about it offshoring - it's doing fine
merged security taxonomy and glossary updated
4GHz is the glass ceiling?
Shipwrecks
ZeroWindow segment
RISCs too close to hardware?
Shipwrecks
Shipwrecks
RISCs too close to hardware?
Shipwrecks
RISCs too close to hardware?
RISCs too close to hardware?
RISCs too close to hardware?

Specifying all biz rules in relational data

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Specifying all biz rules in relational data
Newsgroups: comp.databases.theory,alt.folklore.computers
Date: Mon, 04 Oct 2004 12:13:24 -0600
for even more rdbms drift, as part of the ha/cmp work
https://www.garlic.com/~lynn/subtopic.html#hacmp
we did a distributed lock manager. minor distributed/cluster database scale-up reference
https://www.garlic.com/~lynn/95.html#13

old-time mainframe clusters were called "loosely-coupled" (contrast to "tightly-coupled" or smp), misc references
https://www.garlic.com/~lynn/submain.html#shareddata

somewhat related, there has also been a smp locking/serialization thread running recently in comp.arch
https://www.garlic.com/~lynn/2004l.html#55 Access to AMD 64 bit developer centre
https://www.garlic.com/~lynn/2004l.html#57 Lock-free algorithms
https://www.garlic.com/~lynn/2004l.html#59 Lock-free algorithms
https://www.garlic.com/~lynn/2004l.html#66 Lock-free algorithms
https://www.garlic.com/~lynn/2004l.html#67 Lock-free algorithms
https://www.garlic.com/~lynn/2004l.html#68 Lock-free algorithms
https://www.garlic.com/~lynn/2004l.html#77 Tera

random past dlm posts:
https://www.garlic.com/~lynn/2000.html#64 distributed locking patents
https://www.garlic.com/~lynn/2001.html#40 Disk drive behavior
https://www.garlic.com/~lynn/2001c.html#66 KI-10 vs. IBM at Rutgers
https://www.garlic.com/~lynn/2001e.html#2 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001e.html#4 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001f.html#22 Early AIX including AIX/370
https://www.garlic.com/~lynn/2001i.html#21 3745 and SNI
https://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
https://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
https://www.garlic.com/~lynn/2001j.html#47 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#5 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#18 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001l.html#5 mainframe question
https://www.garlic.com/~lynn/2001l.html#8 mainframe question
https://www.garlic.com/~lynn/2001l.html#17 mainframe question
https://www.garlic.com/~lynn/2001n.html#23 Alpha vs. Itanic: facts vs. FUD
https://www.garlic.com/~lynn/2002b.html#36 windows XP and HAL: The CP/M way still works in 2002
https://www.garlic.com/~lynn/2002b.html#37 Poor Man's clustering idea
https://www.garlic.com/~lynn/2002d.html#31 2 questions: diag 68 and calling convention
https://www.garlic.com/~lynn/2002e.html#67 Blade architectures
https://www.garlic.com/~lynn/2002f.html#1 Blade architectures
https://www.garlic.com/~lynn/2002f.html#7 Blade architectures
https://www.garlic.com/~lynn/2002f.html#17 Blade architectures
https://www.garlic.com/~lynn/2002k.html#8 Avoiding JCL Space Abends
https://www.garlic.com/~lynn/2002m.html#21 Original K & R C Compilers
https://www.garlic.com/~lynn/2002n.html#27 why does wait state exist?
https://www.garlic.com/~lynn/2002o.html#14 Home mainframes
https://www.garlic.com/~lynn/2003c.html#53 HASP assembly: What the heck is an MVT ABEND 422?
https://www.garlic.com/~lynn/2003d.html#2 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003d.html#8 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003d.html#54 Filesystems
https://www.garlic.com/~lynn/2003h.html#35 UNIX on LINUX on VM/ESA or z/VM
https://www.garlic.com/~lynn/2003i.html#70 A few Z990 Gee-Wiz stats
https://www.garlic.com/~lynn/2003k.html#10 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003k.html#17 Dealing with complexity
https://www.garlic.com/~lynn/2004c.html#53 defination of terms: "Application Server" vs. "Transaction Server"
https://www.garlic.com/~lynn/2004d.html#72 ibm mainframe or unix
https://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2004i.html#2 New Method for Authenticated Public Key Exchange without Digital Certificates

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Tera

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Tera
Newsgroups: comp.arch
Date: Mon, 04 Oct 2004 17:01:17 -0600
"Stephen Fuld" writes:
Well, certainly the poorer ones did. However, I worked on an OS - we didn't call them kernels then :-) - in the early 1970s which was smp capable and had no global kernel spin lock, and had been that way since at least the late 1960s. The relevant data structures were protected by locks, but it was entirely possible for different parts of the OS to be executing simultaneously on multiple processors. For example, one processor making memory allocation decisions, another doing I/O and a third deciding which user task to dispatch next. It just required careful management of the locks. And most of the locks weren't really "spin locks", but used a spin lock to control access to a software-controlled lock queuing methodology. That is, you held the hardware lock only long enough to check/get the software lock, and if you blocked on the software lock, you got queued. It worked quite well up to a handful of processors, which was the limit of the hardware technology at the time.

charlie had done fine-grain locking for cp/67 kernel and that is what prompted his invention of compare&swap (mnemonic chosen because it is charlie's initials) that was included in 370 machines.

360/67 just had TS ... and so there were a lot of spin-lock/release sequences even to do simple things like increment counters or do simple list maintenance.
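
as an aside, a rough modern c analog of the difference (nothing to do with the actual 360 assembler; c11 atomic_flag stands in for TS and atomic compare-exchange for compare&swap): with only a test-and-set style primitive, even a shared counter bump needs a spin lock around it, while compare&swap lets the update itself be retried atomically.

    /* test-and-set style: spin lock around even the simplest update */
    #include <stdatomic.h>

    static atomic_flag counter_lock = ATOMIC_FLAG_INIT;   /* the "TS" byte */
    static long counter;                                  /* shared data */

    void bump_counter(void)
    {
        while (atomic_flag_test_and_set_explicit(&counter_lock,
                                                 memory_order_acquire))
            ;                                             /* spin */
        counter++;                                        /* critical section */
        atomic_flag_clear_explicit(&counter_lock, memory_order_release);
    }

    /* compare&swap style: no lock, just retry the update itself */
    static _Atomic long counter_cas;

    void bump_counter_cas(void)
    {
        long old = atomic_load(&counter_cas);
        while (!atomic_compare_exchange_weak(&counter_cas, &old, old + 1))
            ;                      /* on failure old is refreshed; retry */
    }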

i was referring to an initial pass on a kernel that had no smp provisions, faced with the trade-off between a full-blown fine-grain locking effort and the simple kernel spin-lock implementation. the bounce lock effort required doing fine-grain locks on anything that the dispatcher and the initial interrupt routines might touch, as well as things that might impact a currently running application (say like an asynchronous i/o interrupt on behalf of the currently running application, or stealing unreferenced pages from a running application).

The result was that only a few kernel modules ... totaling possibly 6000 lines of code ... were then involved ... rather than doing fine-grain locking thru-out the kernel (which ran to a couple hundred thousand lines of code). The parallelized code also tended to be the highest-use code paths ... so, in effect, while only a small part of the kernel was parallelized, it tended to be very high use.

As mentioned, because of (kernel) cache locality, it could actually achieve higher multiprocessor thruput than if the much more laborious process had been done for fine-grain parallelization thruout the whole kernel; aka the bounce lock avoided the kernel spin-lock effects while the queueing mechanism tended to batch activity thru the processor currently executing in the kernel.
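
a very loose c sketch of the queued/"bounce" idea (nothing like the actual vm/370 code, and all the names here are made up for the illustration): a processor needing kernel service queues its request with compare&swap; whichever processor holds (or can grab) the kernel lock drains the queue, so kernel activity gets batched thru one processor at a time instead of everybody spinning. in the real bounce lock the requesting processor went back to dispatching user work rather than waiting on a done flag as it does below.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct request {
        struct request *next;
        void (*fn)(void *);        /* the kernel work to perform */
        void *arg;
        atomic_bool done;
    };

    static atomic_flag kernel_lock = ATOMIC_FLAG_INIT;
    static struct request *_Atomic pending = NULL;

    static void kernel_call(struct request *r)
    {
        atomic_store(&r->done, false);

        /* push the request onto the pending list with compare&swap */
        struct request *head = atomic_load(&pending);
        do {
            r->next = head;
        } while (!atomic_compare_exchange_weak(&pending, &head, r));

        while (!atomic_load(&r->done)) {
            /* only one processor at a time gets the kernel lock; it
               drains everything queued (its own request included) */
            if (!atomic_flag_test_and_set(&kernel_lock)) {
                struct request *q = atomic_exchange(&pending, NULL);
                while (q) {
                    struct request *next = q->next;
                    q->fn(q->arg);
                    atomic_store(&q->done, true);
                    q = next;
                }
                atomic_flag_clear(&kernel_lock);
            }
            /* real bounce lock: go run user work here instead of re-checking */
        }
    }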

misc smp and/or compare and swap postings
https://www.garlic.com/~lynn/subtopic.html#smp
misc bounce lock postings
https://www.garlic.com/~lynn/submain.html#bounce

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

couple recent NIST drafts

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: couple recent NIST drafts
Newsgroups: comp.security.misc
Date: Mon, 04 Oct 2004 17:15:19 -0600
September 29, 2004 - NIST Special Publication 800-52, Guidelines on the Selection and Use of Transport Layer Security
http://csrc.nist.gov/publications/drafts.html#sp800-52

September 28, 2004 - NIST Special Publication 800-53 (Second Public Draft), Recommended Security Controls for Federal Information Systems
http://csrc.nist.gov/publications/drafts.html#sp800-53

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Specifying all biz rules in relational data

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Specifying all biz rules in relational data
Newsgroups: comp.databases.theory,alt.folklore.computers
Date: Mon, 04 Oct 2004 18:32:49 -0600
"Dawn M. Wolthuis" writes:
Good info. I could google around myself, but do you happen to know if Tex fits into this anywhere? Did it arise separately or is it related to cms script and runoff? Thanks. --dawn

references have knuth doing it circa '78 for typesetting ... i.e. controlling the format of the presentation. this is in the same genre as runoff/script which was controlling the format. gml in '69, as generalized markup language, comes from the initials of the last names of the three inventors at the science center (aka "g", "m", and "l"). try a search engine for combinations of things like tex, knuth, history. fairly representative version:
http://www.earlham.edu/~thompsk/final_project/latex/node4.html

however, gml was evolving into a generalized description of the information, not the specification of the format of the display .... i.e. separating the description of the information from the specification for the display of the information. having self-describing information then enables features other than just dictating how that information might be displayed.

so some other random lore, the chair of x9a10 ... working group for x9.59 standard
https://www.garlic.com/~lynn/x959.html#x959

had been one of the edi/x12 standards authors. one of the things focused on with edi/x12 was canonical format conversion for transport to allow different operations to exchange information.

another player in the interoperable transport conversion sphere has been ASN.1

by contrast, the sgml/xml scenario has the information in its "native" form be self-describing so that all applications (regardless of location or operation) are able to utilize the information (self-describing is not just for display purposes). this promotes COTS application interoperability .... as opposed to every operation developing roll-your-own applications and data formats (which then requires things like edi format conversion processes for transport and interoperability). in that sense there are some common objectives for both sgml/xml and various dbms with achieving common use of the same information by a variety of different applications.

so other folklore ... interaction between cluster dbms scale-up mentioned in prior post
https://www.garlic.com/~lynn/2004m.html#0

and electronic commerce
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

REVIEW: "Biometrics for Network Security", Paul Reid

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: REVIEW: "Biometrics for Network Security", Paul Reid
Newsgroups: misc.books.technical,comp.security.misc,alt.computer.security
Date: Mon, 04 Oct 2004 19:51:04 -0600
Bruce Barnett <spamhater103+U041004200405@grymoire.com> writes:
Well, a smartcard can be used without a central server. I've been using the open source musclecard applet to do so. The java code in the card generates a key pair, and exports the private key. The public key can be stored in a local machine's cache/storage (especially if the user is a frequent user).

The host generates a random challenge, and asks the card to encrypt it with the private key. The card does so, and the host verifies the ID and grants access. (Once the PIN is verified).


in general private key encryption ... as in some form of digital signature ... whether of a challenge or some other form of data ... tends to be either something you know or something you have authentication, aka from 3-factor authentication
something you know
something you have
something you are


the corresponding public key is registered with the relying party (central authority, your local pc, etc) and the key-owner keeps the private key in an encrypted software file or in a hardware token.

if the convention has the key-owner keeping the private key in an encrypted file (say like some of the pkcs12 or other browser conventions) ... then the relying party when it sees a valid digital signature ... can assume that the key-owner had supplied the correct pin to decrypt the software file in order that the digital signature be performed.

the private key can be kept in a hardware token, and when a relying party sees a valid digital signature, they can assume something you have authentication on behalf of the key owner.
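
a minimal sketch of the challenge/response flavor of this (ed25519 via openssl's EVP interface is purely a stand-in here; with a real token the key pair is generated inside the token, only the public key is exported for registration, and the signing happens inside the token):

    /* challenge/response sketch: the relying party has the public key on
       file, generates a fresh random challenge, and treats a good
       signature over the challenge as "something you have" evidence.
       builds against openssl 1.1.1 or later:  cc demo.c -lcrypto */
    #include <openssl/evp.h>
    #include <openssl/rand.h>
    #include <stdio.h>

    int main(void)
    {
        /* key pair generation -- with a real token this happens inside
           the token and only the public key ever leaves it */
        EVP_PKEY *key = NULL;
        EVP_PKEY_CTX *kctx = EVP_PKEY_CTX_new_id(EVP_PKEY_ED25519, NULL);
        EVP_PKEY_keygen_init(kctx);
        EVP_PKEY_keygen(kctx, &key);

        /* relying party: generate a fresh random challenge */
        unsigned char challenge[32];
        RAND_bytes(challenge, sizeof challenge);

        /* key owner (token): sign the challenge with the private key */
        unsigned char sig[64];
        size_t siglen = sizeof sig;
        EVP_MD_CTX *sctx = EVP_MD_CTX_new();
        EVP_DigestSignInit(sctx, NULL, NULL, NULL, key);
        EVP_DigestSign(sctx, sig, &siglen, challenge, sizeof challenge);

        /* relying party: verify against the registered public key */
        EVP_MD_CTX *vctx = EVP_MD_CTX_new();
        EVP_DigestVerifyInit(vctx, NULL, NULL, NULL, key);
        int ok = EVP_DigestVerify(vctx, sig, siglen,
                                  challenge, sizeof challenge);
        printf("signature %s\n", ok == 1 ? "verified" : "rejected");

        EVP_MD_CTX_free(sctx);
        EVP_MD_CTX_free(vctx);
        EVP_PKEY_free(key);
        EVP_PKEY_CTX_free(kctx);
        return 0;
    }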

there are some hardware tokens that are constructed so that the private key operations (encryption and/or digital signature) are only performed when the correct PIN and/or biometric is presented ... i.e. two factor authentication
something you have
something you know (pin) or something you are (biometric)


it is possible to construct a hardware token where three factor authentication might be assumed ... where both a PIN and the correct biometric are required for the token to do its job. then the relying party might presume three factor authentication
something you know (pin/password)
something you have (hardware token)
something you are (biometric)


in this case, the relying party (central authority, your local pc, kerberos and/or radius service, etc) could reasonably expect to have
1) the public key registered,
2) the integrity characteristics of the public key registered,
3) the hardware integrity characteristics of the hardware token registered
4) the operational integrity characteristics of the hardware token registered


so that when the relying party sees a digital signature for verification, it has some reasonable level of assurance as to what the verification of such a digital signature might mean (and how much it might trust such a digital signature as having any meaning).

a relying party may get a digital signature and be able to verify that the digital signature is correct .... but w/o additional information the relying party has absolutely no idea as to the degree or level of trust/assurance such a digital signature carries.

somewhat orthogonal, and something that has frequently thoroughly obfuscated the issues about the level of trust/assurance that a relying-party might place in a digital signature, is digital certificates.

digital certificates were originally invented for the early '80s offline email environment. the recipient (aka relying party) gets a piece of email and has no way of proving who the sender was. so the idea was to have the sender digitally sign the email. if the sender and recipient were known to each other and/or had previous interaction, the recipient could have the sender's public key on file for validating the digital signature.
https://www.garlic.com/~lynn/subpubkey.html#certless

however, there was a theoretical offline email environment from the early '80s where the sender and the recipient had absolutely no prior interactions and the desire was to have the email be processed w/o resorting to any additional interactions. this led to the idea of 3rd party certification authorities who would certify as to the sender's identity. the sender could create a message, digitally sign it and send off the message, the digital signature and the 3rd party certified credential (digital certificate). the recipient eventually downloads the email, hangs up, and has absolutely no recourse to any additional information (other than what is contained in the email).

by the early '90s, this had evolved into the x.509 identity (digital) certificate. however, during the mid-90s, this became severely deprecated because of the realization about the enormous liability and privacy issues with arbitrarily spewing x.509 identity certificates all over the world. there was some work on something called an abbreviated relying-party-only digital certificate ... that basically contained only a public key and some form of account number. random past relying-party-only posts:
https://www.garlic.com/~lynn/subpubkey.html#rpo

the relying party would use the account to look-up in some sort of repository the actual information about the sender ... as opposed to having the liability and privacy issues of having the sender's information actually resident in the certificate. however, in the PGP model as well as all of the existing password-based authentication schemes ... it was possible to show that whatever repository contains information about the sender ... could also contain the actual sender's public key. In the widely deployed password-based schemes like RADIUS, Kerberos, PAM, etc ... just substitute the registration of a password and permissions with the registration of a public key and permissions. So it was trivial to show that for all of the relying-party-only certificate scenarios the actual certificate was redundant and superfluous.

of course, the other issue is that the original design point for digital certificates, the early '80s offline email paradigm where a sender and a recipient had absolutely no prior interaction and the recipient had absolutely no other recourse for obtaining information about the sender .... had pretty much started to disappear by the early 90s.

and of course, beyond the issue of certificates being redundant and superfluous and possibly representing severe liability and privacy issues ... the certificates didn't actually contribute to telling the recipient (or relying party) to what degree they could actually trust a possible digital signature, aka the issue of what the relying party can infer from validating a digital signature .... does it represent anything from three factor authentication
something you know
something you have
something you are

and say if it might actually be associated with something you have hardware token .... what level of assurance is associated with a specific hardware token.

the environment in which a possible digital signature (or other private key operation) is performed .... like the biometric sensor interface and/or hardware token pin/password interface ... is also of some possible concern to a recipient or relying party. one of the scenarios for the FINREAD terminal is to possibly have the terminal also digitally sign transactions .... so the relying party has some additional idea about the level of trust that they can place in what they have received. (not only was a certified FINREAD terminal used, but the transaction carries the digital signature of the FINREAD terminal).

misc. past FINREAD terminal posts
https://www.garlic.com/~lynn/aepay7.htm#3dsecure 3D Secure Vulnerabilities? Photo ID's and Payment Infrastructure
https://www.garlic.com/~lynn/aadsm9.htm#carnivore Shades of FV's Nathaniel Borenstein: Carnivore's "Magic Lantern"
https://www.garlic.com/~lynn/aepay11.htm#53 Authentication white paper
https://www.garlic.com/~lynn/aepay11.htm#54 FINREAD was. Authentication white paper
https://www.garlic.com/~lynn/aepay11.htm#55 FINREAD ... and as an aside
https://www.garlic.com/~lynn/aepay11.htm#56 FINREAD was. Authentication white paper
https://www.garlic.com/~lynn/aadsm10.htm#keygen2 Welome to the Internet, here's your private key
https://www.garlic.com/~lynn/aadsm11.htm#4 AW: Digital signatures as proof
https://www.garlic.com/~lynn/aadsm11.htm#5 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#6 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#23 Proxy PKI. Was: IBM alternative to PKI?
https://www.garlic.com/~lynn/aadsm12.htm#24 Interests of online banks and their users [was Re: Cryptogram: Palladium Only for DRM]
https://www.garlic.com/~lynn/aadsm14.htm#32 An attack on paypal
https://www.garlic.com/~lynn/aadsm14.htm#35 The real problem that https has conspicuously failed to fix
https://www.garlic.com/~lynn/aadsm15.htm#38 FAQ: e-Signatures and Payments
https://www.garlic.com/~lynn/aadsm15.htm#40 FAQ: e-Signatures and Payments
https://www.garlic.com/~lynn/aadsm16.htm#9 example: secure computing kernel needed
https://www.garlic.com/~lynn/aadsm18.htm#0 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#1 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#2 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#32 EMV cards as identity cards
https://www.garlic.com/~lynn/2001g.html#57 Q: Internet banking
https://www.garlic.com/~lynn/2001g.html#60 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001g.html#61 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001g.html#62 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001g.html#64 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001i.html#25 Net banking, is it safe???
https://www.garlic.com/~lynn/2001i.html#26 No Trusted Viewer possible?
https://www.garlic.com/~lynn/2001k.html#0 Are client certificates really secure?
https://www.garlic.com/~lynn/2001m.html#6 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2001m.html#9 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2002c.html#10 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002c.html#21 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002f.html#46 Security Issues of using Internet Banking
https://www.garlic.com/~lynn/2002f.html#55 Security Issues of using Internet Banking
https://www.garlic.com/~lynn/2002g.html#69 Digital signature
https://www.garlic.com/~lynn/2002m.html#38 Convenient and secure eCommerce using POWF
https://www.garlic.com/~lynn/2002n.html#13 Help! Good protocol for national ID card?
https://www.garlic.com/~lynn/2002n.html#26 Help! Good protocol for national ID card?
https://www.garlic.com/~lynn/2002o.html#67 smartcard+fingerprint
https://www.garlic.com/~lynn/2003h.html#25 HELP, Vulnerability in Debit PIN Encryption security, possibly
https://www.garlic.com/~lynn/2003h.html#29 application of unique signature
https://www.garlic.com/~lynn/2003j.html#25 Idea for secure login
https://www.garlic.com/~lynn/2003m.html#51 public key vs passwd authentication?
https://www.garlic.com/~lynn/2003o.html#29 Biometric cards will not stop identity fraud
https://www.garlic.com/~lynn/2003o.html#44 Biometrics
https://www.garlic.com/~lynn/2004.html#29 passwords
https://www.garlic.com/~lynn/2004i.html#24 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004i.html#27 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004j.html#1 New Method for Authenticated Public Key Exchange without Digital Certificates

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Tera

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Tera
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 05 Oct 2004 08:10:10 -0600
Jan Vorbrüggen <jvorbrueggen-not@mediasec.de> writes:
Was that really so? When I learnt that, e.g., Solaris had this problem in its first SMP incarnation, I was quite surprised - I grew up on VMS, and when it went SMP, the very first implementation had a defined hierarchical lock tree. Of course, VMS was multi-threaded in the kernel from day 1, so that might have made a lot of difference compared to typical early Unix implementations.

os/360 on 360/65 SMPs from the 60s had global kernel spin-lock

tss/360 and cp/67 on 360/67s had multi-threaded kernel locking (and charlie's work on cp/67 fine-grain locking was one of the things that led to his invention of compare&swap).

a lot of things were dropped in the morph of cp/67 to vm/370 ... including many things that were in the cp/67 kernel that i had originally done as an undergraduate (and there wasn't any smp support).

the os/360 genre continued with the smp global kernel spin-lock for some time. this is probably part of the reason that when the original compare&swap instruction was presented to the hardware architecture owners in pok, they came back saying they couldn't justify adding an instruction that was purely for smp support ... and it would be necessary to invent scenarios justifying the instruction based on non-smp use. the result was the programming notes showing use of compare&swap in (smp or non-smp) multi-threaded application scenarios (i.e. an application could avoid kernel calls in order to serialize execution thru application critical paths).
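
a rough sketch of that style of use (c11 atomics standing in for the 370 compare&swap instruction; purely illustrative, not the actual programming notes code): a multi-threaded application pushing onto a shared list with compare&swap, with no kernel call and no lock.

    #include <stdatomic.h>
    #include <stdlib.h>

    struct node {
        struct node *next;
        int value;
    };

    static struct node *_Atomic list_head = NULL;

    void list_push(int value)
    {
        struct node *n = malloc(sizeof *n);
        if (!n)
            return;                 /* allocation failed; nothing pushed */
        n->value = value;

        struct node *old = atomic_load(&list_head);
        do {
            n->next = old;
            /* compare&swap only installs n if the head is still old;
               otherwise old is refreshed and the push is retried */
        } while (!atomic_compare_exchange_weak(&list_head, &old, n));
    }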

in the mid-70s, about the same time i was working on stuff for the resource manager (put a lot of the performance stuff that i had done on cp/67 back into vm/370), i was also working on something called ecps (which involved dropping some amount of kernel pathlength into the microcode of machines) and an smp project called VAMPS.

couple recent threads in a.f.c where some of the resource manager stuff was discussed
https://www.garlic.com/~lynn/2004l.html#6 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004l.html#70 computer industry scenaio before the invention of the PC?
and x-post between a.f.c. and comp.databases.theory
https://www.garlic.com/~lynn/2004l.html#72 Specifying all biz rules in relational data

collection of VAMPS and/or bounce lock postings
https://www.garlic.com/~lynn/submain.html#bounce

and for some total drift, recent post on distributed lock manager in x-post between comp.databases.theory and a.f.c.
https://www.garlic.com/~lynn/2004m.html#0 specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004m.html#3 specifying all biz rules in relational data

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

a history question

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: a history question
Newsgroups: alt.folklore.computers,comp.lang.fortran
Date: Tue, 05 Oct 2004 09:47:30 -0600
"Tom Linden" writes:
Not sure what xlf is. The 370 compilers I thought were all done at St. Theresa labs and I thought they had written all their compilers in PL/S and PL/I, but you are probably better informed than I.

stl wasn't opened until the late '70s. it was originally going to be called the coyote labs ... after the closest post office (which was an easy default naming solution).

the week before the (stl) opening, i happened to be in DC for some stuff ... and i was hoping to visit the new smithsonian science & aerospace museum ... but it turned out to be a week or two from opening.

in any case, there was a certain sanfran working ladies organization demonstrating on the steps of congress (which got a lot of press) ... and there was a decision made to change the name of the new lab that was opening the following week (so it took its name from one of the closest cross streets).

a lot of the compiler work had been done at ???? ... i have some vague recollection of it being called Times something or other. That center was closed in the mid-70s ... just before the burlington mall center was closed (have some vague recollection that the same person put in charge of shutting the Times something or other center was then put in charge of shutting the burlington mall center).

for a while there was a fortran Q available internally ... which was a significant number of performance enhancements done to fortran h by the palo alto science center (possibly somewhat because of their relationship to SLAC which was just down the street) .... i think that eventually made general availability as fortran hx.

after stl labs opened they got various database and compiler missions (as well as apl ... the apl work also done at the palo alto science center ... and earlier at the cambridge science center shifted to stl).

there was something of a disagreement between STL which had gotten IMS and physical databases .... and SJR ... where the original relational database was done
https://www.garlic.com/~lynn/submain.html#systemr

as a result the technology transfer of system/r was from sjr to endicott for sql/ds.

one of the people in the following meeting
https://www.garlic.com/~lynn/95.html#13

claimed to have been the primary person at STL for the later sql/ds technology transfer from endicott (back? ... stl and sjr were less than 10 miles apart, during the day, i was sometimes even known to ride my bike between sjr and stl) to stl for what became db2.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Whatever happened to IBM's VM PC software?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Whatever happened to IBM's VM PC software?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 05 Oct 2004 21:25:34 -0600
"John F. Regus" writes:
IBM at one time offered VM/PC software. It ran on an AT machine if I remember. It did not require a SYS/370 card. It had full CP and CMS functions as well as creating mini-disks on the hard drive (very easy since VM always ran best on FBA devices and a PC drive is essentially a FBA device).

Whatever happened to this software? It sure would be a good replacement for Linux, Unix (any of the PC flavors), and since it ran on a x86 processor, maybe some MS applications could run on it.

Anybody at IBM know? Please put another VM PC product on the market!


it had a custom cp kernel that ran on a stripped down 370 hardware implementation. the cp kernel used interprocessor communication to talk to a software multitask monitor on the pc called cp/88 for doing all i/o. it used a fairly normal cms. note that from the very beginning with the origins of cp/40 (on custom modified 360/40 with virtual memory hardware) both cp and cms effectively always used logical "FBA" support ... even when mapped to ckd devices.

it was originally introduced on xt/pc ... that had the old 10mbyte hard disks with 100ms access times.

there were later hardware offerings with full 370 hardware implementation that could also directly do their own i/o ... and there was much less requirement for custom cp kernel. one of the first of the full 370 implementations was the a74 (official name ibm 7437 vm/cp technical workstation) ... although i did the modifications for both pc/370 and a74 that had improved page replacement algorithm as well as page-mapped filesystem support for cms (i.e. both the cp and cms changes). misc. past cms page-mapped posts
https://www.garlic.com/~lynn/submain.html#mmap
following post includes list of modifications for a74 cp kernel:
https://www.garlic.com/~lynn/2003f.html#56 ECPS:VM DISPx instructions

thread discussing some of the history and various products
https://www.garlic.com/~lynn/2002d.html#4 IBM Mainframe at home

misc other past a74 posts
https://www.garlic.com/~lynn/2001i.html#19 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2002l.html#27 End of Moore's law and how it can influence job market
https://www.garlic.com/~lynn/2003h.html#40 IBM system 370
https://www.garlic.com/~lynn/2003m.html#15 IEFBR14 Problems

random old washington, pc/370, xt/370, at/370 posts
https://www.garlic.com/~lynn/94.html#46 Rethinking Virtual Memory
https://www.garlic.com/~lynn/96.html#23 Old IBM's
https://www.garlic.com/~lynn/2000.html#5 IBM XT/370 and AT/370 (was Re: Computer of the century)
https://www.garlic.com/~lynn/2000.html#29 Operating systems, guest and actual
https://www.garlic.com/~lynn/2000.html#75 Mainframe operating systems
https://www.garlic.com/~lynn/2000e.html#52 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2000e.html#55 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2001c.html#89 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001f.html#28 IBM's "VM for the PC" c.1984??
https://www.garlic.com/~lynn/2001g.html#53 S/370 PC board
https://www.garlic.com/~lynn/2001i.html#19 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001k.html#24 HP Compaq merger, here we go again.
https://www.garlic.com/~lynn/2001n.html#44 PC/370
https://www.garlic.com/~lynn/2001n.html#49 PC/370
https://www.garlic.com/~lynn/2001n.html#92 "blocking factors" (Was: Tapes)
https://www.garlic.com/~lynn/2002.html#4 Buffer overflow
https://www.garlic.com/~lynn/2002.html#11 The demise of compaq
https://www.garlic.com/~lynn/2002b.html#43 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002b.html#45 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002d.html#4 IBM Mainframe at home
https://www.garlic.com/~lynn/2002f.html#44 Blade architectures
https://www.garlic.com/~lynn/2002f.html#49 Blade architectures
https://www.garlic.com/~lynn/2002f.html#50 Blade architectures
https://www.garlic.com/~lynn/2002f.html#52 Mainframes and "mini-computers"
https://www.garlic.com/~lynn/2002i.html#76 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2003e.html#3 cp/67 35th anniversary
https://www.garlic.com/~lynn/2003f.html#8 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#56 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003h.html#40 IBM system 370
https://www.garlic.com/~lynn/2003p.html#32 Mainframe Emulation Solutions
https://www.garlic.com/~lynn/2004h.html#29 BLKSIZE question

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Whatever happened to IBM's VM PC software?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Whatever happened to IBM's VM PC software?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 05 Oct 2004 21:46:14 -0600
and a little topic drift .... dept. a74 in pok, responsible for the a74 technical workstation, .... was also responsible for the 3277ga ... an adapter on the side of a 3277 that hooked up a tektronix display.

random past posts mentioning 3277ga
https://www.garlic.com/~lynn/2001f.html#49 any 70's era supercomputers that ran as slow as today's supercompu
https://www.garlic.com/~lynn/2001i.html#51 DARPA was: Short Watson Biography
https://www.garlic.com/~lynn/2002p.html#29 Vector display systems
https://www.garlic.com/~lynn/2004l.html#27 Shipwrecks
https://www.garlic.com/~lynn/2004l.html#32 Shipwrecks

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

REVIEW: "Biometrics for Network Security", Paul Reid

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: REVIEW: "Biometrics for Network Security", Paul Reid
Newsgroups: misc.books.technical,comp.security.misc,alt.computer.security
Date: Wed, 06 Oct 2004 10:33:21 -0600
followup
https://www.garlic.com/~lynn/2004l.html#4

note that while three factor authentication
something you know
something you have
something you are

allows pin/passwords as something you know authentication, there can be a big difference between something you know as a shared-secret and something you know as a non-shared-secret.

for instance the current payment card scenario effectively has account numbers as shared-secrets ... since gaining knowledge of the account number can enable fraudulent transactions. harvesting of merchant transaction files can result in account/identity theft impact because of the ability to use the account numbers for fraudulent transactions. some related discussion of security proportional to risk
https://www.garlic.com/~lynn/2001h.html#61
misc. past postings about secrets and account numbers
https://www.garlic.com/~lynn/subintegrity.html#secrets
and posts on account number harvesting
https://www.garlic.com/~lynn/subintegrity.html#harvest

where there is a big focus on protecting all occurrences of the account number because of its shared-secret vulnerability. an alternative solution is x9.59
https://www.garlic.com/~lynn/x959.html#x959

where financial transactions are digitally signed and there is a business rule that account numbers used in x9.59 transactions can't be used in non-authenticated transactions. as a result, just knowing an account number used in an x9.59 transaction doesn't enable fraudulent transactions (or account/identity theft) and therefore such account numbers no longer need to be considered as shared-secrets.

one of the requirements of a shared-secret based infrastructure (in addition to the requirement of needing to protect the shared-secret) is frequently to require a unique shared-secret for different security domains .... aka ... the password on file with your local garage ISP should be different than passwords used for personal banking or for your job. The issue is that different security domains ... may have different levels of protection for shared-secrets. there may also be instances where one security domain may be at odds with some other security domain.

In effect, anything that is on file in a security domain ... and just requires reproducing the same value for authentication can be considered a shared-secret. shared-secret passwords frequently also have guidelines requiring frequent changes of the shared-secret. some past references to password changing rules:
https://www.garlic.com/~lynn/2001d.html#52
https://www.garlic.com/~lynn/2001d.html#53
https://www.garlic.com/~lynn/2001d.html#62

A something you have hardware token can also implement something you know two-factor authentication, where the something you know is a non-shared-secret. The hardware token contains the secret and is certified to require the correct secret entered for correct operation. Since the secret isn't shared ... and/or on file with some security domain, it is a non-shared-secret ... rather than a shared-secret.

A relying party needs some proof (possibly at registration) that authentication information (like a digital signature) is uniquely associated with a specific hardware token, and furthermore needs certified proof that a particular hardware token only operates in a specific way when the correct password has been entered .... to establish trust for the relying party that two-factor authentication is actually taking place. In the digital signature scenario, based on certification of the hardware token, the relying party, when it validates a correct digital signature, can then infer two-factor authentication:
something you know (password entered into hardware token)
something you have (hardware token generated digital signature)

In a traditional shared-secret scenario, if a shared-secret has been compromised (say a merchant transaction file has been harvested), new shared-secrets can be issued. Typically, there are much fewer vulnerabilities and different threat models for non-shared-secret based infrastructures compared to shared-secret based infrastructures (in part because of the possible greater proliferation of locations of shared-secrets).

It turns out that something you are biometrics can also be implemented as either a shared-secret infrastructure or a non-shared-secret infrastructure. Biometrics typically is implemented as some sort of mathematical value that represents some biometric reading. In a shared-secret scenario, this biometric mathematical value is on file someplace, in much the same manner that a password might be on file. The person is expected to reproduce the biometric value (in much the same way they might be expected to reproduce the correct password). Depending on the integrity of the environment that is used to convert the biometric reading to a mathematical value ... and the integrity of the environment that communicates the biometric value, a biometric shared-secret infrastructure may be prone to identical vulnerabilities as shared-secret password systems ... aka somebody harvests the biometric value(s) and is able to inject such values into the authentication infrastructure to spoof an individual.

Many shared-secret biometric infrastructures with distributed sensors that might not always be under armed guards ... frequently go to a great deal of trouble specifying protection mechanisms for personal biometric information. One of the issues with respect to shared-secret biometric infrastructures compared to shared-secret password infrastructures (with regard to secret compromise), is that it is a lot easier to replace a password than say an iris or a thumb.

There are also hardware tokens that implement non-shared-secret biometrics, in much the same way that non-shared-secret passwords are implemented. Rather than having the biometric values on file at some repository, the biometric value is contained in a personal hardware token. The personal hardware token is certified as performing in a specific manner only when the correct biometric value is entered. Given adequate assurance about the operation of a specific hardware token, a relying party may then infer from something like validating a digital signature that two-factor authentication has taken place, i.e.
something you have (hardware token that uniquely generates signature)
something you are (hardware token requiring biometric value)

biometric values are actually more complex than simple passwords, tending to have very fuzzy matches. for instance an 8-character password either matches or doesn't match. A biometric value is more likely to only approximately match a previously stored value.

Some biometric systems are frequently designed with hard-coded fuzzy match threshold values .... say like a 50 percent match value. These systems frequently talk about false positives (where a 50 percent match requirement results in authenticating the wrong person) or false negatives (where a 50 percent match requirement results in rejecting the correct person). Many of these systems tend to try and adjust their fuzzy match value settings in order to minimize both the false positives and false negatives.

in value-based systems, hard-coded fuzzy match values may represent a problem. an example is a transaction system that supports both $10 transactions and million dollar transactions. In general, a risk manager may want to have a higher match requirement for higher value transactions (security proportional to risk).
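
a toy c sketch of the security-proportional-to-risk idea (all the numbers and names here are made up for the illustration): the fuzzy-match score required before a transaction is authorized scales with the transaction value, instead of being a single hard-coded threshold.

    #include <stdbool.h>
    #include <stdio.h>

    /* required fuzzy-match score as a function of transaction value */
    static double required_match(double amount)
    {
        if (amount < 100.0)
            return 0.50;        /* low value: a 50 percent match will do */
        if (amount < 10000.0)
            return 0.80;
        return 0.95;            /* million-dollar class transactions */
    }

    static bool authorize(double match_score, double amount)
    {
        return match_score >= required_match(amount);
    }

    int main(void)
    {
        printf("$10 at 0.60 match: %s\n",
               authorize(0.60, 10.0) ? "authorized" : "rejected");
        printf("$1,000,000 at 0.60 match: %s\n",
               authorize(0.60, 1000000.0) ? "authorized" : "rejected");
        return 0;
    }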

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Whatever happened to IBM's VM PC software?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Whatever happened to IBM's VM PC software?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 06 Oct 2004 10:41:59 -0600
glen herrmannsfeldt writes:
It did have special hardware to go along with it, a two board set. One board with all the processors (three), one with memory.

Maybe slower than a 360/40. I have run PL/I (F) on an AT/370, and it takes about five minutes to compile or run the simplest program.


it wasn't just the processor .... cms infrastructures tended to be relatively memory and disk intensive (at least by pc standards of the early 80s).

i got blamed for delaying the xt/370 first customer ship for several months ... by showing that its 384k (370) memory would page-thrash in a large number of typical applications (like many of the compilers). they then retrofitted an extra 128k to the 370 side ... to bring the (370) memory to 512k. even at 512k, there were still a number of things that would effectively page thrash (remember that the fixed cp kernel also came out of that 512k).

page thrashing was then severely exacerbated by the incredibly slow disks on the pc (which were also typically shared for both various cp functions and cms filesystems).

the incredibly slow disks also hurt the thruput of the cms filesystem and many cms applications (compared to a real 370) ... pl/i could have scores of different module loads to complete a compile.

the a74 came with faster processor ... but also 4mbytes of (370) memory and typical configuration had much faster disks.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Whatever happened to IBM's VM PC software?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Whatever happened to IBM's VM PC software?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 06 Oct 2004 14:43:35 -0600
glen herrmannsfeldt writes:
It may have been one of the first CMS to support 512, 1024, or 2048 byte blocks. VM/370's CMS uses 800 byte blocks.

chris stephenson's "EDF" filesystem was shipped in standard cms several years before the xt/370 ... random past posts mentioning the edf filesystem (1024, 2048 & 4096 byte blocks; or clusters in dos terms?)
https://www.garlic.com/~lynn/2001c.html#76 Unix hard links
https://www.garlic.com/~lynn/2001m.html#57 Contiguous file system
https://www.garlic.com/~lynn/2001m.html#58 Contiguous file system
https://www.garlic.com/~lynn/2002d.html#5 IBM Mainframe at home
https://www.garlic.com/~lynn/2002q.html#25 Beyond 8+3
https://www.garlic.com/~lynn/2003b.html#42 VMFPLC2 tape format
https://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
https://www.garlic.com/~lynn/2003e.html#12 Resolved: There Are No Programs With >32 Bits of Text
https://www.garlic.com/~lynn/2004d.html#16 IBM 360 memory

mapping the original cms filesystem (with 800 byte blocks from the mid-60s) to page mapped (4096 byte "pages") was somewhat harder than page mapping edf filesystem (using 4096 byte block operation). misc. stuff on page mapped filesystem for cms that i did late in the cp/67 lifecycle and then ported to vm/370
https://www.garlic.com/~lynn/submain.html#mmap

it was part of a large set of stuff that i had done late in the cp/67 life cycle related to all sorts of virtual memory management ... including a bunch of stuff having to do with sharing memory segments. a subset of the shared memory segments was picked up (both cp kernel changes and cms changes) and released as something called DCSS in vm/370 release 3. however, since the filesystem page-mapped changes weren't picked up ... a special hack had to be done to create a saved page image of the memory segments ... and then be able to later retrieve them. It was scaffolded off the side of the existing cp kernel "ipl-by-name" facility (as opposed to being able to directly retrieve the page image from the cms filesystem). part of this is also discussed in various threads about adcon dependencies in code that could appear at arbitrary virtual addresses ... especially when it is the same identical, shared copy of code ... at potentially different virtual addresses in different address spaces.
https://www.garlic.com/~lynn/submain.html#adcon

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

How can I act as a Certificate Authority (CA) with openssl ??

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How can I act as a Certificate Authority (CA) with openssl ??
Newsgroups: sci.crypt,alt.apache.configuration,comp.security.unix
Date: Wed, 06 Oct 2004 15:00:58 -0600
see_my_signature_for_my_real_address@hotmail.com (Dr. David Kirkby) writes:
I (name David) want to put a secure web server up for a friend (Paul) so he can access some documents securely for himself, no matter where in the world he is. Whilst signing the certificate myself (saying I'm Micky Mouse if I want) is okay for our purposes, I'd like (just out of interest) to know how to be a Certificating Authority (CA). Somehow I don't think I will put Verisign out of business, but I'm interested in the process.

the whole point of a digital certificate is so that a relying party that otherwise doesn't have any knowledge about the originating party ... can equate the originating party's public key with something about the originating party.

normally in public key infrastructures where there is some relationship and knowledge that exists between the relying party and the originating party .... the relying party has a table of public keys with information about their respective owners. This is effectively the PGP model. It has also been the standard industry business model for userid/password authentication system for scores of years ... and can be applied to public key infrastructures by replacing the "password" in the authentication tables with public keys (aka radius, kerberos, password tables, etc).

in a certification environment ... the relying party's public key tables, instead of containing the public keys and information directly about originating parties .... contains public keys and information about certification authorities (and the relying party has absolutely no way of obtaining information directly about the originating party).

the relying party is totally dependent upon and trusts the certification authority for providing information about the originating party in the form of digital certificates.

to be a certification authority there then are at least requirements for

1) manufacturing the digital certificates
2) establishing trust with the relying parties (who are dependent on the certification authority for supplying the information about the originating party)
3) loading the certificate authority's public key in the relying parties' authentication tables

i.e. the relying parties have to trust the certification authority to provide the information about the originating party (in lieu of the relying party having the information directly about the originating party) and the relying parties have to have the certification authority's public key loaded into their authentication table (in lieu of directly loading the public keys of the originating parties in their authentication table).

in the past, there has been mention of PKIs ... where the certification authority both manufactures certificates for arbitrary distribution and provides a mechanism for managing those certificates.

many of the infrastructures are purely certificate manufacturing operations as opposed to real PKIs ... and, in fact, I coined the term certificate manufacturing in the mid-90s to differentiate from true PKIs.

various postings about SSL-specific certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcert

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Whatever happened to IBM's VM PC software?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Whatever happened to IBM's VM PC software?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 06 Oct 2004 15:31:36 -0600
glen herrmannsfeldt writes:
Do you know if the processor is the same as the Micro/370 described by Nick Tredennick in his book on Microprocessor Logic Design? It doesn't seem to have the right number of pins, but otherwise sounds like the right processor. I never see the processor and XT/370 described together, though.

this description of micro/370
http://www3.sk.sympatico.ca/jbayko/cpuAppendA.html

sounds an awful lot like what was on the xt/370 board. this processor was about 100 370 kips (0.1mips).

also in that time-frame .... ibm germany had done the roman 370 chip-set ... which was about the mip rate of 370/168 (3mips)

a little earlier, SLAC had done a bit-slice 370 processor that was about the mip rate of 370/168 (3mips) ... but only had problem state instructions necessary to run fortran. supposedly they were placed at collection points for doing initial data reduction.

there were also some number of vendor clones

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

computer industry scenairo before the invention of the PC?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: computer industry scenairo before the invention of the PC?
Newsgroups: alt.folklore.computers
Date: Wed, 06 Oct 2004 19:49:34 -0600
"Jack Peacock" writes:
Meanwhile downtown a now defunct realty holding company had their computing center (370 based I think, been a while) proudly displayed on the ground floor of a bank building, in the heart of the Denver financial district. One could stroll by and read the printout from the printers, thoughtfully placed next to the windows. The main boxes were further back, with the analyst desks between the window and mainframe. I suppose it was a statement of the relative expendability of analysts/programmers and the mainframe. The site violated virtually every security policy imaginable, but they never had any problems. The company ran out of money and folded when investors finally figured out the property they held wasn't covering the return on investment. Jack Peacock

summer of 1970 ... i spent a week in denver ... mostly 3rd shift shooting a cp/67 problem on 360/67 that was in a highly visible show-case first floor position in a high rise .... King Resources.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

computer industry scenairo before the invention of the PC?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: computer industry scenairo before the invention of the PC?
Newsgroups: alt.folklore.computers
Date: Wed, 06 Oct 2004 20:28:02 -0600
... oh and slightly related
https://www.garlic.com/~lynn/99.html#15 Glass Rooms

and even more drift, xerox buying sds to become xds.
https://www.garlic.com/~lynn/2002o.html#78 Newsgroup cliques?

somewhat interesting history read
http://www.feb-patrimoine.com/Histoire/english/information_technology/information_technology_3.htm

starting out mentioning that feb. '64, norm rasmussen founded ibm cambridge science center at 545 tech sq.
https://www.garlic.com/~lynn/subtopic.html#545tech

but also mentions introduction of SDS Sigma 7 in 1966, first delivery of SDS-940 in Apr, 1966, and XDS buys SDS in 1969.

misc more on SDS sigma7 (also referenced in the "newsgroup cliques" posting):
http://www.andrews.edu/~calkins/profess/SDSigma7.htm

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

computer industry scenairo before the invention of the PC?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: computer industry scenairo before the invention of the PC?
Newsgroups: alt.folklore.computers
Date: Thu, 07 Oct 2004 09:09:47 -0600
ref:
https://www.garlic.com/~lynn/2004m.html#14

actually it wasn't a bug ... it was that king resources wanted to run os with an isam application under cp/67.

the standard os's and the standard 360 architecture used real addresses for i/o. an os running in a cp/67 virtual machine, when it did a virtual SIO ... the virtual SIO simulation routine would call CCWTRANS to make a shadow copy of the (virtual machine's) CCW program (and associated control data), translate all the virtual addresses to real addresses (pin'ing the virtual pages in real storage as it processed the shadow CCWs), and as necessary translate various control data.

the problem was that ISAM channel programs could use all sorts of self-modifying channel programming .... where the target of a read operation might be some later CCW in the channel program (or possibly some of its control data).

the problem with ISAM running under cp/67 was that the target address for such read operations would be translated to a real address in a virtual machine page ... modifying the original virtual ccws ... and not touching the actual executing shadow CCWs.

so i had to come up with a hack to CCWTRANS to recognize ISAM channel programs doing self-modifying I/O channel programming ... and retarget stuff to the shadow copy channel program (as opposed to the virtual channel program).
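
purely as illustration of the shadow-copy idea (not the actual CCWTRANS code, and not real 360 ccw layouts), a minimal C sketch with made-up types and a made-up page-pinning helper; the self-modifying case is the read whose target falls inside the virtual channel program itself:

/* minimal sketch of shadow CCW translation -- hypothetical structures,
 * not the cp/67 CCWTRANS implementation */
#include <stdint.h>
#include <stddef.h>

typedef struct {            /* simplified 360-style CCW */
    uint8_t  op;            /* command code (read/write/seek/tic/...) */
    uint32_t addr;          /* data address */
    uint16_t flags;
    uint16_t count;
} ccw_t;

/* assumed helper: translate a guest virtual address to a real address,
 * pinning the containing virtual page in real storage */
extern uint32_t virt_to_real_pin(uint32_t vaddr);

#define OP_READ 0x02        /* illustrative command code only */

/* build a shadow copy of the guest's channel program, translating data
 * addresses; if a read targets a CCW inside the (virtual) channel program
 * itself, retarget it at the shadow copy so the self-modification lands
 * on the CCWs the channel is actually executing */
void ccwtrans(const ccw_t *vccw, ccw_t *shadow, size_t n,
              uint32_t vprog_addr, uint32_t shadow_real_addr)
{
    for (size_t i = 0; i < n; i++) {
        shadow[i] = vccw[i];
        uint32_t tgt = vccw[i].addr;
        if (vccw[i].op == OP_READ &&
            tgt >= vprog_addr && tgt < vprog_addr + n * sizeof(ccw_t)) {
            /* self-modifying case: point the read at the shadow program */
            shadow[i].addr = shadow_real_addr + (tgt - vprog_addr);
        } else {
            shadow[i].addr = virt_to_real_pin(tgt);
        }
    }
}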

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

mainframe and microprocessor

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: mainframe and microprocessor
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 07 Oct 2004 13:16:09 -0600
jsavard@excxn.aNOSPAMb.cdn.invalid (John Savard) writes:
Today's mainframes do indeed use microprocessors in them.

But the software used with mainframe computers has continued to progress somewhat since the early 1980s. For example, IBM updated the architecture used on the System/360 to produce the z/Architecture, a 64-bit version.

Some mainframes use specialized custom microprocessors. Some supercomputers use commercial chips like the Opteron or the Itanium, others use custom chips.

Today, microprocessor technology is just far and away the most cost-efficient way to implement a computer architecture.


total trivial drift ... most of the system/360 and system/370 mainframe machines were really collections of "microprocessors" running microprogramming to conform to something ... the main processor tended to be a microprocessor running microprogramming that implemented the 360/370 instructions ... and it might or might not have integrated channels ... in which case the microprocessor was time-shared between providing 360/370 functionality and channel programming functionality. then there were all sorts of microprocessors in the mainframe control units.

as an undergraduate in the '60s ... i got to work on a project where we reverse engineered the ibm mainframe channel interface and built a channel interface board that went into an interdata/3 minicomputer and programmed the interdata to emulate a mainframe control unit. random refs:
https://www.garlic.com/~lynn/submain.html#360pcm

another interesting example was the transition from 370/158 & 370/168 to 303x machines. The 370/158 had integrated channels ... i.e. the processor engine inside the 158 had microprogramming for both 370 instruction operation and channel i/o operation (and the processor engine was shared between the two sets of microprogramming).

the 303x line had a 3033, a 3032, and a 3031 along with something called a channel director. the 3033 was the 168 wiring diagram remapped from 4-circuit/chip technology to something like 40-circuit/chip technology that was about 20 percent faster. somewhere in the 3033 development cycle, work was done on optimizing the logic (in part to better use onchip operations) and the 3033 came out about 50 percent faster than the 168.

the 3032 was a 168 repackaged to use channel directors

the 3031 was a 158 repackaged to use a channel director

the 303x channel director was actually a 158 processor engine with just the integrated channel microcode (w/o the 370 microcode) ... and a 3031 was a 158 processor engine with just the 370 microcode (w/o the channel microcode). in some sense a 3031 plus channel director was an smp two-processor machine ... with one processor dedicated to running 370 microcode and essentially a second nearly identical processor engine dedicated to running the integrated channel microcode.

another example is the 370/115 and 370/125. a 115 had a shared memory bus with 9 processor positions, every processor identical. depending on the 115 features specified, you might have 4-6 processors installed. there would be one processor with the 370 instruction set microcode and the other processors would have various different microcode implementing various kinds of controller and i/o features. a basic 115 processor engine was about 800kips ... and the microcode averaged approx. 10 instructions for every 370 instruction (resulting in the 370 microcode processor delivering about 80kips). The 125 was identical to a 115 except that the processor that ran the 370 microcode was an approx. 1 mip engine ... yielding approximately 100kips 370 (otherwise everything else was identical to a 115).
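
just restating that arithmetic (nothing here beyond the numbers in the paragraph above):

/* back-of-envelope: native engine rate divided by the average number of
 * microinstructions per 370 instruction gives the effective 370 rate */
#include <stdio.h>

int main(void)
{
    double engine_kips_115 = 800.0;    /* 115 native engine, ~800 kips */
    double engine_kips_125 = 1000.0;   /* 125 native engine, ~1 mip    */
    double ucode_ratio     = 10.0;     /* ~10 microinstructions per 370 instruction */

    printf("115: ~%.0f kips 370\n", engine_kips_115 / ucode_ratio);   /* ~80  */
    printf("125: ~%.0f kips 370\n", engine_kips_125 / ucode_ratio);   /* ~100 */
    return 0;
}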

the 115/125 were also the first machines that required HONE
https://www.garlic.com/~lynn/subtopic.html#hone
applications for a salesman to create an order. HONE was an internal online field and sales support system ... in the late 70s the major US HONE datacenters were consolidated in cal., which resulted in the largest single-system image operation that I knew of (at least at the time) ... however, there were also major (and minor) clones of the HONE system running in numerous other places around the world. The cal. HONE consolidation was within a couple miles of another major online (commercial) time-sharing
https://www.garlic.com/~lynn/submain.html#timeshare
service using the same operating system.

in the early 80s, earthquake and disaster considerations resulted in the US HONE installation being replicated, first with an installation in Dallas and then with a 3rd installation in Boulder. There was load-sharing and fall-over coordinated between the three datacenters.

random other 360/370 mcode posts:
https://www.garlic.com/~lynn/submain.html#mcode

in the late 70s, an effort was started to consolidate the myriad of corporate microprocessors around 801 risc. a major project called "fort knox" focused in part on replacing the microprocessors implementing 370 machines (w/801s). In theory, the 4341 follow-on would have used an 801 risc-based microprocessor.

I contributed to a document that helped kill the effort ... not because i disliked 801 risc processors .... but because chip technology was starting to advance to the point where 370s could be implemented directly in silicon (eliminating the intermediate microprogramming overhead).

in the early 80s, ibm germany had developed the 370 roman chip-set that had approx. the performance of a 370/168 (i.e. about 3mips). minor recent reference:
https://www.garlic.com/~lynn/2004m.html#13 Whatever happened to IBM's VM PC software

In 1985, i drafted a series of documents for a somewhat blade-like specification
RMN.DD.001, Jan 22, 1985
RMN.DD.002, Mar 5, 1985
RMN.DD.003, Mar 8, 1985
RMN.DD.004, Apr 16, 1985


using a mixture of memory boards, roman chip boards, and 801 risc blue iliad chip boards. the density packing problem was primarily heat related (i was trying to get something like 80 boards packed in a standard rack-mount configuration).

drift and a minor reference from 1992
https://www.garlic.com/~lynn/95.html#13
https://www.garlic.com/~lynn/96.html#15

and recent posting with some topic drift with respect to above in a database newsgroup
https://www.garlic.com/~lynn/2004m.html#0
and touched on a posting in another thread in this n.g.
https://www.garlic.com/~lynn/2004m.html#5

and a recent posting about some time-lines
https://www.garlic.com/~lynn/2004m.html#15
from
http://www.feb-patrimoine.com/Histoire/english/information_technology/information_technology_3.htm
... although i might have slight quibbles about some of the stuff, it does mention fort knox, future systems, various machine dates, etc.

random other 801/fort knox postings
https://www.garlic.com/~lynn/subtopic.html#801

random other future system postings
https://www.garlic.com/~lynn/submain.html#futuresys

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Whatever happened to IBM's VM PC software?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Whatever happened to IBM's VM PC software?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 07 Oct 2004 13:24:27 -0600
Sterling Garwood writes:
I always thought IBM really missed the boat --- they had a nice, widely accepted and supported single user OS (CMS) and a hardware platform (the PC). A port of CMS with some extensions to the Intel 8088 would have changed histroy significantly....

there was a project started to do something like that ... but it pretty much fell victim to feature creep ... with lots of different people wanting to take advantage of the reset and restart to do their most favorite thing. at one point before it finally imploded there were 300 people writing specifications. the original proposal was 10-12 people to just do it.

along the way there were various other politics. for a while the small group (on the west coast) had gotten the charter from boca to do software (while boca was purely a hardware effort). there were monthly sanity checks with boca about still letting the west coast have the software charter. sometime before product announce ... boca changed its mind and decided that it wanted to actually own both the software and hardware charters (even if it involved outsourcing pieces under their contract).

note that standard cms was (& is) extremely disk and memory intensive compared to the pc standards of the early 80s (although in later years it has been more & more possible to trade-off memory caching with disk intensive operation).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

computer industry scenairo before the invention of the PC?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: computer industry scenairo before the invention of the PC?
Newsgroups: alt.folklore.computers
Date: Thu, 07 Oct 2004 13:27:08 -0600
glen herrmannsfeldt writes:
It sounds like a big security hole to me. You assume it will do the right modification, but can you be sure?

Then there are the ISAM self modifying channel programs and SET ISAM ON for some VMs. Does it really allow modification to the real channel program?


yes ... but there are all sorts of gotcha's and checking so that you don't violate security and integrity considerations

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Whatever happened to IBM's VM PC software?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Whatever happened to IBM's VM PC software?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 07 Oct 2004 14:04:12 -0600
dgriffi writes:
So, what then would have happened when the price for the CMS-PC came down to the point where normal humans could afford them? Another thing that would spur widespread adoption of this machine is availability of games. What sort of games would be possible on a CMS-PC besides the obvious text-adventures?

in the early 80s, the author of rex(x) wrote a multi-player spacewar game for 3270s ... which could use graphics to display various positions (but was limited to keyboard for input). it transparently used local special message as well as networked special message ... so the game players could be in different "CMS" virtual machines on the same physical machine or on different machines.

there was a problem that each individual player was implemented in their own cms virtual machine ... so you found a couple people writing automated players ... that could react and respond much faster than normal humans. the game ecology was subsequently modified so that energy use increased non-linearly as reaction times dropped below a certain threshold. it didn't eliminate the automated players ... but it slightly leveled the playing field ... at least as far as the response time issue was concerned.
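
i don't have the actual game code; purely as illustration of the kind of non-linear penalty described above (the threshold and curve here are made up), something of this shape:

/* hypothetical sketch of a non-linear energy penalty for reacting faster
 * than a human-ish threshold; the real game's constants and curve aren't
 * known here -- these numbers are invented for illustration */
#define HUMAN_THRESHOLD_MS 200.0      /* assumed reaction-time threshold */

double energy_cost(double base_cost, double reaction_ms)
{
    if (reaction_ms >= HUMAN_THRESHOLD_MS)
        return base_cost;                      /* human-speed play: no penalty */
    double speedup = HUMAN_THRESHOLD_MS / reaction_ms;
    return base_cost * speedup * speedup;      /* cost grows non-linearly below the threshold */
}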

better graphics became possible with 3279 and loading program symbols.

the issue wasn't that standard cms was precluded from taking advantage of various human factors features ... it was just that programs tended to be written to work with the features that were available.

slightly related post about former vm/cms la-area systems engineer implementing cms editor and script for trs80
https://www.garlic.com/~lynn/2004l.html#74 Specifying all biz rules in relational data

minor past posts about multi-user spacewar
https://www.garlic.com/~lynn/2001j.html#26 Help needed on conversion from VM to OS390
https://www.garlic.com/~lynn/2004c.html#34 Playing games in mainframe

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

computer industry scenairo before the invention of the PC?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: computer industry scenairo before the invention of the PC?
Newsgroups: alt.folklore.computers
Date: Thu, 07 Oct 2004 14:13:23 -0600
glen herrmannsfeldt writes:
So does it end the channel program at that point, go into CP to check that the right things were done, and then restart it?

isam typically didn't ... it typically read seek arguments to dynamically modify operation rather than reading replacements for actual ccws. the integrity issue for seek arguments was whether one virtual machine could seek outside the area on a disk that it was entitled to. the solution was to limit isam-on to full-pack and/or attached disks ... i.e. the virtual machine was entitled to the full pack ... so there wasn't an integrity issue regardless of what the virtual machine happened to introduce as a seek argument.

later tcam/vtam would have program-modified CCWs ... as opposed to I/O-modified CCWs. this is where certain CCWs had the PCI flag turned on ... which would generate a hardware interrupt ... and allow the processor to change the CCW program "on the fly" .... CCW execution was defined as being purely synchronous ... so if the processor handled the PCI interrupt and modified the CCW program before the channel got to that point ... then things went one way as opposed to another. A special virtual machine diagnose was introduced for tcam/vtam to issue when they had modified a running channel program ... to indicate to the cp kernel that it should retranslate the ccw sequence.
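
a hypothetical sketch of the shape of that (the structures and the diagnose stand-in are made up; this isn't the tcam/vtam or cp code):

/* hypothetical sketch of program-modified channel programs: a CCW with
 * the PCI flag on raises an interrupt while the channel keeps running;
 * the interrupt handler patches a later CCW "on the fly" and, when
 * running under VM, tells CP to retranslate its shadow copy.
 * all names and layouts here are invented for illustration. */
#include <stdint.h>

typedef struct {
    uint8_t  op;
    uint32_t addr;
    uint16_t flags;       /* would include the PCI flag bit */
    uint16_t count;
} ccw_t;

extern void vm_diagnose_retranslate(ccw_t *prog);   /* stand-in for the special diagnose */

void pci_interrupt_handler(ccw_t *prog, int patch_index, uint32_t new_addr)
{
    prog[patch_index].addr = new_addr;    /* modify a CCW the channel hasn't reached yet */
    vm_diagnose_retranslate(prog);        /* so CP's shadow channel program catches up   */
}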

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Lock-free algorithms

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Lock-free algorithms
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 08 Oct 2004 10:25:33 -0600
Eric writes:
No I am not sure which is why I was being tentative in my wording. If it uses per-CPU free lists then that could explain the result. (I thought Linux used a buddy heap but maybe that was long ago.)

How one deals with that depends on how you perceive the likelihood of hitting the free list in real life, and possibly compensating.

I am pretty sure it would use a spinlock to guard real heap allocations. If the heap retained just the last freed object in order to optimize free-alloc sequences then I would likely arrange the test to avoid that optimization. You could compensate by allocating 2 objects and freeing the second immediately. This would force it to perform a real allocation as it most likely would in real life and thereby invoke the heap lock.

The results would show differences between the tests using pre-allocated objects (the optimistic test) and heap allocated (the pessimistic test).


cp/67 started out with free storage management where blocks were kept on a list sorted by storage address ... and when storage was given back ... it was possible to merge adjacent blocks into larger contiguous storage. this list of available elements could grow to many hundreds of elements .... and searching for best-fit allocation and sorted re-adding was sometimes hitting 20-30 percent of kernel cpu time ... especially after significant efforts had been made to optimize many other pathlengths, and even with using the search list hardware instruction.

lincoln labs had defined a special add-on instruction for the 360/67 called search list .... that could run a list scan as a single instruction. cp/67 kernel storage management used this instruction by default ... and had special missing-instruction emulation of search list if cp/67 was running on a 360/67 w/o the instruction installed. while the search list instruction saved the instruction-loop decode overhead ... there was still the internal loop with all the storage fetches and compares.

free storage was enhanced to support subpools in the early 70s. storage request sizes in certain ranges were rounded to standard sizes and an attempt was made to satisfy the request from the corresponding subpool (pulling the top element off the list); if not, the standard list was searched. when the storage was released, if it was a subpool size, it was returned to a push-down/lifo subpool chain. under typical operation the majority of all kernel storage request activity was managed thru the subpool mechanism ... dropping the related kernel cpu overhead to a couple percent (rather than 20-30 percent).

w/o counting the trace table entry, the total module pathlength was something like 14 instructions to pull something off a push/pop subpool (save regs, adjust size, index subpool headers, remove entry, restore regs, return). it was also an easy operation for compare&swap. there was misc. administrative stuff that garbage collected the subpools and returned the storage to the standard list.

this implementation was enhanced in the 3081 time-frame to address smp cache thrashing, so that storage allocations were rounded to a multiple of the cache line size and aligned on a cache line boundary.
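
a minimal C sketch of the subpool idea (per-size push-down/lifo chains with a fall-back to a general list), including the cache-line rounding just mentioned; this is an illustration only, not the cp/67 code, and the real thing used compare&swap on the chain heads for smp (omitted here):

/* sketch of subpool free-storage management: requests in common size
 * ranges are rounded to a standard size and served LIFO from a per-size
 * subpool; misses fall back to a (slower) general allocator */
#include <stdlib.h>

#define CACHE_LINE   128                  /* assumed cache line size */
#define MAX_SUBPOOL  4096                 /* larger requests bypass the subpools */
#define NPOOLS       (MAX_SUBPOOL / CACHE_LINE)

typedef struct block { struct block *next; } block_t;

static block_t *subpool[NPOOLS];          /* one push-down/lifo chain per rounded size */

static size_t round_size(size_t n)        /* round up to a cache-line multiple */
{
    if (n == 0)
        n = 1;
    return (n + CACHE_LINE - 1) & ~(size_t)(CACHE_LINE - 1);
}

void *fs_alloc(size_t n)
{
    size_t sz = round_size(n);
    if (sz <= MAX_SUBPOOL) {
        block_t **head = &subpool[sz / CACHE_LINE - 1];
        if (*head) {                      /* subpool hit: pop the top element, short path */
            block_t *b = *head;
            *head = b->next;
            return b;
        }
    }
    /* subpool miss or large request: fall back to the general allocator
     * (malloc-family call stands in for the sorted general free list) */
    return aligned_alloc(CACHE_LINE, sz);
}

void fs_free(void *p, size_t n)
{
    size_t sz = round_size(n);
    if (sz <= MAX_SUBPOOL) {              /* push back onto the lifo subpool chain */
        block_t *b = p;
        b->next = subpool[sz / CACHE_LINE - 1];
        subpool[sz / CACHE_LINE - 1] = b;
        return;
    }
    free(p);
}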

ref B. Margolin, et al, Analysis of Free-Storage Algorithms, IBM System Journal 10, pgs 283-304, 1971

and for a little drift, this system journal article summary
http://www.sigmod.org/sigmod/dblp/db/journals/ibmsj/ibmsj10.html

also mentions an article in the previous edition about program restructuring for virtual memory (by hatfield and gerald). this work at the science center eventually resulted in the release of the vs/repack semi-automated program restructuring product.

free-storage algorithm article
http://domino.research.ibm.com/tchjr/journalindex.nsf/e90fc5d047e64ebf85256bc80066919c/7c6d60f9060c7e7a85256bfa00685a7c?OpenDocument

hatfield & gerald's article (also from 1971)
http://domino.research.ibm.com/tchjr/journalindex.nsf/e90fc5d047e64ebf85256bc80066919c/9260d68c69f3965d85256bfa00685a76?OpenDocument

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Help! I'm trying to understand PKI - especially CA's role

Refed: **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Help! I'm trying to understand PKI - especially CA's role
Newsgroups: comp.security.misc
Date: Fri, 08 Oct 2004 11:12:40 -0600
Wimbo <wimbo_online@_REMOVETHIS_hotmail.com> writes:
The CA verifies the credentials and signs the public key with the private key of the CA and sends it back to Alice. (The verification credentials depend on the requested certificate class. Class 1 certificates are only validated by e.g. a valid credit card number. The higher the class, the more personal it gets. With a class 3 certificate the CA knows for sure that you are the person you say you are.)

certificates were originally targeted at the offline email scenario from the early '80s to handle a situation where a recipient was receiving email from somebody with whom they had no prior contact and/or knowledge. the scenario was that the recipient dialed their local "post office", downloaded email, hung up, and then needed to authenticate email from total strangers w/o resorting to any additional operations.

the typical business scenario has been that parties establish some prior interaction and maintain local information about their interacting parties. this information ... possibly including a public key ... is stored and managed by the recipient (aka the PGP model). this has been expanded with the pervasive spread of online infrastructure so that a recipient can have access to the information (about the sender) either locally or remotely.

in the (offline) certificate model ... the recipient's local authentication table, instead of having information about the sender (and their public key), contains information (and a public key) about a certification authority.

the sender of the offline communication packages the message, their digital signature and a certificate issued by a certification authority ... containing various information about the sender (including the public key) ... and sends it off. the recipient then uses the certification authority's public key (from their authentication table) to validate the certificate ... and then they use the information contained in the certificate (supplied by the certification authority) about the sender to validate the actual message (in lieu of having their own information about the sender).

in the early 90s, you found x.509 identity certificates that were starting to have more and more information about the sender. in the mid-90s, there was starting to be some realization that spraying identity certificates all over the world with lots of personal information could represent significant liability and privacy issues. this somewhat gave rise to relying-party-only certificates ... containing nothing more than possibly a sender's account number of some sort (like a payment card number) and the sender's public key.

1) the relying party registers the senders/applicants information and public key

2) the relying party creates a relying-party-only certificate containing an account number and a public key and sends a copy to the applicant.

3) for some subsequent operation, the applicant creates a message, digitally signs it, packages the message, the digital signature and the certificate, and transmits it to the relying party.

4) the relying party receives the message, extracts the account number from the message, retrieves the sender's public key from the account record, uses the public key to validate the sender's digital signature and accepts the message

the above four steps are the typical online business operation using the relying-party-only certificates that were appearing in the mid-90s. however, it is trivial to see from step 4 that relying-party-only certificates in online operations are redundant and superfluous, since the relying party never needs to reference the certificate (having direct access to the information in the account record).
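
to make that concrete, a C sketch of step 4 with hypothetical stand-in types and functions (not any particular library's api); note the attached certificate never gets looked at:

/* step 4: the relying party verifies against the public key already on
 * file for the account; the attached relying-party-only certificate is
 * never consulted.  all types and functions here are hypothetical. */
#include <stddef.h>

typedef struct pubkey pubkey_t;                                 /* opaque key type */

extern pubkey_t *lookup_account_pubkey(const char *acct);       /* the account record */
extern int verify_signature(const pubkey_t *key,
                            const void *msg, size_t msg_len,
                            const void *sig, size_t sig_len);

int accept_transaction(const char *acct,
                       const void *msg, size_t msg_len,
                       const void *sig, size_t sig_len,
                       const void *cert, size_t cert_len)
{
    (void)cert;                         /* redundant and superfluous */
    (void)cert_len;
    pubkey_t *key = lookup_account_pubkey(acct);
    if (key == NULL)
        return 0;                       /* no registered key: reject */
    return verify_signature(key, msg, msg_len, sig, sig_len);
}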

the other thing of note from various of the mid-90s operations involving financial transactions and relying-party-only certificates, was that the typical payment card transaction is on the order of 60-80 bytes and the relying-party-only certificates could be on the order of 4k to 12k bytes.

the issue was that the recipient/relying-party had the original of all the information that might be contained in a relying-party-only certificate (or otherwise had access to it using an online environment). the sender might possibly generate hundreds of financial transactions, appending the redundant and superfluous relying-party-only certificate to each one and sending it on its way to the relying-party (which has the original and a superset of all possible information contained in the certificate). Since the relying-party-only certificate is otherwise redundant and superfluous, the only possible remaining purpose for having a relying-party-only certificate is to cause enormous payload bloat (by a factor of one hundred) in the transaction traffic.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM Spells Out Mainframe Strategy

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM Spells Out Mainframe Strategy
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 08 Oct 2004 13:24:50 -0600
edgould@ibm-main.lst (Ed Gould) writes:
Those capabilities include IBM's Geographically Dispersed Parallel Sysplex (GDPS), a clustering technology for dynamically managing and mirroring critical storage, processor and network resources and for extending the technology to the Linux environment. Ultimately, IBM wants to make it possible for other platforms to use GDPS capabilities.

an early progenitor of geographically dispersed parallel sysplex (gdps) would be HONE ...
https://www.garlic.com/~lynn/subtopic.html#hone

when the us hone datacenters were all consolidated in cal. (the largest single-system loosely-coupled cluster at the time) and then replicated first in dallas and then in boulder ... with load-sharing and fall-over between the centers. there were also numerous HONE clones dispersed all over the world ... but they were clone operations as opposed to integrated cluster operations.

another progenitor was the Peer-Coupled Shared Data architecture that my wife did when she served her stint in POK in charge of loosely-coupled architecture.
https://www.garlic.com/~lynn/submain.html#shareddata

later as part of doing ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

we coined the terms disaster survivability and geographic survivability (i.e. various forms of dispersed continuous operation) to distinguish from traditional disaster recovery
https://www.garlic.com/~lynn/submain.html#available

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Shipwrecks

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Sat, 09 Oct 2004 11:49:47 -0600
"Tom Linden" writes:
It is partly the fault of the implementation language, had they chosen a language with higher level of abstraction and capability, like PL/I they would not have had those problems.

it isn't quite as much the abstraction as the implementation ... c string libraries tend to hide lengths, using implicit, null-terminated (variable length) strings. pli implementations tend to keep real lengths associated with buffers and strings (and catch things like incorrect/overrun lengths).

the old study of multics (implemented in pli) claimed that there were never any observed buffer related failures.

note that even if you were using assembler ... in a predominantly pli environment ... the same buffer length paradigm conventions would tend to be followed in the assembler code ... and therefore the assembler code would be as unlikely to suffer buffer overflows as the pli code (aka ... one could claim that it is the standard pli buffer conventions ... as opposed to the actual language itself ... that make it much more resistant to such vulnerabilities).
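
to make the paradigm difference concrete, a small C illustration (sketch only): the null-terminated convention hides the buffer length, while a pli-like descriptor carries max and current lengths so a copy can be range-checked:

#include <string.h>

/* C convention: nothing stops src from being longer than dst */
void c_style(char *dst, const char *src)
{
    strcpy(dst, src);                     /* overruns dst if src is too long */
}

/* descriptor-style buffer: max length and current length travel with it */
typedef struct {
    size_t  max;        /* capacity of data[] */
    size_t  len;        /* bytes currently in use */
    char   *data;
} buf_t;

int buf_copy(buf_t *dst, const buf_t *src)
{
    if (src->len > dst->max)
        return -1;                        /* caught: would overrun the target */
    memcpy(dst->data, src->data, src->len);
    dst->len = src->len;
    return 0;
}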

lots of vulnerability, exploit, etc posts:
https://www.garlic.com/~lynn/subintegrity.html#fraud

misc. past posts mentioning multics security
https://www.garlic.com/~lynn/2002e.html#47 Multics_Security
https://www.garlic.com/~lynn/2002e.html#67 Blade architectures
https://www.garlic.com/~lynn/2002f.html#42 Blade architectures
https://www.garlic.com/~lynn/2002h.html#52 Bettman Archive in Trouble
https://www.garlic.com/~lynn/2002j.html#66 vm marketing (cross post)
https://www.garlic.com/~lynn/2002j.html#75 30th b'day
https://www.garlic.com/~lynn/2002k.html#18 Unbelievable
https://www.garlic.com/~lynn/2002l.html#42 Thirty Years Later: Lessons from the Multics Security Evaluation
https://www.garlic.com/~lynn/2002l.html#44 Thirty Years Later: Lessons from the Multics Security Evaluation
https://www.garlic.com/~lynn/2002l.html#45 Thirty Years Later: Lessons from the Multics Security Evaluation
https://www.garlic.com/~lynn/2002m.html#8 Backdoor in AES ?
https://www.garlic.com/~lynn/2002m.html#10 Backdoor in AES ?
https://www.garlic.com/~lynn/2002m.html#58 The next big things that weren't
https://www.garlic.com/~lynn/2002n.html#27 why does wait state exist?
https://www.garlic.com/~lynn/2002o.html#78 Newsgroup cliques?
https://www.garlic.com/~lynn/2002p.html#6 unix permissions
https://www.garlic.com/~lynn/2002p.html#37 Newbie: Two quesions about mainframes
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003c.html#47 difference between itanium and alpha
https://www.garlic.com/~lynn/2003f.html#0 early vnet & exploit
https://www.garlic.com/~lynn/2003i.html#59 grey-haired assembler programmers (Ritchie's C)
https://www.garlic.com/~lynn/2003j.html#4 A Dark Day
https://www.garlic.com/~lynn/2003j.html#38 Virtual Cleaning Cartridge
https://www.garlic.com/~lynn/2003k.html#3 Ping: Anne & Lynn Wheeler
https://www.garlic.com/~lynn/2003k.html#10 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003k.html#48 Who said DAT?
https://www.garlic.com/~lynn/2003l.html#19 Secure OS Thoughts
https://www.garlic.com/~lynn/2003m.html#1 Password / access rights check
https://www.garlic.com/~lynn/2003o.html#5 perfomance vs. key size
https://www.garlic.com/~lynn/2003o.html#68 History of Computer Network Industry
https://www.garlic.com/~lynn/2004b.html#51 Using Old OS for Security
https://www.garlic.com/~lynn/2004c.html#47 IBM 360 memory
https://www.garlic.com/~lynn/2004e.html#27 NSF interest in Multics security
https://www.garlic.com/~lynn/2004e.html#36 NSF interest in Multics security
https://www.garlic.com/~lynn/2004f.html#20 Why does Windows allow Worms?
https://www.garlic.com/~lynn/2004h.html#2 Adventure game (was:PL/? History (was Hercules))
https://www.garlic.com/~lynn/2004j.html#29 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004j.html#41 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004l.html#21 "Perfect" or "Provable" security both crypto and non-crypto?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Shipwrecks

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Sat, 09 Oct 2004 23:00:14 -0600
"Tom Linden" writes:
Lynn, I don't disagree with you, but I would add that the way people tended to write PL/I, particularly using ON-conditons to handle aberrant behaviour made for a different discipline, and one that would have been more difficult to emulate with C code, which after all, is kin to a macro assembler.

however, cp/67 and vm/370 were all assembler ... no on-conditions ... but used very similar buffer constructions (to pli) with explicit max. lengths and current lengths. one might even claim that pli may have inherited some of the buffer implementation characteristics from underlying infrastructures. in any case cp/67 and vm/370 had little or none of those kinds of buffer overflows.

in the past, i've referenced one scenario that i know had happened. as an undergraduate i had done ascii/tty support for cp/67 ... and it had been incorporated and shipped in the standard product. in part because ttys were limited to 80 bytes, i had used one-byte arithmetic for calculating lengths.

somewhere along the line, somebody added a new kind of tty device (plotter, display screen, something?) which had lengths of possibly something like 1000 bytes (in any case more than 256). various places were modified to accommodate the longer lengths ... except for the one-byte arithmetic. the claim is that the system crashed something like 27 times in a single day. check somewhere on this page
http://www.multicians.org/thvv/360-67.html
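
the failure mode is easy to reproduce in miniature; a C sketch of what one-byte length arithmetic does once a device's lengths go past what a byte can hold (the numbers are made up for illustration):

/* sketch of the tty length bug: one-byte arithmetic is fine for 80-byte
 * terminals, but silently wraps once somebody adds a device with longer
 * transfers */
#include <stdio.h>

int main(void)
{
    unsigned char len8;            /* the original one-byte length field */
    unsigned int  actual = 1000;   /* new device: ~1000 byte transfers   */

    len8 = (unsigned char)actual;  /* 1000 mod 256 = 232 */
    printf("actual length %u, one-byte length %u\n", actual, len8);
    /* downstream code that sized buffers off the one-byte value, while
     * the device transferred the actual length, ends badly */
    return 0;
}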

in the very early 80s, i had done a programmable problem analysis and determination tool for vm & cms. it was eventually in use by the majority of the internal installations and supposedly by nearly all PSRs (the support people who worked on customer reported problems).

I collected failure types ... and added intelligent diagnostic code looking for signatures and analysis of the common failure modes. buffer lengths weren't a significant issue ... common failures were pointer problems ... weird paths that failed to establish correct pointer values in registers, dangling pointers ... aka synchronization problems where delayed operations used storage locations that had been released. misc. past posts referencing problem determination, zombies, dump readers, etc
https://www.garlic.com/~lynn/submain.html#dumprx

when we started ha/cmp project
https://www.garlic.com/~lynn/subtopic.html#hacmp
we did a detailed vulnerability analysis ... and one of the results was a prediction that c-based infrastructures were going to have something like two orders of magnitude (one hundred times) more buffer length related failures than other infrastructures that we had been familiar with.

another minor ha/cmp reference
https://www.garlic.com/~lynn/95.html#13

some topic drift, during ha/cmp we also coined the terms disaster survivability and geographic survivability to differentiate from disaster recovery
https://www.garlic.com/~lynn/submain.html#available

some additional topic drift ... i've repeatedly stated that the internal network (somewhat related to the number of internal machines) was larger than the arpanet from just about the beginning until sometime in mid-85 ... in part because the internal nodes effectively contained a form of gateway function from the start .... which arpanet/internet didn't get until the big cutover to internetworking protocol on 1/1/83. some specific posts about the internal network
https://www.garlic.com/~lynn/internet.htm#0
https://www.garlic.com/~lynn/internet.htm#22
https://www.garlic.com/~lynn/99.html#112

misc. collected posts about the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

some of this same internal network technology was also used for bitnet and earn. specific reference to earn
https://www.garlic.com/~lynn/2001h.html#65

misc. collected posts about bitnet and earn
https://www.garlic.com/~lynn/subnetwork.html#bitnet

random other posts on vulnerabilities, exploits, risk, etc
https://www.garlic.com/~lynn/subintegrity.html#fraud

random specific posts on the buffer overflow issue
https://www.garlic.com/~lynn/subintegrity.html#overflow

and assurance
https://www.garlic.com/~lynn/subintegrity.html#assurance

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Shipwrecks

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Sun, 10 Oct 2004 13:15:10 -0600
"David Wade" writes:
I have seen a buffer over run issue in VM/SP. It was in the SNA terminal support. If you read beyond the end of your buffer you could access any ones terminal buffers, It took us ages to get IBM to fix it, and as we wanted to connect the machines to a semi public X.25 network we were help up for about 6 months.

As the say what goes round comes round ...


we didn't say it couldn't happen ... we just predicted that the frequency in a C environment would be two orders of magnitude higher; i also gave the tty terminal example from cp/67.

i could also make comments about SNA not being a system, not being a network, and not being an architecture

as an aside, my wife served a stint as chief architect of amadeus ... where she was spec'ing a lot of x.25 ... which generated some tension with the sna forces, who got her replaced. it didn't do them any good, amadeus went with x.25 anyway; aka there were/are three major res systems, aa's, united's and the "european" (amadeus).

when we were also doing HSDT and building our own high speed backbone
https://www.garlic.com/~lynn/subnetwork.html#hsdt

minor reference
https://www.garlic.com/~lynn/internet.htm#0

there was this definition from somebody in the sna group
https://www.garlic.com/~lynn/94.html#33b

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Shipwrecks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Sun, 10 Oct 2004 13:29:23 -0600
aka ... quantity can make some difference .... there is some report that traffic related fatalities this year are climbing back up close to 50,000/annum; it would be significant if that was 100 times larger, aka 5mil traffic related fatalities per annum instead of just 50k/annum.

there is also the analogy to security proportional to risk
https://www.garlic.com/~lynn/2001h.html#61

there would be a big difference between having, say, ten vulnerability events per annum at, say, $1m/event (or a $10m/annum aggregate) ... and having a hundred times as many, or a thousand vulnerability events per annum (at $1m/event), for say $1b/annum (even worse, a hundred thousand vulnerability events per annum for say $100b/annum).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Shipwrecks

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Sun, 10 Oct 2004 15:29:22 -0600
Brian Inglis writes:
... and the programmers would have to think about security!

and/or at least integrity and assurance.

note that there has been the capability-based gnosis/keykos/eros genre of systems

misc. from search engine on gnosis, keykos, eros

capability based system reference:
http://www.zilium.de/joerg/capbib.html

misc other gnosis/keykos/eros
http://cap-lore.com/CapTheory/upenn/
http://citeseer.ist.psu.edu/context/1082998/0
http://www.eros-os.org/faq/secure.html
http://www.eros-os.org/design-notes/KernelStacks.html
http://portal.acm.org/citation.cfm?id=319344.319163&coll=GUIDE&dl=ACM

misc. past postings mentioning gnosis/keykos
https://www.garlic.com/~lynn/aadsm16.htm#8 example: secure computing kernel needed
https://www.garlic.com/~lynn/aadsm17.htm#31 Payment system and security conference
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#22 No more innovation? Get serious
https://www.garlic.com/~lynn/2001b.html#73 7090 vs. 7094 etc.
https://www.garlic.com/~lynn/2001g.html#33 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001g.html#35 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001n.html#10 TSS/360
https://www.garlic.com/~lynn/2002f.html#59 Blade architectures
https://www.garlic.com/~lynn/2002g.html#0 Blade architectures
https://www.garlic.com/~lynn/2002g.html#4 markup vs wysiwyg (was: Re: learning how to use a computer)
https://www.garlic.com/~lynn/2002h.html#43 IBM doing anything for 50th Anniv?
https://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002j.html#75 30th b'day
https://www.garlic.com/~lynn/2003g.html#18 Multiple layers of virtual address translation
https://www.garlic.com/~lynn/2003h.html#41 Segments, capabilities, buffer overrun attacks
https://www.garlic.com/~lynn/2003i.html#15 two pi, four phase, 370 clone
https://www.garlic.com/~lynn/2003j.html#20 A Dark Day
https://www.garlic.com/~lynn/2003k.html#50 Slashdot: O'Reilly On The Importance Of The Mainframe Heritage
https://www.garlic.com/~lynn/2003l.html#19 Secure OS Thoughts
https://www.garlic.com/~lynn/2003l.html#22 Secure OS Thoughts
https://www.garlic.com/~lynn/2003l.html#26 Secure OS Thoughts
https://www.garlic.com/~lynn/2003m.html#24 Intel iAPX 432
https://www.garlic.com/~lynn/2003m.html#54 Thoughts on Utility Computing?
https://www.garlic.com/~lynn/2004c.html#4 OS Partitioning and security
https://www.garlic.com/~lynn/2004e.html#27 NSF interest in Multics security

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Shipwrecks

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Mon, 11 Oct 2004 08:08:02 -0600
Morten Reistad writes:
The "Computer Industry" never learned this. Even IBM always had severe problems with software quality; and noone can blame them for not trying.

Perhaps this fact about Computer companies being unable to develop good core software is something like honest car salesmen; a treat the business as a whole seems never to accomplish.


some of it was possibly (people) continuity (fast growth with lots of new hires and promotions) as well as costs.

the standard process during the 70s and 80s was monthly cumulative (source & binary) update "tapes" called PLCs. regression tests were relatively nominal (typically functional & component) and rarely involved extensive system level tests.

when i did the resource manager ... there were 2000 calibration and verification benchmarks that took 3 months elapsed time for the initial release. they wanted me to re-integrate with each base-system PLC and ship a resource manager PLC every month. I said that i would have to run a minimum of 100 calibration and verification benchmarks for each such PLC ... and I didn't have the resources to do that more than once every three months.

while the calibration and verification benchmarks were primarily performance and capacity planning oriented ... there were typically also some number of severe stress tests. when I started the work for the initial release ... the stress tests were guaranteed to crash the system (unmodified system, modified system, system with resource manager added, etc). as part of preparing the resource manager, i had to redesign and reimplement the kernel serialization operations from scratch (to eliminate both serialization-related failures and zombie/hung processes).

i didn't like having system failures, as well as losing files/data. when i joined the science center, the standard production cp67 build process for the floor system was to generate a binary image that included a (bootable) binary copy on tape. In part because that binary copy occupied so little of the tape, i added to the tape all the source and build files needed to recreate that specific system "build".

years later when Melinda
http://www.leeandmelindavarian.com/Melinda#VMHist

was looking for some historical stuff, i still had one of these old 1600 bpi system tapes ... that included the original exec procedures for multi-level source maintenance.

random past posts on multi-level source update
https://www.garlic.com/~lynn/2002n.html#39 CMS update
https://www.garlic.com/~lynn/2002p.html#2 IBM OS source code
https://www.garlic.com/~lynn/2003.html#58 Card Columns
https://www.garlic.com/~lynn/2003e.html#66 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003f.html#1 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003j.html#14 A Dark Day
https://www.garlic.com/~lynn/2003j.html#45 Hand cranking telephones
https://www.garlic.com/~lynn/2004b.html#59 A POX on you, Dennis Ritchie!!!

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Shipwrecks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Mon, 11 Oct 2004 08:22:03 -0600
"Tom Linden" writes:
It is certainly possible to write 'safe' code with C, it is just harder any you need to reinvent the wheel each time. PL/I provides both bounds and stringerange checking, and with ON conditions for error recovery. The cost in cpu consumtion is in my view insignificant in comparsion to the added functionality. You might say, that avoiding it is a foolish form of optimization. Most organizations have (used to?) procedures manuals for coding practices, and it should include such things. Ultimately, you get what you pay for.

but as part of the basic infrastructure ... buffers tended to have header fields with maximum buffer length and current occupancy length. string operations utilized the header fields as part of normal operation .... w/o even turning on additional checking.

you could still blow things by individual subscripting (which could be caught by turning on checking) ... but gobs of the infrastructure used the explicit length fields as part of their standard operation. you had to work harder with explicit code to do buffer overruns.

the underlying systems (like the os/360 and follow-ons) tended to have explicit field lengths permeating their infrastructure ... which tended to require a programmer to work much harder to generate buggy code with respect to buffer length operations; it wasn't that a programmer couldn't violate buffer lengths ... but with default paradigm of having explicit buffer length information all over the place ... a programmer tended to have to work much harder to come up with bad length code.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

MS Corporate. Memory Loss is Corporrate Policy ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: MS Corporate. Memory Loss is Corporrate Policy ?
Newsgroups: alt.folklore.computers
Date: Mon, 11 Oct 2004 11:50:57 -0600
Peter Flass writes:
It's interesting that IBM invented STAIRS (their text-search program) in response to their anti-trust lawsuit. The first thing the legal staff did was to input all the documents that at that time were hardcopy only, and then make them searchable. Remember, this was the days before e-mail. M$ seems to "think different".

some parts of the company had email in the early 70s, as well as the start of the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

... i was once on a business trip in the early 70s to paris and had to do quite a bit of fiddling to read my email back in the states.

part of an interesting history series that i've referenced before
http://www.infotoday.com/searcher/jul04/ardito_bjorner.shtml

some random stairs references (from a search engine):
http://www-306.ibm.com/software/data/sm370/about.html
http://www.mnemonic.com/mims/dabney86.html

... and
http://www.findarticles.com/p/articles/mi_m0SMG/is_n2_v8/ai_6289628
... from above
IBM was actually the first in the field when Stairs was introduced in 1974, but IBM has not significantly enhanced the package recently. "Unfortunately, now that the market is taking off, Stairs is at the end of its life cycle," McCarthy notes. However, he says IBM is rewriting Stairs to layer on top of DB2 and SQL/DS.

....

as to the litigation ... i have some vague recollection of somebody telling me of one of the motels near the san jose airport being nearly completely occupied by lawyers working on the case for extended periods of time (months? years?)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Shipwrecks

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Mon, 11 Oct 2004 14:49:59 -0600
"David Wade" writes:
Ah such fond memories. The bug was in a DMK module which I think is pretty basic OS......

there were lots of things that could cause CP/DMK failures ... that is one reason that i did the programmable problem determination and analysis tool ... which had a bunch of stuff to automagically look for signatures/characteristics of the most common failures
https://www.garlic.com/~lynn/submain.html#dumprx

another project that I did in the early to mid '80s was to take the complete CP/DMK spooling support and re-implement 95% of the function in vs/pascal running in a virtual address space. the CP/DMK spooling function was a fairly large amount of assembler code running in privileged kernel state, and failures in the code would bring down the whole system. also, errors on spooling disks could directly or indirectly affect overall system availability.

my objectives were to 1) implement in a higher level language, 2) partition and isolate feature/function in a separate address space, 3) increase system availability (isolating nearly all components of the spooling operation that might affect system availability), 4) significantly increase spooling thruput and performance, and 5) make it much simpler to add new feature/function.

a trivial example was the possibility for the system to continue running when the spool file system address space was not running and/or none of the spool file disk drives were available (or they were offline).

random past postings on the spool file system demonstration rewrite:
https://www.garlic.com/~lynn/99.html#34 why is there an "@" key?
https://www.garlic.com/~lynn/2000b.html#43 Migrating pages from a paging device (was Re: removal of paging device)
https://www.garlic.com/~lynn/2000d.html#11 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001l.html#25 mainframe question
https://www.garlic.com/~lynn/2002k.html#25 miscompares per read error
https://www.garlic.com/~lynn/2003b.html#33 dasd full cylinder transfer (long post warning)
https://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
https://www.garlic.com/~lynn/2003b.html#46 internal network drift (was filesystem structure)
https://www.garlic.com/~lynn/2003g.html#27 SYSPROF and the 190 disk
https://www.garlic.com/~lynn/2003k.html#26 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2003k.html#63 SPXTAPE status from REXX

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

tracking 64bit storage

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: tracking 64bit storage
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 12 Oct 2004 16:40:59 -0600
tedmacneil@bell.blackberry.net (Ted MacNEIL) writes:
C will not only let you shoot yourself in the foot, it will drive you to the store fo ammo, load the gun, and then triple-dog-dare you to go ahead and do it. ..

C is being mis-used. It was always intended to be a language for writing systems software. NOT application code. As such, it is just two steps above assembler code. Which also dares you to shoot.

Its predecessers (BCPL & B) were just one step above (no typing), and were intended for the same thing. C has 'loose' typing, which can be over-ridden, or ignored, in some cases.

The language was always intended for sysprog-types, NOT the great unwashed that couldn't programme their way out of a rectangular container consisting of refined wood pulp.


there has been a thread running with respect to buffer overflow exploits in a.f.c.

we did detailed vulnerability analysis when we were starting ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

and predicted that buffer related problems would be two orders of magnitude higher for c-implemented applications than what we were seeing in other environments.

a big part of this was that there was sufficient length obfuscation in the libraries ... that it would be much easier for people to shoot themselves in the foot ... not so much a language issue as a whole-implementation-infrastructure issue.

the counter-example is multics, implemented in pli, which is claimed to not have had any buffer overflow problems.

collection of buffer overflow posts
https://www.garlic.com/~lynn/subintegrity.html#overflow

general collection of vulnerabilities, exploits, fraud, etc
https://www.garlic.com/~lynn/subintegrity.html#fraud

part of the prediction about C and buffer overflows was having done a programmable problem analysis and determination tool in the early 80s
https://www.garlic.com/~lynn/submain.html#dumprx

that became fairly widely used. I collected a lot of information about typical failures and implemented automagic examination looking for various kinds of common failure signatures.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Shipwrecks

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Wed, 13 Oct 2004 08:02:40 -0600
Anne & Lynn Wheeler writes:
there were lots of things that could cause CP/DMK failures ... that is one reason that i did programmable problem determination and analysis tool ... that had a bunch of stuff to automagically look for signatures/characterists of the most common failures
https://www.garlic.com/~lynn/submain.html#dumprx


ref:
https://www.garlic.com/~lynn/2004m.html#33

some ten years earlier, i had written a PLI program to analyze assembler code. it attempted to create a representation of each machine instruction, and to build code blocks and logic flow. one of the things it did was emit pseudo high-level code with if/then/else, do/while, etc. logic flow structures.

one of the premises was that supposedly go-tos (machine branches) were bad and the if/then/else genre was good. one of the issues was that several kernel assembler modules dealt with state information and had been highly optimized to minimize pathlength. there were some relatively modest assembler kernel routines that would produce well over ten-level nested if/then/else structures ... even with multiple state tests collapsed into a single if statement. the result was that the original assembler test/branch logic appeared to be more understandable than the if/then/else/case/do/while/etc representation; it was the more application-like code that appeared to morph better into high level logic. i finally put in a nesting limit for pseudo code construction ... and a limit of 5-6 levels deep seemed to be a reasonable approx. value for understanding the code.

the other issue was that a large number of kernel failures involved arriving at some code point w/o having loaded a value (like an address pointer) into some register. the failure forensic effort involved attempting to reconstruct the original execution flow thru all the test/branch possibilities ... to determine the specific anomalous execution thread that failed to load a register value.

one of the things that i put in the PLI program was building "code blocks" (sequential code with neither interior branches into nor out of it), each summarizing registers used and registers changed. a specific execution thread then became a specific sequence of code blocks ... it was then possible to backtrack a specific execution thread to see if it involved at least one code block that set a specific register value. the exercise was then to follow all possible execution threads (sequences of code blocks) to see if there were any that failed to load a register that was used later by some code block in the same execution thread.
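
a much simplified C sketch of that idea (not the old pli program): each code block carries bitmasks of registers it uses and registers it sets, and walking one path of blocks flags any register used before any block on the path has set it:

#include <stdio.h>

typedef struct {
    unsigned short uses;   /* bit i set: block reads register i (16 regs) */
    unsigned short sets;   /* bit i set: block loads register i */
} code_block_t;

/* returns a bitmask of registers used before being set along the path */
unsigned short used_before_set(const code_block_t *path, int nblocks)
{
    unsigned short set_so_far = 0, bad = 0;
    for (int i = 0; i < nblocks; i++) {
        bad        |= path[i].uses & ~set_so_far;
        set_so_far |= path[i].sets;
    }
    return bad;
}

int main(void)
{
    /* hypothetical path: block 0 sets r1; block 1 uses r1 and r2;
     * nothing on this path ever set r2 */
    code_block_t path[] = {
        { .uses = 0x0000, .sets = 0x0002 },   /* sets r1        */
        { .uses = 0x0006, .sets = 0x0000 },   /* uses r1 and r2 */
    };
    printf("used-before-set mask: 0x%04x\n", used_before_set(path, 2));   /* 0x0004 = r2 */
    return 0;
}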

random past posts about "assembler processing" pli program:
https://www.garlic.com/~lynn/94.html#12 360 "OS" & "TSS" assemblers
https://www.garlic.com/~lynn/2000c.html#41 Domainatrix - the final word
https://www.garlic.com/~lynn/2000d.html#36 Assembly language formatting on IBM systems
https://www.garlic.com/~lynn/2002j.html#8 "Clean" CISC (was Re: McKinley Cometh...)
https://www.garlic.com/~lynn/2004d.html#21 REXX still going strong after 25 years
https://www.garlic.com/~lynn/2004k.html#36 Vintage computers are better than modern crap !

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Multi-processor timing issue

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multi-processor timing issue
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 13 Oct 2004 10:21:14 -0600
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
The TOD clock was there from the beginning of the System/370, if I recall. On the early models, STCK wasn't always as fast as it should have been, but it always had a reasonable specification (much like gettimeofday, with an unspecified epoch).

But, up until at least MVS/XA, most of the system used the much worse System/360 clock design, so many of the benefits were lost in the conversion, and reading the clock comparator and the CPU timer required privilege for no reason that I can see.

This meant that you could read a precise real time but only an approximate CPU time, and you had to poke at control blocks or check timestamps to investigate the end of time slices.


standard 360 had the location 80 timer in memory ... which was updated something like every 3.3 milliseconds. most software used it for approx. time-of-day. it was a 32-bit value ... with a period of about 15 hrs, and the bit that represented 3.3 milliseconds was what got incremented

the 360/67 had a high-resolution location 80 timer in memory ... same format and period as the regular 360 timer, except the low bit was updated ... giving it about 13-something microsecond resolution.

cp/67 used it for time-of-day approximation, time-slice control of tasks, and accounting for both the "supervisor" time spent in the kernel on behalf of some application as well as the "problem" time spent running the application.

slight drift ... when i was an undergraduate, i was on a project that reverse engineered the ibm channel interface and built a channel board that went into an interdata/3, and programmed the interdata/3 to emulate a mainframe control unit (somebody wrote an article blaming us for helping originate the plug-compatible controller business).
https://www.garlic.com/~lynn/submain.html#360pcm

one of the first bugs was "red-lighting" the 360. the memory bus was shared between the processor, channels, and (at least) the timer update function. if a second memory update by the timer came around while a previous timer storage update (of location 80) was still pending ... it would hang the machine and "red-light"

with updates every 13-point-something microseconds, it was much more of a problem than with updates every 3.3 milliseconds; past post
https://www.garlic.com/~lynn/2003n.html#51
... regular loc. 80 has bit 23 tic'ing 300 times per second and the high resolution timer feature has bit 31 tic'ing 300*256 times per second (and the 370 64-bit clock register has a specification that it operates as if bit 51 tics every microsecond; actual resolution is processor dependent).
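
for a sanity check on those numbers, a quick back-of-the-envelope in C (the 300/sec and 300*256/sec rates are the ones given above; everything else follows from them):

/* bit 23 of the 32-bit word has weight 2^8, so the word wraps after
   2^24 standard tics; the high-resolution feature tics the low bit
   (bit 31) 256 times as often */
#include <stdio.h>

int main(void)
{
    double std_tic  = 1.0 / 300.0;             /* seconds per bit-23 tic */
    double hres_tic = 1.0 / (300.0 * 256.0);   /* seconds per bit-31 tic */
    double period   = (double)(1u << 24) / 300.0;   /* full 32-bit period */

    printf("standard tic : %.2f milliseconds\n", std_tic * 1e3);   /* ~3.33  */
    printf("high-res tic : %.2f microseconds\n", hres_tic * 1e6);  /* ~13.02 */
    printf("32-bit period: %.1f hours\n", period / 3600.0);        /* ~15.5  */
    return 0;
}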

in the morph from cp/67 to vm/370 ... the fine-grain time-slicing and accounting stuff was converted to the new 370 clock stuff .... in part because the high-resolution timer feature was dropped in 370 (on all machines) ... only the low resolution 3.3 millisecond update was available .... which was way (way, way) too coarse grained for all the accounting and time-slice stuff that vm was doing (and in later machines the timer update of storage was eliminated altogether).

most of the other software systems on 370 were never really designed to have either fine-grain time-slice control or particularly accurate timing for accounting and resource management.

random drift ... i had done the original fair share scheduler as an undergraduate on cp/67
https://www.garlic.com/~lynn/subtopic.html#fairshare

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Multi-processor timing issue

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multi-processor timing issue
Newsgroups: comp.arch
Date: Wed, 13 Oct 2004 10:48:45 -0600
Joe Seigh writes:
I still think moving the epoch is a rather stupid idea. The unix epoch is currently at 00:00:22 GMT Jan 1, 1970. I know, "as if" leap seconds don't exist. We'll just pretend that parts of reality don't exist and those 22 points in time that can't be expressed in time_t format don't exist. They never happened.

some of us got dragged into looking at the architecture of the 64-bit 370 timer. i have vague memories of spending something like 3 months on the effort.

supposedly time was defined as the number of seconds since the start of the century, with bit 51 defined to work as if it tic'ed every microsecond ... making bit 31 tic every 1024/1000 second (i.e. slightly less than once a second). the actual implementation varied across different machines ... although there was an architecture definition that two consecutive instructions referencing the clock value would not see the same value (the actual resolution could be somewhat proportional to the machine speed).

there was a long discussion about whether the start of the century was 1900 or 1901. there were also long detailed discussions (speculation?) about what to do with leap seconds ... since they couldn't be predicted.

with bit 31 being 1024/1000 of a second ... it gives the timer a period of some 130-odd years ... but starting at 1900 means that it wraps in another 30 years or so.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Multi-processor timing issue

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multi-processor timing issue
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 13 Oct 2004 11:25:08 -0600
glen herrmannsfeldt writes:
I thought there was a special register to keep timer updates from getting lost for some number of ticks. I am not sure of how many, though.

what i remember was that (on the 360/67 with the high resolution timer) the timer could hold a single pending tic for storage update ... and if the channel held the storage bus for two tics ... which could be just slightly longer than a tic interval (if the channel happened to get the storage bus just prior to a tic ... and kept it until the next tic) ... the machine "red-lighted". as a result the controller card (in the interdata3) that interfaced to one of the channels ... had to have the logic to release the channel bus interface so the channel could release the storage bus interface (at appropriate intervals). since that was much faster than the interdata3 could directly process ... it all had to be handled in the hardware of the channel interface card.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Multi-processor timing issue

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multi-processor timing issue
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 13 Oct 2004 11:27:08 -0600
Anne & Lynn Wheeler writes:
what i remember was that (on the 360/67 with high resolution timer)

on the other hand, it was over 35 years ago

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Result of STCK instruction - GMT or local?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Result of STCK instruction - GMT or local?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 13 Oct 2004 14:21:04 -0600
john__krew@ibm-main.lst (John Krew) writes:
Just recently our system programmer decided to differentiate between local and GMT time on our 7060-P30 "mainframe" (development computer). Until then, local and GMT were identical (even though we are not in the GMT time zone).

I have noticed that the result of a STCK instruction returns GMT, not local time. On the other hand, dates returned by vendor software (for example, CICS, or the TSO TIME command), return local time. Is there some way to configure the system so that STCK would return local time, or is this an impossibility?


remember the 370 64-bit hardware tod clock is defined as the time since the start of the (previous) century GMT (modulo whether the century started in 1900 or 1901 and a small matter of sporadic leap seconds). time is defined to appear as if bit 51 tics every microsecond (processors could actually implement higher or lower resolution) ... which made bit 31 tic every 1024/1000 seconds (the high word is approximately a count of seconds).

so if the clock was set correctly for 14:00 10/13/2004 gmt ... it would be the number of seconds since the start of the previous century and you would then add/subtract the time adjustment from gmt for local time.

some places fake it out .... setting the local time to zero adjustment from gmt ... and then setting the hardware tod clock (with the elapsed time since the start of the previous century gmt).

for applications, part of the issue is that stck is an untrapped, problem-state instruction that just stores the hardware clock value (aka the application program gets whatever value is in the hardware clock).

there is then frequently some program fiddling that takes the 64-bit hardware clock value and typically converts it into something somewhat more human readable. the hardware clock will be whatever value it is (base gmt or base local time) and the fiddling will have to be in any code conversion. the problem gets worse if there is some system stuff fiddled that has to display zero-displacement GMT (to take out the hardware clock conversion fiddling) and also indicate some sort of valid non-GMT time (say like EST and EDT, and automagically handle the switching between daylight saving and standard time).
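
a minimal C sketch of that sort of fiddling (assuming the clock really holds GMT; the sample STCK value, the 1900->1970 rebasing constant of 2,208,988,800 seconds, and the fixed -5 hour offset are illustrative only, and leap seconds are ignored as discussed above):

/* raw tod doubleword: bit 51 = 1 microsecond, epoch 1900-01-01 GMT;
   shift right 12 to get microseconds, divide down to seconds, rebase
   to the unix epoch, then apply whatever local offset applies */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define SECS_1900_TO_1970  2208988800ULL   /* same offset NTP uses */

int main(void)
{
    uint64_t tod = 0xBCA66B69F4B84000ULL;  /* hypothetical STCK result */

    uint64_t usecs = tod >> 12;                            /* microseconds since 1900 */
    time_t gmt   = (time_t)(usecs / 1000000ULL - SECS_1900_TO_1970);
    time_t local = gmt + (-5 * 3600);                      /* e.g. EST, no DST logic */

    printf("gmt:   %s", asctime(gmtime(&gmt)));
    printf("local: %s", asctime(gmtime(&local)));          /* gmtime reused for display */
    return 0;
}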

slightly related stck posts from thread today in comp.arch
https://www.garlic.com/~lynn/2004m.html#36 Multi-processor timing issue
https://www.garlic.com/~lynn/2004m.html#37 Multi-processor timing issue
https://www.garlic.com/~lynn/2004m.html#38 Multi-processor timing issue
https://www.garlic.com/~lynn/2004m.html#39 Multi-processor timing issue

possibly more than you ever want to know about the hardware tod clock
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/4.6.1?SHELF=EZ2HW125&DT=19970613131822

subtopics: 4.6.1.1 Format
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/4.6.1.1?SHELF=EZ2HW125&DT=19970613131822&CASE=

4.6.1.2 states
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/4.6.1.2?SHELF=EZ2HW125&DT=19970613131822&CASE=

4.6.1.3 changes in clock state
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/4.6.1.3?SHELF=EZ2HW125&DT=19970613131822&CASE=

4.6.1.4 setting and inspecting the clock
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/4.6.1.4?SHELF=EZ2HW125&DT=19970613131822&CASE=

and then there is the store clock (stck) instruction
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/CCONTENTS?SHELF=EZ2HW125&DN=SA22-7201-04&DT=19970613131822

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

EAL5

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: EAL5
Newsgroups: bit.listserv.ibm-main
Date: Wed, 13 Oct 2004 14:51:50 -0600
tedmacneil@bell.blackberry.net (Ted MacNEIL) writes:
Maybe I'm out to lunch. But, if PR/SM meets Common Criteria Evaluation Assurance Level 5, then what happens to Internal Coupling Links, HIPERSOCKET, & EMIF? These all depend on LPARs sharing data!

take the protection profile and the security target .... the evaluated environment doesn't have to include networking facilities ... it could just be some bare bones hardware w/o any of the typical software that could be run on that hardware.

i looked at trying to get an EAL5-high evaluation for a chip.

The base chip had ROM that had been evaluated at EAL5-high ... and it was typically used in configuration where some crypto application was loaded into EEPROM and executed.

I added the crypto to the ROM/silicon of the manufactured chip and eliminated the capability from the ROM that applications could be (dynamically) loaded into EEPROM and executed. The problem is that I have had a hard time getting a protection profile for EAL5-high for the crypto.

The target environments of the two chips are typically the same ... and one would think that the static ROM-only execution would provide higher security and assurance than (the same) chip allowing (dynamic) loading and executing of code in EEPROM.

However, with the crypto in the static ROM/silicon ... the crypto has to be part of the target evaluation (and try getting more than an EAL4-high evaluation for crypto). The chip that supported loadable EEPROM ... could be evaluated w/o anything in the EEPROM and so the security target didn't have to include any actual useful code as part of the evaluation.

In any case, I have a chip where the only execution allowed is firmly set in ROM/silicon ... which is nearly identical to a chip that allows loading and execution of code in EEPROM; the chip with absolutely fixed execution gets a lower assurance evaluation level than the nearly identical chip that supports loading and execution of code.

lots of random postings related to assurance:
https://www.garlic.com/~lynn/subintegrity.html#assurance

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Auditors and systems programmers

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Auditors and systems programmers
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 14 Oct 2004 08:46:40 -0600
Michael.CAIRNS@ibm-main.lst (CAIRNS,Michael) writes:
Cross posted to RACF_L.

I'm compiling a list of the things that auditors have insisted to my management must be implemented/changed about RACF and MVS/S390/zOS/etc over the years that were a) totally incorrect or b) indicated a fundamental lack of comprehension of the platform capabilities.

I promise to make public the results (if any), at least to this and other mainframe forums. I hope it may become a useful resource for future zOS auditors.

To that end - I will be most interested to see submissions/war stories/etc contributed by members of this list. Please mail me offline unless you feel the need to share...

Thanks in anticipation! :-)


somewhat related:
https://www.garlic.com/~lynn/2004l.html#61 Shipwrecks

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Multi-processor timing issue

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multi-processor timing issue
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 14 Oct 2004 11:55:34 -0600
hack@watson.ibm.com (Michel Hack) writes:
Lynn misremembers. Bit 31 (low-order bit of top word of 64-bit Dword TOD) represents 2^20 microseconds (bit 51 is the microsecond bit; the effective frequency of the 64-bit clock is 4.096GHz) -- that's about 1.05s. So the period of the clock is about 142.8 years, and it will roll over shortly before midnight at the end of 17 Sept 2042.

... for that post, i hadn't remembered at all ... i just did a quick estimate of the number of years based on 4 billion seconds ... aka roughly bit 31 is slightly more than a second and 2**32 is somewhat more than 4 billion; 4b/3600/24/365 came out 125.839-some years (and i knew it was more than that)

so if i had used a calculator, 2**52/1000000/3600/24/365 gives 142.8

however from
https://www.garlic.com/~lynn/2004m.html#37 multi-processor timing issue

i did say that bit 51 tics once a microsecond ... and bit 31 tics slightly less than once a second ... about 1024/1000 ... there is a lot of code in the vm kernel that would multiply the high word by 1000 and then SRDL (shift right double logical) "10" ... to divide by 1024 ... to convert the high word from timer units to seconds.

subsequent post (later in the day) on similar subject in bit.listserv.ibm-main
https://www.garlic.com/~lynn/2004m.html#40 Result of STCK instruction - GMT or local?

which has urls to the pop
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/4.6.1?SHELF=EZ2HW125&DT=19970613131822

above summary has the clock as approximately 143 years

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Multi-processor timing issue

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multi-processor timing issue
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 14 Oct 2004 12:00:22 -0600
Anne & Lynn Wheeler writes:
i did say that bit 51 tics once a microsecond ... and bit 31 tics slightly less than once a second ... about 1024/1000 ... there is a lot of code in the vm kernel that would multiply the high word by 1000 and then SRDL (shift right double logical) "10" ... to divide by 1024 ... to convert the high word from timer units to seconds.

talk about a brain check ... i was looking at something else while i typed the above ... 1000/1024 (with srdl) was for converting seconds to timer units; 1024/1000, with sldl (multiply by 1024) and divide by 1000, was converting timer units to seconds.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Multi-processor timing issue

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multi-processor timing issue
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 15 Oct 2004 08:30:19 -0600
Terje Mathisen writes:
That code surely must be converting from us to ms, not seconds!

bit 51 tics every microsecond, bit 31 is slightly more than a second.

there is some amount of code that loaded just the first (high) 32 bits ... where bit 31 is approx. a second. from real seconds to timer value ... it used 1000/1024 ... from timer value to seconds it used 1024/1000 (there are slightly more seconds than bit 31 "tics"). refs:
https://www.garlic.com/~lynn/2004m.html#44 multi-processor timing issue
https://www.garlic.com/~lynn/2004m.html#43 multi-processor timing issue

when i did the resource manager ... i had a module called dmkstp (remove the 'dmk' ... and there was a tv commercial from the 60s). it frequently got control (on the order of every several seconds, depending on load) and did lots of calculations to track running resource utilization, estimating system bottleneck(s), basis of policy scheduling (including fair share, etc).

so there were lots of dmkstp values all over ... i didn't want to use a full doubleword ... but needed much better than one-second resolution. so the basis of the running resource utilization measure was taken to be about an 8-minute baseline. if i used a (32-bit) word and shifted things 12 bits ... i could have microsecond resolution and a period of (slightly more than) 2048 seconds (which seemed more than enuf for baseline calculations of 8 minutes (320 seconds)).
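
a sketch of that 32-bit snapshot (my reconstruction in C from the description above, not the dmkstp code itself):

/* keep the middle 32 bits of the tod doubleword (bit 51 = 1 microsecond,
   so shift right 12 and truncate); differences between two snapshots are
   microseconds, but treated as a signed quantity they are only good for
   2^31 microseconds -- slightly more than 2048 seconds -- which is why an
   hour of stop state blew up the elapsed-time calculations */
#include <stdio.h>
#include <stdint.h>

static uint32_t snapshot(uint64_t tod)
{
    return (uint32_t)(tod >> 12);            /* microseconds, modulo 2^32 */
}

int main(void)
{
    printf("usable range: %.1f seconds\n", 2147483648.0 / 1e6);   /* ~2147.5 */

    uint64_t t0 = 0x0123456789ABC000ULL;             /* arbitrary clock value */
    uint64_t t1 = t0 + (5000000ULL << 12);           /* five seconds later */
    int32_t elapsed = (int32_t)(snapshot(t1) - snapshot(t0));
    printf("elapsed: %d microseconds\n", elapsed);   /* 5000000 */
    return 0;
}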

one of the first reported failures ... was a divide error ... which was something that wasn't trapped in the kernel ... and resulted in system failure (dump and reboot). it turns out that the installation was trying to diagnose some (i/o?) hardware problem and had pushed (machine) stop on the front console ... and left the machine in stop for nearly an hour. when they finally got around to pushing start ... dmkstp got control ... calculated the elapsed time baseline for the period and then started to go around updating a lot of resource use statistics. since nearly an hour had elapsed since the last invocation ... some of the values were such that there were calculation(s) resulting in a divide check (program check 9). so the possible fixes were:

1) tell customers to not place the machine in stop state for more than a half hour at a time (this only actually happened once out of all the customers)

2) do a lot of sanity check on the calculations to prevent a divide check

3) implement a kernel divide check, fixup, and restart handler.

misc. past resource manager posts:
https://www.garlic.com/~lynn/subtopic.html#fairshare

in any case, it eventually got somewhat drummed into you to plan for the worst case scenario: what happens if the stop button is pushed for an arbitrary period between any two arbitrary instructions.

this became one of the assurance scenarios when i redid the i/o infrastructure for the disk engineering lab
https://www.garlic.com/~lynn/subtopic.html#disk

so that they could move development from stand-alone to an operating system environment; what if the stop button was pushed just before an i/o operation and the processor remained in stop state for an arbitrary period.

this also became an assurance scenario for ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

with regard to recovery in a highly parallel cluster environment: if a machine was placed in stop state ... then the other members of the cluster would eventually decide that it had died and invoke the cluster reconfiguration and recovery operation (possibly terminating locks and transactions in flight and restarting them). the issue was that the stopped machine might eventually wake up and happen to be just about to do an i/o operation ... say a disk write to a shared disk. the cluster reconfiguration and recovery operation may have terminated the locks associated with allowing that write operation. the question for such a cluster fall-over and recovery operation ... was whether this assurance scenario could be handled in such a way that no harm happens.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Shipwrecks

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Fri, 15 Oct 2004 08:48:58 -0600
Steve O'Hara-Smith writes:
if the 1000 users are all asking for an enormous allocation then it will fail anyway, the doubling approach simply minimises the calls to the allocation mechanism. Asking for a gigabyte 1k at a time as the buffer fills is a lot worse than asking for 1k, 2k, 4k, 8k, 16k, 32k, 64k, 128k, 256k, 512k, 1M, 2M, 4M, 8M, 16M, 32M, 64M, 128M, 256M, 512M, 1G. If 1G is past the allocation quota then it fails quicker with less impact on the system too.
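
(a minimal sketch in C of the geometric-growth approach described in the quote above; the buffer type and names are made up for illustration:)

/* grow the buffer by doubling instead of by a fixed 1k each time, so
   reaching N bytes costs only about log2(N) calls to the allocator,
   and an over-quota request fails on one of a handful of big asks */
#include <stdlib.h>
#include <string.h>

typedef struct {
    char  *data;
    size_t len;    /* bytes in use */
    size_t cap;    /* bytes allocated */
} growbuf;

int growbuf_append(growbuf *b, const void *src, size_t n)
{
    if (b->len + n > b->cap) {
        size_t newcap = b->cap ? b->cap : 1024;      /* start at 1k */
        while (newcap < b->len + n)
            newcap *= 2;                             /* 1k, 2k, 4k, ... */
        char *p = realloc(b->data, newcap);
        if (p == NULL)
            return -1;                               /* allocation refused */
        b->data = p;
        b->cap  = newcap;
    }
    memcpy(b->data + b->len, src, n);
    b->len += n;
    return 0;
}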

back in the mid-80s, ims hot standby had to contend with fall-over issues in configurations involving possibly 32,000 or more terminals. ims was keeping all its state information warm and ready to go ... the problem was with vtam attempting to re-establish 32,000 (or more) sessions "instantaneously". any guesses about the elapsed time, given the non-linear effects?

the situation is similar (but different) to the growth of large http/web operations in the early days of the web. tcp session initiation/termination had been viewed as a relatively infrequent operation ... compared to the rest of network processing. as a result, the typical implementation involved FINWAITs being on a linear list ... and processing involved linear scanning of the FINWAIT list (under the assumption that there would be trivial numbers of FINWAITs, if any). HTTP was basically a single turn-around exchange built on top of TCP ... which resulted in all the TCP session setup/tear-down happening for a possibly single-packet request/response.

all of a sudden a number of servers that looked good for toy web server demos .... were having a really hard time with scale-up ... finding that as soon as real load started to happen, their processors were spending 98 percent of total cpu in FINWAIT list scanning.
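
a toy model in C of why the linear FINWAIT list falls apart under that kind of load (the connection counts are illustrative; the point is just that total scanning work grows with the square of the connection rate):

/* pre-web assumption: the FINWAIT list is nearly empty, so a linear scan
   per connection teardown is harmless.  with every http request opening
   and closing its own tcp session, the i-th teardown scans on the order
   of i lingering entries, so total work is roughly n*(n-1)/2 */
#include <stdio.h>

int main(void)
{
    for (unsigned long n = 1000; n <= 1000000; n *= 10) {
        unsigned long long scans = (unsigned long long)n * (n - 1) / 2;
        printf("%8lu connections -> ~%llu list-entry visits\n", n, scans);
    }
    return 0;
}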

there was one of the unix vendors that had encountered the problem prior to the web growth period ... supporting a commercial environment with possibly 20,000 telnet sessions. how many remember the netscape browser download FTP servers? they added more and more, with recommendations about trying different servers (i think eventually netscape1 thru netscape20?). they eventually brought in a single machine from the vendor that had done the early TCP session scale-up support.

some (more) drift .... random refs to ims hot standby, peer-coupled shared data, automated operator, etc ... including a large financial transaction operation claiming in the '90s that it had gone over six years w/o a single outage (100 percent availability) and attributing that fact to

1) ims hot standby 2) automated operator

https://www.garlic.com/~lynn/submain.html#shareddata

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM Open Sources Object Rexx

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: IBM Open Sources Object Rexx
Newsgroups: alt.folklore.computers
Date: Fri, 15 Oct 2004 11:09:11 -0600
IBM Open Sources Object Rexx
http://developers.slashdot.org/developers/04/10/15/1324224.shtml?tid=156&tid=136

Rex(x) started on cms (rex ... before its name change to rexx) about the same time the OCO (object code only) wars were beginning ... i.e. decision was made to start shipping products OCO ... and users as well as SHARE and GUIDE user groups were complaining.

i decided i wanted to do a demonstration that rex(x) was not just another pretty scripting language ... so i stated that i was going to re-implement the system kernel dump debug & analysis tool (written in assembler) in REX(x) .... the objective was to

1) total effort to take less than 3 months elapsed time
2) using less than half my time
3) have at least ten times more function
4) run at least ten times faster

... and if it was ever shipped to customers ... the code would have to be shipped ... since rex(x) was an interpretive language (even with the emerging OCO mandates ... the source code still had to ship).

turns out that I was never allowed to ship it to customers (although I widely distributed it inside the corporation) ... but I was allowed to give a very long and detailed implementation presentation at SHARE meeting ... and subsequently there were a number of customer implementations.

i put in a lot of programmable automagic problem analysis ... so it was pretty trivial that the automagic stuff would execute at least ten times faster than a human manually entering all the commands.

however, the real trick was to make the interpreted rex(x) implementation run ten times faster than the assembler code. to do this, i eventually had a 120 line/instruction assembler stub that was used by dumprx to do various things.

random past dumprx postings
https://www.garlic.com/~lynn/submain.html#dumprx

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Shipwrecks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Fri, 15 Oct 2004 12:58:34 -0600
glen herrmannsfeldt writes:
The problems of algorithms not being linear as they scale up seems to come up over and over. One that I have run into a few times now is that most unix systems still do linear search through directories assuming that the number of files per directory is small. I once had 100,000 files to read from one directory. It turned out to be faster to run the files through the tar program and process the tar output than to read them directly.

It seems that tar reads files in the order they are stored in the directory, without sorting the list first. There is a good chance that is also the order the files are stored on disk.


cms (cambridge monitor system ... and then changed to conversational monitor system with the morph to vm/370) had directory performance changes for this by the mid-70s (approx. 30-some years ago).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

EAL5

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: EAL5
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 15 Oct 2004 13:21:31 -0600
tedmacneil@bell.blackberry.net (Ted MacNEIL) writes:
It reminds me of the time IBM got a high security rating for MVS, from the US government. I don't remember the actual rating (a, b, or c PLUS a number), but it was pre-ESA; only good if you stayed on that level of XA, a 3084, a specific level of microcode, a specific PUT level, and no user exits.

Tough criteria to meet. Just like EAL5. I used to work for a Bank, and while we were concerned about security, we didn't disable almost everything that made PR/SM the cat's-butt!


previous post:
https://www.garlic.com/~lynn/2004m.html#41 EAL5

i've been criticized for my characterization of common criteria compared to the orange book ... including having a merged security glossary and taxonomy with both orange book and common criteria terms and definitions intermixed ... see reference at
https://www.garlic.com/~lynn/index.html#glosnote

my characterization was that, in general, the orange book had a common reference model ... typically a multi-user, multi-level security operating system ... and the security target was built around that reference.

there were (at least) two issues

1) it was quite difficult for many implementations to meet security objectives assuming a multi-user, multi-level security operating system ... and frequently it required compensating procedures to demonstrate compliance (since the basic infrastructure couldn't meet it)

2) lots of things emerged in the industry requiring certification that didn't correspond to a multi-user, multi-level security operating environment

so (my characterization is that) common criteria allows defining a whole slew of different protection profiles .... that can be very specific to a particular operating environment ... and because of those definitions, products could get a higher security evaluation ... than they could if they were held to a multi-user, multi-level security reference.

at a meeting about a year ago on common criteria evaluations ... there was a summary that out of 64 evaluations ... sixty of them required special deviations.

the issue brought up was that supposedly the evaluations allow a customer to make an informed (security related) decision about choosing between different evaluated products; but if the evaluation process isn't exactly the same for each product ... what help is it to the customer?

random postings about assurance in general:
https://www.garlic.com/~lynn/subintegrity.html#assurance

some past postings about the gnosis/keykos/eros linage of operating system ... where there has been some claim that eros is targeted at an EAL7 evaluation (the first two generations, gnosis and keykos were done on 370, eros follow-on is for intel architecture)
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001b.html#73 7090 vs. 7094 etc.
https://www.garlic.com/~lynn/2001g.html#33 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001g.html#35 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001n.html#10 TSS/360
https://www.garlic.com/~lynn/2002f.html#59 Blade architectures
https://www.garlic.com/~lynn/2002g.html#0 Blade architectures
https://www.garlic.com/~lynn/2002g.html#4 markup vs wysiwyg (was: Re: learning how to use a computer)
https://www.garlic.com/~lynn/2002h.html#43 IBM doing anything for 50th Anniv?
https://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002j.html#75 30th b'day
https://www.garlic.com/~lynn/2003h.html#41 Segments, capabilities, buffer overrun attacks
https://www.garlic.com/~lynn/2003i.html#15 two pi, four phase, 370 clone
https://www.garlic.com/~lynn/2003j.html#20 A Dark Day
https://www.garlic.com/~lynn/2003k.html#50 Slashdot: O'Reilly On The Importance Of The Mainframe Heritage
https://www.garlic.com/~lynn/2003l.html#19 Secure OS Thoughts
https://www.garlic.com/~lynn/2003l.html#22 Secure OS Thoughts
https://www.garlic.com/~lynn/2003l.html#26 Secure OS Thoughts
https://www.garlic.com/~lynn/2003m.html#24 Intel iAPX 432
https://www.garlic.com/~lynn/2003m.html#54 Thoughts on Utility Computing?
https://www.garlic.com/~lynn/2004c.html#4 OS Partitioning and security
https://www.garlic.com/~lynn/2004e.html#27 NSF interest in Multics security
https://www.garlic.com/~lynn/2004m.html#29 Shipwrecks

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

EAL5

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: EAL5
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 15 Oct 2004 13:50:20 -0600
Efinnell15 writes:
Wasn't it B1 for MVS/XA? There was a Green Book and a Yellow Book put out as a collaboration between WSC and CIA. It was presented at SHARE and you get to the last chapter and "If you're connected to a network these criteria are invalidated".

What they're trying to validate is only the BCP w/o external influences. In this day of Plexes and interconnection it's more of an Ivory tower exercise. IIRC MULTICS was the only one that exceeded that.


but what system did they actually use? .... hint: what share committee did various gov. TLA people belong to ... look for installation code CAD in the share archives:
http://vm.marist.edu/~vmshare/

somebody told me that even as recently as a couple years ago, they noticed the daily visitor list at the front gate had a separator sheet from a certain operating system.

note of course ... they could get the actual source for system builds for this particular system.

there is a story that there was an extensive investigation into whether or not it would be possible to provide all the system source to a customer that exactly corresponded to (any) specific MVS release and it was eventually (after spending a very significant amount on the investigation) determined to not be practical.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

stop worrying about it offshoring - it's doing fine

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: stop worrying about it offshoring - it's doing fine
Newsgroups: alt.folklore.computers
Date: Fri, 15 Oct 2004 14:26:17 -0600
stop worrying about IT offshoring - it's doing fine
http://www.newsforge.com/article.pl?sid=04/10/15/1322259

related articles:
http://developers.slashdot.org/developers/04/10/15/1521231.shtml?tid=156&tid=187
http://www.usatoday.com/tech/techinvestor/industry/2004-10-14-programming-jobs_x.htm

as i mentioned in previous threads ... many of these articles tend to ignore the significant amount of outsourcing that went on with y2k remediation activities ... it possibly was viewed as a one-time, short-term bubble ... but it was very significant in establishing the business relationships ... especially with regard to core legacy business critical technology (while all the wiz-kids were focusing on building websites).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

merged security taxonomy and glossary updated

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: merged security taxonomy and glossary updated
Newsgroups: comp.security.misc
Date: Sat, 16 Oct 2004 11:42:30 -0600
i've updated my merged security taxonomy and glossary
https://www.garlic.com/~lynn/index.html#glosnote

with terms from the nasa document at
http://www.grc.nasa.gov/WWW/Directives/2810.1-TOC.html

which supposedly expired four days ago ...

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

4GHz is the glass ceiling?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 4GHz is the glass ceiling?
Newsgroups: comp.arch,alt.folklore.computers
Date: Sun, 17 Oct 2004 00:53:41 -0600
"del cecchi" writes:
Amdahl and the FS failure were the reason for the 3033 remap program. As I heard it the 3033 was done in like 18 months and was a card for card remap of a 370, perhaps the 168, from MST into MS255 HPCL-F.

i was told that it was the 370/168 wiring diagram, moved from technology with 4 circuits per chip ... to chip technology that was about 20 percent faster but had something like 40 circuits per chip. the initial remap would have just resulted in the 3033 being 20 percent faster (w/o taking advantage of the additional chip density). somewhere in the development cycle ... there was work on performance optimization of various critical areas ... and the 3033 actually came out about 50 percent faster than the 370/168.

there are stories about the pok lab director working 1st shift in his office and spending 2nd shift and parts of 3rd shift down with the engineers.

early in that time-frame, a couple of us from the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

had suckered some of the engineers into spending some extra-curricular time on a completely different 16-way smp design ... which the lab director put a stop to when he finally heard about what was going on. a group of the processor engineers had a weekly after-work bicycle ride that i was invited to join.

supposedly FS was somewhat motivated by the 360 plug-compatible controller business .... and there was some write-up trying to blame a project I worked on as an undergraduate for creating the business
https://www.garlic.com/~lynn/submain.html#360pcm

and that Amdahl supposedly left in part because of FS (FS would have had a significantly different processor design and significantly higher integration between the processor, channels, and controllers)
https://www.garlic.com/~lynn/submain.html#futuresys

Amdahl gave a talk in a large MIT auditorium in the early 70s about starting the company and its objectives. Part of the discussion was something about the business plan for the VC investment (some of the audience grilled him on it being Fujitsu) ... claiming that there was already at least $100b invested in 360/370 customer application software ... and even if IBM totally walked away from 360/370 (veiled reference to FS), there would be enuf 360/370 customer application software to keep Amdahl in the 370 hardware business for the next 30 years.

FS was aborted ... and IBM stayed in the 360/370 processor business (and customers continued to spend tons of money on 360/370 platform software development; while some of the FS people migrated to Rochester and the folklore about FS was reborn as the S/38).

however, (for even more drift) the clone 360/370 processors (like Amdahl) could be considered to have heavily contributed to the transition to object code only.

unbundling (june 23rd, 1969) resulted in ibm starting to separately charge for various things, including application software ... however kernel/hardware support software continued to be free.

I got to be the guinea pig for kernel priced software with the resource manager.
https://www.garlic.com/~lynn/subtopic.html#fairshare

the business transition was that direct hardware support would continue to be free, but kernel software that wasn't involved in directly supporting hardware or hardware features would start being priced. so as part of putting out the resource manager, i got to spend some amount of a six-month period with the business and pricing people on pricing kernel software.

eventually all kernel software became priced ... however, there was some transition period ... slightly related recent posting
https://www.garlic.com/~lynn/2004l.html#70 computer industry scenario before the invention of the PC?

and then the push for OCO (object code only) and no longer shipping source.

slightly related recent mainframe posting
https://www.garlic.com/~lynn/2004m.html#50 EAL5
other parts of the EAL5 mainframe thread
https://www.garlic.com/~lynn/2004m.html#41 EAL5
https://www.garlic.com/~lynn/2004m.html#49 EAL5

minor recent reference to oco & shipping source
https://www.garlic.com/~lynn/2004m.html#47 IBM Open Sources Object Rexx

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Shipwrecks

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers,bit.listserv.vmesa-l
Date: Sun, 17 Oct 2004 10:37:37 -0600
"Tom Linden" writes:
Not sure why you call it a religious war unless it is for you. For me it is simply a preference based on an analysis of the structure of the different languages, having coded in many over the last ~40 years and worked on many compilers (PL/I, Fortran, Pascal, Basic, C, Cobol, Jovial), so my bias, if you wish to call it that, is based on experience not lack of understanding. C is a primitive language compared to PL/I and is inherently more unsafe. And yes, I own the PL/I compiler for OpenVMS and Tru64.

for some more topic drift ... some of the ctss people went to 5th floor, 545tech sq ... implementing multics in pli

others of the ctss people went to 4th floor, 545tech sq, to the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

the science center also had part of the 2nd floor occupied by the machine room for the 360/67. it was originally a uniprocessor. however, there is this tale that when lincoln labs discontinued one of the processors in its 360/67 two-processor smp system ... somebody called the moving company and told them instead of moving the processor back to the kingston plant site ... they were to deliver it to 545 tech sq. I have some vague recollection that installing some of the stuff involved removing a window from the 2nd floor machine room and a crane lifting the box thru the window (but that may have been some other equipment).

for a time, the boston programming center had part of the 3rd floor. jean sammet and nat rochester (among others) were in the boston programming center. the boston programming center was also responsible for the conversational programming system (cps) ... which had an interactive/conversational pli and basic (it supported terminals, swapped processes in and out, and ran under common, non-virtual-memory os/360 on standard 360 processors). they also had done a special 360/50 microcode RPQ for cps that significantly sped up cps operation.

all the original cp/67 and cms work was done at the science center ... but as it became more popular, there was a "development" group that spun off from the science center. in the morphing to vm/370 it started to grow rapidly and absorbed the cps people (and their space on the 3rd floor) ... dissolving the boston programming center (with some of the people, like nat rochester, going to the science center). the vm/370 group continued to grow rapidly, outgrowing the space on the 3rd floor and eventually moved out to the old sbc building in burlington mall (sbc having been transferred to cdc as part of a litigation settlement).

there was the (in)famous cms "copyfile" command, (re)done for (vm/370) cms with an enormous number of options ... by one of the former cps people.

in any case (looking in boxes for something else), I recently ran across a five-page hardcopy document by him ... describing his port of cps to vm/cms ... and how to use it in the cms environment. However, I don't remember it ever being released to customers.

in some sense, the cps port to cms would have been similar to the apl\360 port to cms (done by the cambridge science center ... at the time apl\360 was being done out of the philly science center) that was released initially as cms\apl ... i.e. eliminating all the swapping and task management ... since that was being handled by the cp kernel ... just leaving a single-task interactive/conversational interface.

for some more topic drift ... cms\apl (which later morphed into apl\cms) became the mainstay of the HONE system ... eventually providing support for all field, marketing and sales people world wide. ... random past hone & apl posts
https://www.garlic.com/~lynn/subtopic.html#hone

in the late 70s, the US HONE datacenters were consolidated into a single datacenter in cal (while HONE was also replicated at numerous datacenters around the world). at one point the US HONE complex was pushing 40,000 defined users. the consolidated US HONE datacenter was also only a couple of miles from another vm-based timesharing service: Tymshare .... random vm-oriented timesharing posts:
https://www.garlic.com/~lynn/submain.html#timeshare

random past posts referencing cps, bps, sammet, rochester, 3rd floor 545tech sq, etc:
https://www.garlic.com/~lynn/2000d.html#37 S/360 development burnout?
https://www.garlic.com/~lynn/2000f.html#66 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2001l.html#24 mainframe question
https://www.garlic.com/~lynn/2001m.html#47 TSS/360
https://www.garlic.com/~lynn/2002.html#48 Microcode?
https://www.garlic.com/~lynn/2002h.html#59 history of CMS
https://www.garlic.com/~lynn/2002j.html#17 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002j.html#19 ITF on IBM 360
https://www.garlic.com/~lynn/2002o.html#76 (old) list of (old) books
https://www.garlic.com/~lynn/2002o.html#78 Newsgroup cliques?
https://www.garlic.com/~lynn/2003c.html#0 Wanted: Weird Programming Language
https://www.garlic.com/~lynn/2003c.html#1 Wanted: Weird Programming Language
https://www.garlic.com/~lynn/2003h.html#34 chad... the unknown story
https://www.garlic.com/~lynn/2003k.html#0 VSPC
https://www.garlic.com/~lynn/2003k.html#55 S/360 IPL from 7 track tape
https://www.garlic.com/~lynn/2004.html#20 BASIC Language History?
https://www.garlic.com/~lynn/2004.html#32 BASIC Language History?
https://www.garlic.com/~lynn/2004b.html#14 The BASIC Variations
https://www.garlic.com/~lynn/2004d.html#42 REXX still going strong after 25 years
https://www.garlic.com/~lynn/2004e.html#37 command line switches [Re: [REALLY OT!] Overuse of symbolic
https://www.garlic.com/~lynn/2004g.html#4 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#47 PL/? History

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

ZeroWindow segment

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ZeroWindow segment
Newsgroups: comp.protocols.tcp-ip
Date: Sun, 17 Oct 2004 11:03:34 -0600
Hassan I Sahba writes:
I just searched rfc793 for "zerowindow", no hits. Time to start reading I guess. Thank you.

i have index of rfc documents at
https://www.garlic.com/~lynn/rfcietff.htm

it includes some organization of RFCs by keyword

... in the above page ... the RFCs listed by section select Term (term->RFC#)

"windows" are some what covered under congestion and flow control (scroll down to appropriate entry). entry for flow control

flow control (see also congestion , traffic engineering)
3726 3520 3496 3477 3476 3474 3473 3468 3210 3209 3182 3181 3175 3159 3097 2997 2996 2961 2872 2816 2814 2753 2752 2751 2750 2749 2747 2746 2745 2490 2382 2380 2379 2210 2209 2208 2207 2206 2205 2098 1859 1372 1080 449 442 210 59

clicking on the RFC number, brings the RFC summary up in the lower frame. clicking on the ".txt=nnnn" entry (in the RFC summary) retrieves the actual RFC.

RFCs specifically mentioning zero and window
3284 3168 3135 2757 2525 2461 2367 1970 1795 1714 1144 1122 1073 1025 1013 983 905 896 892 813 793 761 675

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

RISCs too close to hardware?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RISCs too close to hardware?
Newsgroups: comp.arch
Date: Sun, 17 Oct 2004 15:27:56 -0600
"del cecchi" writes:
I would posit that 360 survived because of its installed base of business critical applications that were not easily replaceable.

if it was $100b invested in 360 application software in the early 70s (w/360 less than 10 years old) ... aka previous ref to Amdahl talk at mit
https://www.garlic.com/~lynn/2004m.html#53 4GHz is the glass ceiling?

one might conjecture it has hit several trillion. ... a good part of it in fundamental business critical applications.

to redo and/or move the feature/function .... would need to show some sort of cost advantage and possibly also demonstrable risk mitigation ... for some stuff, the cost to the business of an outage can be much greater than any measurable migration benefit ... aka independent of the difficulty of replacement ... there are also some risk issues in making any change.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Shipwrecks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Mon, 18 Oct 2004 08:22:00 -0600
jmfbahciv writes:
Sure. Anybody who has built something knows that. However, our culture never shows the boring mundane lifetimes that produced the spectacular wow-doohickey.

another goes something like

in theory, there is no difference between theory and practice, but in practice there is

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Shipwrecks

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Mon, 18 Oct 2004 08:40:33 -0600
Morten Reistad writes:
Remember, the people at the Bell Labs unit that made unix were also outside the mainstream computing world. Their framework was research to support communications; they were owned and funded by AT&T. I recognise the procedures and mindset from my time inside another large, but very gentlemanly and correct phone company (KPN).

They would have had to dismantle PL/1 pretty heavily to use it; and would be exposed to similar problems as with C in the process.

All technology has built-in problems. You just have to make a framework to deal with it. This is where C is a disaster; when taken outside such a framework.


total topic drift ... in the early 70s, the guys at AT&T longlines managed to get a custom modified vm/370 system (along with all source) from the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

it had a bunch of stuff that we had done that hadn't yet been packaged for product release ... and some stuff that never got released (like the page-mapped filesystem ... the superset of stuff that eventually shipped as discontiguous shared segments, etc). some random postings about various vm features:
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock
https://www.garlic.com/~lynn/submain.html#mmap
https://www.garlic.com/~lynn/submain.html#adcon

however, one feature that it didn't have was smp support
https://www.garlic.com/~lynn/subtopic.html#smp

so nearly 10 years later ... the national account manager for at&t longlines tracked me down on the west coast. it turns out that the use of this custom kernel was going strong in longlines ... and the branch office was trying to sell them 3081s (which required smp support). Over the years, in addition to all the custom stuff that came with the original system ... they had added a significant amount of their own custom features. The result was that the migration to a standard product vm system with smp support was going to be an expensive and time-consuming undertaking. the national account manager was looking for people that had been associated with the original system changes and any aid he could get migrating longlines to a traditional system.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

RISCs too close to hardware?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 18 Oct 2004 09:24:13 -0600
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Nope. That was not the main reason, at least early on.

The PC stream was marketed to people who didn't know that it wasn't reasonable to have to power cycle a computer once every hour or so, and the workstation ones to hackers who expected to have to modify the software they used to make it work at all.

Yes, System/360 and the early System/370 was like that, but it was somewhat better by the early 1980s. And so were DEC's non-Unix systems, and we know how they spread.

By the late 1980s, both the PC and RISC systems had improved very considerably, and that is when the mainframes started suffering badly.


there was a big explosion starting maybe as early as '79 in distributed and departmental mainframes ... this is the niche that the 4341 and vax'es were selling into. there were customers buying 4341s in quantities of multiple hundreds at a time (somewhat because the "mainframe" price/performance had dropped below some threshold)... example:
https://www.garlic.com/~lynn/2001m.html#15 departmental servers

by the mid-80s, the high-end pcs and workstations were starting to take over this market .... the 4341 follow-on, the 4381, didn't see anything like the explosion that the 4341 saw (even as a replacement for the 4341 as it went to end-of-life).

the other area was business PC applications ... one of the things that helped the PC with market penetration was mainframe terminal emulation ... a business, for about the same price as a mainframe terminal, could get in a single desk footprint ... single screen, single keyboard ... something that provided both local computing capability as well as mainframe terminal access.

one of the issues was that as the PC business market evolved, you started to see more and more business applications running on the PC .... in part because of the human factors ... things like spreadsheets, etc.

this resulted in a significant amount of tension between the disk product division and the communication product division. the disk product division wanted to introduce a brand new series of products that gave PCs "disk-like" thruput and semantics for access to the glass-house mainframe disk farm. This would have had an enormous impact on the install base of communication division terminal controllers (which all the PC terminal emulation connected to).

in the 80s ... somebody from the disk product division gave a featured presentation at an internal worldwide communication product conference claiming that the communication division was going to be the death of the disk division (of course the presentation wasn't actually titled that or they wouldn't have allowed him on the agenda).

The issue was (with the migration of applications to the PCs) that if the (PC) access to the corporate data in the glass-house wasn't provided with significantly better thruput and semantics (than available with terminal emulation hardware & semantics) ... the data would soon follow ... aka you would start to see a similar big explosion in PC hard disks ... that you started to see in PC computing.

so to some extent SAA was supposed to address this ... not so much providing better access to the data in the glass house disk farms, but enabling the possibility of migrating applications back to the mainframe ... leaving the PC/mainframe interface as a fancy gui ... which could preserve the terminal emulation install base. random posts on SAA, 3-tier architecture, middle layer, etc
https://www.garlic.com/~lynn/subnetwork.html#3tier

US & world-wide vax numbes by year (from a 1988 IDC report):
https://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction

i don't have the comparable 4341 numbers ... but there was some presentation (at share?) claiming that something like 11,000 of the vax shipments should have been 4341s ... because of the better 4341 price/performance.

random topic drift ... the 4341 follow-on, the 4381, was somewhat originally targeted to have been a fort knox machine. fort knox was a program to consolidate the large number of corporate microprocessors onto an 801 risc base (i.e. a large number of 360 & 370 models were actually some sort of processor with microprogramming to emulate the 360/370 architecture). I contributed to a document helping kill fort knox ... at least for 370; not so much that i was against 801s ... but that chip technology had advanced to the point where you could start to get 370 directly implemented in silicon ... and enable elimination of the expensive emulation layer. random 801 & fort knox posts:
https://www.garlic.com/~lynn/subtopic.html#801

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Shipwrecks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Shipwrecks
Newsgroups: alt.folklore.computers
Date: Mon, 18 Oct 2004 08:29:51 -0600
Mike Kingston writes:
In my experience, C has been the nearest thing to OS/360-70-90 macro assembler for writing performance code. Best elapsed time has been through prototyping with APL (I guess PL/I might have served there, but I no longer have access to it) and then writing in C for production code. Sometimes I just used APL if the production run wasn't going to be too great.

but an issue in the os/360 genre (whether PLI or assembler) is that most of the structures &/or semantics involved explicit lengths ... buffers tended to have built-in headers with maximum lengths and strings always had current lengths. the programmer either had programming semantics that implicitly dealt with lengths (via the underlying structures), but with defined conditions to code for when lengths mismatched (or take the system default action) ... or had semantics with explicit lengths that the programmer had to account for.

a programmer tended to have to work significantly harder in those environments to come up with a length violation.

there are a number of theoretical conditions where a programmer could obviously generate length violations ... but in practice the frequency with which such things happened has been possibly two orders of magnitude less than in typical C language environments.
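
a small C illustration of the contrast (the length-prefixed buffer layout and names here are hypothetical, just to show the style of interface being described):

/* os/360-style buffer: built-in maximum length plus current length.
   the copy routine has to decide what to do on a length mismatch --
   return an error, truncate, or take a default action -- instead of
   silently running off the end the way strcpy() into a char array can */
#include <string.h>

struct lbuf {
    unsigned short maxlen;   /* set when the buffer is allocated */
    unsigned short curlen;   /* current contents */
    char data[256];          /* up to maxlen bytes */
};

int lbuf_copy(struct lbuf *dst, const char *src, unsigned short srclen)
{
    if (srclen > dst->maxlen)
        return -1;           /* length violation is an explicit, visible case */
    memcpy(dst->data, src, srclen);
    dst->curlen = srclen;
    return 0;
}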

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

RISCs too close to hardware?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 18 Oct 2004 12:36:58 -0600
haynes@alumni.uark.edu (Jim Haynes) writes:
This reminds me of something older that may be relevant. I once read a document by George Mealy about the travails of OS/360 and some comparisons with what was then the PDP-10 operating system. What struck me was that the PDP-10 system was designed as a remote terminal oriented system from the ground up. Whereas with OS/360 you had what was basically a card driven system and had to graft on more layers of software to get it to deal with operation from terminals. Now I don't pretend to know anything about IBM software, but I got the impression that later on you had to have something called CICS to do what the DEC software already was doing built-in; and even in my last contacts with IBM stuff there seemed to be files that were card images and printer line images. And CICS required its own set of experts as if it were another operating system running on top of the OS.

there were lots of infrastructures that built their own online operation on top of os/360 ... cps, apl\360, cics, etc. they had subsystems that did their own tasking, scheduling, swapping, terminal handling, etc ... recent post on cps
https://www.garlic.com/~lynn/2004m.html#54

while i was an undergraduate, the university got to be one of the original ibm beta-test sites for what was to become the cics product. the university sent some people to ibm class to be trained in cics ... but I was the one that got to shoot cics bugs. cics had been developed at a customer site for a specific environment ... and ibm was taking that and turning it into a generalized product.

the university library had gotten a grant from onr to do an online library. one of the problems was that the library was using bdam operations that hadn't been used in the original cics customer environment.

for some topic drift ... a totally different (bdam) library project from that era ... nlm
https://www.garlic.com/~lynn/2002g.html#3 Why are Mainframe Computers really still in use at all?
https://www.garlic.com/~lynn/2004e.html#53 c.d.theory glossary (repost)
https://www.garlic.com/~lynn/2004l.html#52 Specifying all biz rules in relational data

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

RISCs too close to hardware?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 18 Oct 2004 12:50:11 -0600
ref:
https://www.garlic.com/~lynn/2004m.html#59 RISCs too close to hardware?

this somewhat exemplifies the communication division orientation in the mid-80s
https://www.garlic.com/~lynn/94.html#33b Hight Speed Data Transport (HSDT)

at the time we were doing the high speed data transport (HSDT) project.
https://www.garlic.com/~lynn/subnetwork.html#hsdt

we had a high-speed backbone running at the time of the nsfnet-1 rfp ... but weren't allowed to bid; however, an NSF technical audit concluded that what we had running was at least five years ahead of the bid submissions to build something new. random posts
https://www.garlic.com/~lynn/internet.htm

in this ref:
https://www.garlic.com/~lynn/2001m.html#15

the particular gov. operation would have had about as many 4341 nodes as there were total arpanet nodes at the time.

the internal explosion in the use of 4341s also helped fuel the explosive growth in the size of the internal network ... which was nearly 1000 nodes at the time the arpanet was around 250 nodes (about the time of the big 1/1/83 switch-over to internetworking protocol and gateways). random internal network refs:
https://www.garlic.com/~lynn/subnetwork.html#internalnet

we had another problem involving HSDT with SAA in the late '80s. my wife had written and delivered a response to a gov. RFI ... where she had laid out many of the basics of what became 3-tier architecture, middle layer, middleware, etc. we expanded on that and started giving 3-tier architecture marketing presentations. we also started taking some amount of heat from the SAA crowd at the time ... who could somewhat be characterized as attempting to put the client/server (2-tier) genie back into the bottle (while our pushing of 3-tier architecture was going in the opposite direction) ....
https://www.garlic.com/~lynn/subnetwork.html#3tier

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

RISCs too close to hardware?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 18 Oct 2004 13:02:39 -0600
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
And it was absolute nonsense. Where the VAX scored over the 4341 was in the superiority of VMS, not in its hardware. If you costed in the support effort and did NOT assume that you were starting with people who knew VM/CMS, you got a very different result.

Don't get me wrong - VM/CMS wasn't bad, but VMS was a much better system for a great many purposes. A user unfamiliar with either would typically take 1/3 the time to start using VMS effectively as VM/CMS (or Unix, for that matter). Let's leave MVS and TSO out of this one ....


i think that share eventually produced a report/presentation (as well as some number of requirements for ibm) about the strengths of vms (it may have been phrased w/o directly mentioning vms ... just listing things ibm should do to make its products more competitive in the mid-range).

the big issues were all sorts of skill-level requirements, significant up-front learning, and just the number of person-hours required for the care and feeding of systems.

a customer with 20-50 people caring for and feeding a single big mainframe complex couldn't continue to follow the same paradigm when that complex was cloned a couple hundred or a thousand times.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
