List of Archived Posts

2006 Newsgroup Postings (10/03 - 10/22)

What data is encoded?
Info on Compiler System 1 (Univac, Navy)?
History of IBM Systems
THE on USS?
Why not 2048 or 4096 bit RSA key issuance?
Why not 2048 or 4096 bit RSA key issuance?
Greatest Software Ever Written?
Very slow booting and running and brain-dead OS's?
Google launches search engine for finding source code
Why not 2048 or 4096 bit RSA key issuance?
Why not 2048 or 4096 bit RSA key issuance?
Why not 2048 or 4096 bit RSA key issuance?
Languages that should have made it but didn't
Newisys Horus & AMD Opteron
Ultra simple computing
THE on USS?
memory, 360 lcs, 3090 expanded store, etc
bandwidth of a swallow (was: Real core)
IDC: Virtual machines taking over the world
Very slow booting and running and brain-dead OS's?
real core
Very slow booting and running and brain-dead OS's?
Why these original FORTRAN quirks?
Why magnetic drums was/are worse than disks ?
Curiousity: CPU % for COBOL program
VM SPOOL question
Why these original FORTRAN quirks?
Why these original FORTRAN quirks?
Storage Philosophy Question
Why these original FORTRAN quirks?
Why magnetic drums was/are worse than disks ?
Why magnetic drums was/are worse than disks ?
Why magnetic drums was/are worse than disks ?
Why magnetic drums was/are worse than disks ?
Basic Question
Turbo C 1.5 (1987)
Turbo C 1.5 (1987)
Turbo C 1.5 (1987)
Design life of S/360 components?
Why these original FORTRAN quirks?
Ranking of non-IBM mainframe builders?
Ranking of non-IBM mainframe builders?
Ranking of non-IBM mainframe builders?
Ranking of non-IBM mainframe builders?
Universal constants
Why magnetic drums was/are worse than disks ?
Why these original FORTRAN quirks?
5692 and 6SN7 vs 5963's for computer use
5692 and 6SN7 vs 5963's for computer use
Why these original FORTRAN quirks?
Ranking of non-IBM mainframe builders?
Ranking of non-IBM mainframe builders?
5692 and 6SN7 vs 5963's for computer use
Is the teaching of non-reentrant HLASM coding practices ever defensible?
Why these original FORTRAN quirks?
Is the teaching of non-reentrant HLASM coding practices ever defensible?
Turbo C 1.5 (1987)
Turbo C 1.5 (1987)
IA64 and emulator performance
Why magnetic drums was/are worse than disks ?
IA64 and emulator performance
Is the teaching of non-reentrant HLASM coding practices ever defensible?
Microsoft to design its own CPUs - Next Xbox In Development
Microsoft to design its own CPUs - Next Xbox In Development
Is the teaching of non-reentrant HLASM coding practices ever defensible?
Paranoia..Paranoia..Am I on the right track?.. any help please?
Why these original FORTRAN quirks?; Now : Programming practices

What data is encoded?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What data is encoded?
Newsgroups: sci.crypt
Date: Tue, 03 Oct 2006 18:30:48 -0600
"Dave" <dave42972000@yahoo.com> writes:
Where can I find a description of the RC4 and MD5 algorithms, and just how strong are they?

rfc 1321 I
The MD5 Message-Digest Algorithm, Rivest R., 1992/04/16 (21pp) (.txt=35222) (See Also 1320)


one of the things i've done for my rfc index
https://www.garlic.com/~lynn/rfcietff.htm

is a collection of all RFC summaries that reference md5
https://www.garlic.com/~lynn/rfcmd5.htm

in the rfc summaries, clicking on the ".txt=nnnn" field retrieves the actual rfc.
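
for what it's worth ... md5 is also in the standard library of most modern languages; a trivial (purely illustrative) python example, using the well-known quick-brown-fox string:

    import hashlib

    # 128-bit md5 digest, per rfc 1321
    digest = hashlib.md5(b"The quick brown fox jumps over the lazy dog").hexdigest()
    print(digest)   # 9e107d9d372bb6826bd81d3542a419d6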

Info on Compiler System 1 (Univac, Navy)?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Info on Compiler System 1 (Univac, Navy)?
Newsgroups: alt.folklore.computers
Date: Wed, 04 Oct 2006 14:02:57 -0600
"Sarr J. Blumson" <sarr@punchout.gpcc.itd.umich.edu> writes:
Jean Sammet's book "A History of Programming Languages" might help, but I don't actually have a copy.

for cross thread drift ... recent comment about boston programming center on 3rd flr of 545 tech sq
https://www.garlic.com/~lynn/2006r.html#41 Very slow booting and running and brain-dead OS's?

sammet was at the boston programming center as the vm370 development group was absorbing it. a few people (sammet, rochester, etc) got attached to the science center as the boston programming center was being absorbed ... and the development group then moved out to the old sbc bldg. at burlington mall.

History of IBM Systems

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: History of IBM Systems
Newsgroups: alt.folklore.computers
Date: Thu, 05 Oct 2006 08:55:49 -0600
History of IBM Systems
http://www.osnews.com/story.php?news_id=16072

New to IBM Systems
http://www-128.ibm.com/developerworks/eserver/newto/?ca=dgr-lnxw01IBM-Systems

a recent comp.arch post with a little more detail for some of the pieces
https://www.garlic.com/~lynn/2006r.html#49 Seeking info on HP FOCUS (HP 9000 Series 500) and IBM ROMP CPUs from early 80's

THE on USS?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: THE on USS?
Newsgroups: bit.listserv.ibm-main
Date: Thu, 05 Oct 2006 09:36:05 -0600
Dan Espen <daneNO@MORE.mk.SPAMtelcordia.com> writes:
Yes, I should have been clearer.

I prefer ISPF's exclude; even though xemacs has multiple ways to hide text, the simpler whole-line approach and the first/last stuff just seems better.

The ISPF bounds stuff and shifting is pretty good too. emacs has a bunch of 'rectangle' facilities, but ISPF wins again with a simple powerful interface.

If I'm doing CLISTs and PANELs, ISPF's models/help wins again.

On the other hand, being able to use the same editor for editing, mail, news, file management, downloads, etc. is what puts XEmacs light years ahead.

My opinion, anyway.


for nearly anything that emacs doesn't do ... that ISPF might ... you write some lisp code. nearly 20 years ago, there was some XEDIT function that wasn't in emacs ... so somebody wrote some lisp code to emulate it. a trivial one that i still have laying around is ALL (display just the lines containing the search argument)

display-function for all ...
all is an interactive compiled Lisp function in 'all'. (all REGEXP &optional NLINES)

Show all lines in the current buffer containing a match for REGEXP.

If a match spreads across multiple lines, all those lines are shown.

Each line is displayed with NLINES lines before and after, or -NLINES before if NLINES is negative. NLINES defaults to 'list-matching-lines-default-context-lines'. Interactively it is the prefix arg.

The lines are shown in a buffer named '*All*'. Any changes made in that buffer will be propagated to this buffer.


... snip ...
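
a rough python analog of the same ALL idea (just a sketch of the line filtering, not the editable *All* buffer):

    import re, sys

    def all_lines(regexp, lines):
        # show only the lines containing a match for REGEXP ...
        # analogous to xedit/emacs ALL, minus the live buffer
        pat = re.compile(regexp)
        return [ln for ln in lines if pat.search(ln)]

    if __name__ == "__main__":
        for ln in all_lines(sys.argv[1], sys.stdin.read().splitlines()):
            print(ln)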

Why not 2048 or 4096 bit RSA key issuance?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why not 2048 or 4096 bit RSA key issuance?
Newsgroups: microsoft.public.platformsdk.security,microsoft.public.security.crypto
Date: Thu, 05 Oct 2006 16:02:43 -0600
"Mitch Gallant" <jensigner@community.nospam> writes:
I thought I had heard somewhere that the CA should have a larger keysize than the issued key size under it .. maybe my memory fails me on this. That is why I pointed that out.

in a theory of security proportional to risk ... somebody would choose a key size supposedly sufficient to protect what is at risk, i.e. what bad things can happen if somebody compromised your key ... with an implicit assumption that larger key sizes are harder to compromise (and therefore provide greater security)

the issue with PKI, certification authorities, digital certificates, etc ... is that the compromise of a CA signing key brings down the whole infrastructure ... not necessarily just limited to every key/certificate that a specific PKI/CA key has signed

a key/certificate signing key is preloaded into hundreds of millions of systems. all of these systems are set up so that anything they get that has been signed by one of the preloaded keys ... is assumed to be trusted. if any one of the preloaded keys has been compromised ... an attacker can generate counterfeit keys/certificates for all possible covered operations (potentially a counterfeit for every possible key/certificate that has ever existed could be created ... but instead of including the correct public key ... the attacker could substitute any key of their choosing).

as a result, what is at risk ... isn't even limited to just the keys/certificates that may have already been signed by a specific compromised PKI/CA key ... but the totality of all possible digital certificates that an attacker might generate with a compromised key ... and ALL possible susceptible systems and infrastructures that have been programmed to trust a specific PKI/CA key

... aka it isn't just the issued key "sizes" under it (i.e. all the keys/certificates signed with the specific CA key) .... it is ALL possible keys/certificates that an attacker might possibly generate ... and ALL possible systems and infrastructures that are vulnerable because they are set up to trust some specific PKI/CA key.

misc. past collected posts mentioning threats, vulnerabilities, compromises, exploits or fraud
https://www.garlic.com/~lynn/subintegrity.html#fraud

misc. past collected posts mentioning assurance
https://www.garlic.com/~lynn/subintegrity.html#assurance

for a little drift, recent thread on
https://www.garlic.com/~lynn/aadsm25.htm#37 How the Classical Scholars dropped security from the canon of Computer Science
https://www.garlic.com/~lynn/aadsm25.htm#38 How the Classical Scholars dropped security from the canon of Computer Science

Why not 2048 or 4096 bit RSA key issuance?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why not 2048 or 4096 bit RSA key issuance?
Newsgroups: microsoft.public.platformsdk.security,microsoft.public.security.crypto
Date: Thu, 05 Oct 2006 16:39:25 -0600
"Mitch Gallant" <jensigner@community.nospam> writes:
Not always the case .. e.g. heavily used online banking service:
https://www4.bmo.com/

SSL server cert is 1024 bit issued by a *1024* bit CA "Entrust.net Secure Server Certification Authority" Validity Period 1999 - 2019

So obviously, even in the hopefully high-security, future-looking online banking industry, we can easily find poor key practices.


as per previous post
https://www.garlic.com/~lynn/2006s.html#4 Why not 2048 or 4096 bit RSA key issuance?

what is at risk isn't just limited to the keys/certificates that have been signed by a specific PKI/CA key .... what is at risk are all the systems and infrastructures that have the specific PKI/CA key preloaded and are programmed to trust their preloaded PKI/CA keys.

a compromised PKI/CA key can be used by an attacker to turn around and sign nearly any sort of key/certificate ... and then ALL the victim systems/infrastructures are at risk ... because they are (generally) configured to (equally) trust all their preloaded PKI/CA keys.

an institution choosing to contract from a PKI/CA using a larger signing key size ... makes little difference; as is traditionally the case, security is only as good as the weakest link. since browsers, systems, and other infrastructures tend to treat all preloaded PKI/CA keys equally ... the compromise of any single PKI/CA key puts the whole infrastructure at risk ... not just the keys/certificates that have been signed by the specific compromised PKI/CA signing key.

In theory, any compromised PKI/CA signing key can be used to generate all possible counterfeit digital certificates (with any public key of the attackers choice) ... and the systems/infrastructure don't differentiate between what specific preloaded PKI/CA signing keys have signed ... all preloaded PKI/CA signing keys are treated equally.

a similar, but different kind of security proportional to risk
https://www.garlic.com/~lynn/2001h.html#61

and yet another variation involving yes card compromise, again the issue isn't proportional to the original card ... it is proportional to all the possible point-of-sale terminals that can be attacked with (large numbers of) counterfeit cards
https://www.garlic.com/~lynn/subintegrity.html#yescard

Greatest Software Ever Written?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Greatest Software Ever Written?
Newsgroups: alt.folklore.computers
Date: Thu, 05 Oct 2006 18:06:47 -0600
Charlton Wilbur <cwilbur@mithril.chromatico.net> writes:
I should perhaps add that UMass's building problems are caused both by area-inappropriate building styles (flat roofs) and area-inappropriate building materials (concrete). I think they could have gotten away with either, but not both.

how 'bout repaving mass pike nearly every year (because of frost heaves)?

or the windows popping out of the prudential building ... or even bldg. ?? on the mit campus.

not to be too off topic, but bringing up some recent posts about computers and software
https://www.garlic.com/~lynn/aadsm25.htm#37 How the Classical Scholars dropped security from the canon of Computer Science
https://www.garlic.com/~lynn/aadsm25.htm#38 How the Classical Scholars dropped security from the canon of Computer Science
https://www.garlic.com/~lynn/aadsm25.htm#39 How the Classical Scholars dropped security from the canon of Computer Science

misc. past posts mentioning issues with mass pike:
https://www.garlic.com/~lynn/2002i.html#28 trains was: Al Gore and the Internet
https://www.garlic.com/~lynn/2002j.html#68 Killer Hard Drives - Shrapnel?
https://www.garlic.com/~lynn/2002l.html#67 The problem with installable operating systems
https://www.garlic.com/~lynn/2002l.html#69 The problem with installable operating systems
https://www.garlic.com/~lynn/2003j.html#11 Idiot drivers
https://www.garlic.com/~lynn/2006g.html#49 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#36 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#39 The Pankian Metaphor

Very slow booting and running and brain-dead OS's?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Very slow booting and running and brain-dead OS's?
Newsgroups: alt.folklore.computers
Date: Fri, 06 Oct 2006 08:19:07 -0600
jmfbahciv writes:
Now think about why the OS can't be restarted. I see no problem with "restarting" sectors of it, if the code can be isolated. Saving the state of the machine and its data is the problem. Our implementation started to figure this out.

part of the micro-kernel genre ... the core stuff was the smallest unit possible and everything else was in partitioned address spaces.

the other was replicated kernels on replicated hardware ... basically doing rolling restarts.

similar, but different, was loosely-coupled configurations (a form of replication ... but also used for capacity) with migration of processes/applications across different processor complexes. this was done by some of the cp67/vm370 based commercial timesharing service bureaus
https://www.garlic.com/~lynn/submain.html#timeshare

as they moved to world-wide, 7x24 operation in the early to mid-70s. particular pieces of hardware (including whole complexes) had to be taken out of service for periodic maintenance (but also applicable to kernel maintenance). recent post in this thread
https://www.garlic.com/~lynn/2006r.html#41 Very slow booting and running and brain-dead OS's?

i.e. from the user/application standpoint, using replicated systems (and things like process/workload migration) for masking system downtime/outage can be just as effective as making each individual component instantaneously restartable. this was also part of our ha/cmp product
https://www.garlic.com/~lynn/subtopic.html#hacmp

in the genre of partitioning and moving services out of the kernel ... a post somewhat about the contrast of moving SSL processing into the kernel ... vis-a-vis attempts at moving the tcp/ip protocol stack out of the kernel in coyotos, a partitioned, highly secure, capability-based system
https://www.garlic.com/~lynn/2006p.html#13 What part of z/OS is the OS?

that traces its heritage back thru keykos and gnosis

in the early/mid 80s ... i had undertaken to move a large remaining component out of the vm370 kernel ... the "spool file system". the justification was that vm370 networking was dependent on the spool file system, and it was starting to represent a system thruput bottleneck for our HSDT project
https://www.garlic.com/~lynn/subnetwork.html#hsdt

I needed to improve the thruput by 10-100 times ... which required a significant restructuring. i figured that while i was at it, i would also move the processing out of the kernel. this would also somewhat improve the vm370 reboot time ... which was already pretty fast ... as the previous post (in this thread) also alludes to.

misc. past posts mentioning spool file system rewrite:
https://www.garlic.com/~lynn/2000b.html#43 Migrating pages from a paging device (was Re: removal of paging device)
https://www.garlic.com/~lynn/2003b.html#33 dasd full cylinder transfer (long post warning)
https://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
https://www.garlic.com/~lynn/2003b.html#46 internal network drift (was filesystem structure)
https://www.garlic.com/~lynn/2003g.html#27 SYSPROF and the 190 disk
https://www.garlic.com/~lynn/2003k.html#26 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2003k.html#63 SPXTAPE status from REXX
https://www.garlic.com/~lynn/2004m.html#33 Shipwrecks
https://www.garlic.com/~lynn/2005j.html#54 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005j.html#58 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005n.html#36 Code density and performance?
https://www.garlic.com/~lynn/2005s.html#28 MVCIN instruction
https://www.garlic.com/~lynn/2005s.html#46 Various kinds of System reloads
https://www.garlic.com/~lynn/2005s.html#50 Various kinds of System reloads
https://www.garlic.com/~lynn/2006e.html#36 The Pankian Metaphor
https://www.garlic.com/~lynn/2006k.html#51 other cp/cms history
https://www.garlic.com/~lynn/2006o.html#64 The Fate of VM - was: Re: Baby MVS???
https://www.garlic.com/~lynn/2006p.html#11 What part of z/OS is the OS?
https://www.garlic.com/~lynn/2006q.html#27 dcss and page mapped filesystem

Google launches search engine for finding source code

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Google launches search engine for finding source code
Newsgroups: bit.listserv.ibm-main
Date: Fri, 06 Oct 2006 08:27:21 -0600
jchase@ibm-main.lst (Chase, John) writes:
STM R14,R12 will get you 50 or so.... but STM 14,12,12(13) returns nothing.

see recent thread posting in comp.arch
https://www.garlic.com/~lynn/2006r.html#8 should program call stack grow upward or downwards?

possibly just hasn't percolated thru the index yet.

i archive posts and then do various kinds of cross-index on garlic.com web pages with hrefs. there is an extremely high ratio of hrefs to the aggregate size of all files (one file has something like 35k hrefs). from the hit log ... it would appear that major web crawlers use the pages as a daily regression test.

Why not 2048 or 4096 bit RSA key issuance?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why not 2048 or 4096 bit RSA key issuance?
Newsgroups: microsoft.public.platformsdk.security,microsoft.public.security.crypto
Date: Fri, 06 Oct 2006 10:15:31 -0600
lelteto <lelteto@discussions.microsoft.com> writes:
It is true that "compromise" of a CA signing key has greater consequences - and that's why most (sensible) CAs use 2048-bit keys. And hopefully they keep that key in hardware-only so it cannot be exported in the clear.

Most other keys' compromise, however, happens not because the private key is recovered (calculated) from the public key but because of poor security of the private key (eg. stored on-disk, computer compromised - in that case even password protecting the key will not help). In the latter case key size is indifferent...


re:
https://www.garlic.com/~lynn/2006s.html#4 Why not 2048 or 4096 bit RSA key issuance?
https://www.garlic.com/~lynn/2006s.html#5 Why not 2048 or 4096 bit RSA key issuance?

yes ... frequently there are much easier paths to compromise than brute force. however, any PKI/CA key (preloaded into millions of different browsers, systems, and infrastructures) being compromised ... regardless of the kind of compromise ... still puts the whole infrastructure at risk ... i.e. the infrastructure is vulnerable at its weakest link ... whether that weakest link involves key size or any of a myriad of other security processes.

Why not 2048 or 4096 bit RSA key issuance?

From: lynn@garlic.com
Subject: Re: Why not 2048 or 4096 bit RSA key issuance?
Date: Sat, 07 Oct 2006 17:04:24 -0700
Newsgroups: microsoft.public.platformsdk.security,microsoft.public.security.crypto
re:
https://www.garlic.com/~lynn/2006s.html#4 Why not 2048 or 4096 bit RSA key issuance?
https://www.garlic.com/~lynn/2006s.html#5 Why not 2048 or 4096 bit RSA key issuance?
https://www.garlic.com/~lynn/2006s.html#9 Why not 2048 or 4096 bit RSA key issuance?

so if we talk about system availability/integrity ... a specific system may have two nines (0.99) availability. if you use two such systems in a replicated fashion ... then non-availability is the probability that all systems fail at the same time ... the product of their individual non-availabilities ... i.e. .01*.01=.0001 ... which, inverted to give availability, is four nines (0.9999). a little drift ... having done a scalable high availability product in the past
https://www.garlic.com/~lynn/subtopic.html#hacmp
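
the same arithmetic in a couple of lines of python (illustrative numbers only):

    # two replicated systems, each with two nines (0.99) availability
    avail = 0.99
    unavail = (1 - avail) ** 2    # both must be down at the same time
    print(1 - unavail)            # 0.9999 ... four nines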

a threat model for the PKI/CA case ... is you have an infrastructure with hundreds of millions of systems, each with their own local copy of say around forty (or more) pre-loaded PKI/CA keys .... then the whole infrastructure can be compromised if any one of those PKI/CA keys is compromised. The issue is that most PKI/CA implementations don't differentiate between their preloaded PKI/CA keys ... they are effectively all treated as equal.

so it can make little difference that a single one of the certification authorities has a 4096-bit key and physically protects it with many levels of hardware and armed guards ... if none of the other thirty-nine do. This is the well known security line about infrastructures only being as strong as the weakest link.

the preloaded forty PKI/CA keys aren't analogous to the redundant/availability scenario where it would require compromising all forty PKI/CA keys before the infrastructure is compromised. in the typical PKI/CA implementation scenario ... it only requires compromise of just one of the preloaded PKI/CA keys to result in a compromise of the whole infrastructure.

the threat model isn't an attack against the certification authorities ... the threat model is to use a compromise of any one of the preloaded, (equally) trusted PKI/CA keys to attack all of the hundreds of millions of systems that are part of the PKI/CA infrastructure sharing the same set of PKI/CA keys. Say business "123" had a digital certificate signed/issued by an extremely high integrity certification authority "ZED". An attacker can compromise the private key of any certification authority (possibly one with extremely low integrity). The attacker then can generate their own counterfeit digital certificate for business "123" and impersonate that business. The issue is that in the PKI/CA operations, the hundreds of millions of systems with preloaded trusted PKI/CA public keys don't differentiate between those public keys and/or who has issued a digital certificate.

the first approximation to the threat level to the overall infrastructure is to take the ease of compromising the weakest link (PKI/CA key) by any means possible (brute force attack on the key, physical attack against the key, insider subterfuge, ....)

the actual probability of compromise to the overall infrastructure is the sum of the probabilities of compromising the individual PKI/CA keys. to the first approximation, doubling the number of preloaded PKI/CA keys in all of the hundreds of millions of system components ... doubles the probability of an overall infrastructure compromise.

This is not the redundant/available scenario where all components have to be compromised/fail before there is a system loss ... this is the scenario where the compromise of any one component can compromise the overall infrastructure ... and as such, if there were a doubling of the number of such components ... there is a corresponding doubling of the possibility that an overall infrastructure compromise could occur. (this is independent of the variations/differences in the failure/compromise probabilities of different PKI/CA operations; aka the weakest link scenario)
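
the two failure models side by side in a short python sketch (the per-key compromise probabilities are hypothetical, purely for illustration):

    # redundant/availability model: ALL components must fail
    # weakest-link model: compromise of ANY one preloaded key is enough
    p = [0.01] * 40            # forty preloaded keys, each 1% compromise risk

    all_must_fail = 1.0
    any_is_enough = 1.0
    for x in p:
        all_must_fail *= x           # 0.01**40 ... vanishingly small
        any_is_enough *= (1 - x)
    any_is_enough = 1 - any_is_enough

    print(all_must_fail)             # ~1e-80
    print(any_is_enough)             # ~0.33 ... roughly the sum of the p's
    # while the individual p's are small, doubling the number of
    # preloaded keys roughly doubles the overall compromise probability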

there have been discussions in the past about what might be the situation for PKI/CA keys that belong to operations that have gone out of business ... but the related PKI/CA public keys are still carried in hundreds of millions of software components. Is it possible that the PKI/CA private key physical protection might be relaxed if the business no longer exists (and might it be easier to take down the whole infrastructure by attacking thru such a key ... than trying to go after a key that is still actively protected)?

Why not 2048 or 4096 bit RSA key issuance?

From: lynn@garlic.com
Subject: Re: Why not 2048 or 4096 bit RSA key issuance?
Date: Sat, 07 Oct 2006 17:25:17 -0700
Newsgroups: microsoft.public.platformsdk.security,microsoft.public.security.crypto
re:
https://www.garlic.com/~lynn/2006s.html#4 Why not 2048 or 4096 bit RSA key issuance?
https://www.garlic.com/~lynn/2006s.html#5 Why not 2048 or 4096 bit RSA key issuance?
https://www.garlic.com/~lynn/2006s.html#9 Why not 2048 or 4096 bit RSA key issuance?
https://www.garlic.com/~lynn/2006s.html#10 Why not 2048 or 4096 bit RSA key issuance?

as part of doing the aads chip strawman in the 98 timeframe ...
https://www.garlic.com/~lynn/x959.html#aadsstraw

we had a requirement to be able to do a digital signature within transit gate timing (100-200 milliseconds) and within the power profile of an iso14443 proximity chip.

in that time-frame RSA was taking significantly longer. one of the attempts to get the RSA time down was to add an 1100-bit multiplier to such chips ... however that significantly drives up the power requirements ... making proximity operation impractical.

so the only thing that turned out to be practical for the AADS chip strawman was ECC.

as to the other, we were called in to consult with this small client/server startup that wanted to do payments on their server ... they had this technology called SSL and the payment stuff has since come to be called electronic commerce
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

a fundamental part of SSL was that it would provide some assurance that the website that the user thought they were talking to, was in fact the website they were talking to. the browser validates the server's ssl domain name certificate and then checks that the domain name from the URL that the user typed in is the same as the domain name from the digital certificate.
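
a minimal sketch (python's ssl module) of the same two-part check the browser performs ... validating the certificate against the preloaded trusted keys and matching the typed-in domain name against the certificate:

    import socket, ssl

    def check_server(hostname, port=443):
        # the default context verifies the certificate chain against
        # the preloaded CA trust store AND checks hostname against the
        # certificate's subject/subjectAltName
        ctx = ssl.create_default_context()
        with socket.create_connection((hostname, port)) as sock:
            # raises ssl.SSLCertVerificationError on any mismatch
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                return tls.getpeercert()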

the problem was that most merchant sites found out that SSL cut their thruput by 80-90 percent ... so instead of using SSL for the whole shopping experience (starting with the URL typed in by the user) ... it was reduced to only being used for the checkout/payment part ... where the user clicked on a (payment/checkout) button provided by the merchant site.

the problem now is that both the URL and the digital certificate are being provided by the webserver .... and it would take a really dumb attacker to use a URL that didn't match the domain name in the certificate they provided. So for the most part SSL is no longer being used to validate that the website that the user thinks they are talking to is, in fact, the website they are talking to; instead SSL is being used to validate that whoever the website claims to be is, in fact, who it claims to be.

some of the (banking) email phishing/scams have taken advantage of the early ecommerce example and include a URL in the body of the email that can be clicked on. they then have a website that may spoof some better known website (possibly using an MITM attack), for which they have a valid digital certificate.

the phishing/spam countermeasure has been: if the body of the email actually lists the URL as part of the "click" field ... and that URL doesn't actually match the URL given to the browser, raise some sort of warning. however, if the click field just contains some other sort of text ... there is nothing to compare against.

collection of past postings mentioning ssl domain name digital certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcert

Languages that should have made it but didn't

From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Date: 8 Oct 2006 06:23:14 -0700
Subject: Re: Languages that should have made it but didn't
Tim Bradshaw wrote:
Lisp systems (for instance) have all these issues as well (though I suspect recent Java GCs are more sophisticated than those of many current Lisp systems).

as does APL. early apl\360 swapped relatively small workspaces, 16k-32k bytes ... so the issue of using all of the (available workspace) memory and then garbage collecting was relatively trivial.

in the early 70s, porting apl\360 to cms for the cms\apl product ... eliminated lots of stuff like the monitor that did tasking and swapping (since that was handled by the underlying cp67). however, the garbage collection had to be extensively reworked for the relatively large (several hundred kbytes to a few mbytes) paged virtual memory. the original apl\360 code exhausting a workspace under cms\apl resulted in horrible paging characteristics. the garbage collection was one of the things that went thru a number of iterations attempting to adapt it to a paged virtual memory environment
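
a back-of-envelope (python, illustrative sizes) for why the unmodified storage management hurt:

    page = 4096
    for ws in (32 * 1024, 4 * 1024 * 1024):   # apl\360 swap size vs cms\apl virtual
        print(f"{ws // 1024:>5}K workspace: full GC sweep touches {ws // page} pages")
    # a 16k-32k swapped workspace is a handful of pages; a multi-mbyte
    # virtual workspace means a compacting sweep touches ~1000 pages
    # (potentially ~1000 page faults) every time storage is exhausted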

lots of past posts about apl and/or HONE (large online internal timesharing service providing support for world-wide field, sales, and marketing ... mostly based on cms\apl applications ... later apl\cms, etc. as apl evolved)
https://www.garlic.com/~lynn/subtopic.html#hone

misc. past posts mentioning garbage collection
https://www.garlic.com/~lynn/93.html#5 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
https://www.garlic.com/~lynn/97.html#4 Mythical beasts (was IBM... mainframe)
https://www.garlic.com/~lynn/99.html#20 APL/360
https://www.garlic.com/~lynn/99.html#38 1968 release of APL\360 wanted
https://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001c.html#31 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001c.html#33 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001d.html#18 Drawing entities
https://www.garlic.com/~lynn/2001f.html#59 JFSes: are they really needed?
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001n.html#0 TSS/360
https://www.garlic.com/~lynn/2001n.html#4 Contiguous file system
https://www.garlic.com/~lynn/2002c.html#29 Page size (was: VAX, M68K complex instructions)
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#50 IBM going after Strobe?
https://www.garlic.com/~lynn/2002j.html#5 HONE, xxx#, misc
https://www.garlic.com/~lynn/2002l.html#36 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002m.html#4 Handling variable page sizes?
https://www.garlic.com/~lynn/2002q.html#47 myths about Multics
https://www.garlic.com/~lynn/2003f.html#15 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#21 "Super-Cheap" Supercomputing
https://www.garlic.com/~lynn/2003g.html#5 Any DEC 340 Display System Doco ?
https://www.garlic.com/~lynn/2003j.html#32 Language semantics wrt exploits
https://www.garlic.com/~lynn/2003n.html#8 The IBM 5100 and John Titor
https://www.garlic.com/~lynn/2004.html#14 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004c.html#7 IBM operating systems
https://www.garlic.com/~lynn/2004c.html#21 PSW Sampling
https://www.garlic.com/~lynn/2004q.html#5 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004q.html#31 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005f.html#63 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005j.html#62 More on garbage collection
https://www.garlic.com/~lynn/2005k.html#17 More on garbage collection
https://www.garlic.com/~lynn/2005n.html#18 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#22 Code density and performance?
https://www.garlic.com/~lynn/2005o.html#5 Code density and performance?
https://www.garlic.com/~lynn/2005o.html#34 Not enough parallelism in programming
https://www.garlic.com/~lynn/2006b.html#23 Seeking Info on XDS Sigma 7 APL
https://www.garlic.com/~lynn/2006e.html#20 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006j.html#3 virtual memory
https://www.garlic.com/~lynn/2006j.html#24 virtual memory
https://www.garlic.com/~lynn/2006o.html#13 The SEL 840 computer
https://www.garlic.com/~lynn/2006o.html#23 Strobe equivalents

Newisys Horus & AMD Opteron

From: lynn@garlic.com
Newsgroups: comp.arch
Date: 8 Oct 2006 08:06:26 -0700
Subject: Re: Newisys Horus & AMD Opteron
Zak wrote:
If only driving one would require a truck-class driver's license, and presumably the same speed limit as for trucks. Or a more radical idea: replace speed limits with energy limits (or would that be impulse limits...)?

some number of SUVs meet the truck weight classification and are eligible for a significant tax break as commercial vehicles. there was an effort in cal. to get this strictly enforced ... among other things, such commercial vehicles are prohibited on most residential streets (i.e. if you were going to buy such a vehicle ... you could take the tax break ... but you also had to leave it on the border of your residential area and walk the rest of the way).

Ultra simple computing

From: lynn@garlic.com
Newsgroups: comp.arch
Date: 8 Oct 2006 08:17:50 -0700
Subject: Re: Ultra simple computing
russell kym horsell wrote:
Reputation of a source is a poor 2nd to looking for inconsistencies in the proffered argument. E.g. in this case replacing a "complicated processor" with a "mess of simple chips" doesn't seem to get away from complexity at all, but does manage to move off into the stratosphere in terms of hand-waving naive vagueness.

Unreliability can not be avoided. Complexity can not be avoided. These things can only be mitigated. To solve certain types of problems requires certain types of resources. The best you can hope to do (and best itself is a vague concept, as it turns out) is avoid *unnecessary* unreliability or complexity.


partitioning has been a long held method of managing complexity. however, the issue can arise where the partitioning techniques are orthogonal to the problem at hand. the issue is whether the partitioning paradigm somehow relates to the structure/nature of the problem.

when we were doing ha/cmp product ... the partitioning for no single point of failure attempted to match the partitioning against the components at hand and the problems to be addressed. there had to be a fairly detailed analysis of system operation and failure modes
https://www.garlic.com/~lynn/subtopic.html#hacmp

THE on USS?

From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: 9 Oct 2006 06:00:24 -0700
Subject: Re: THE on USS?
Efinnel...@ibm-main.lst wrote:
CMS under MVS has been parked on the shelf next to CLIK(clist compiler) since mid-eighties "for marketing considerations" whatever that means.

spring '82 workshop that included talk on cms under mvs.
https://www.garlic.com/~lynn/96.html#4a

part of the issue was that cms had been personal computing on mainframes since the 60s (on cp67, when it was still called the cambridge monitor system ... before "CMS" was renamed to conversational monitor system); this included reasonable interactive response. this was a period when trivial interactive response had an expectation of a quarter second or less with cms .... while anything equivalent under mvs on similar hardware was difficult or impossible to get under one second response.

part of this was MVS use of multi-track search for vtocs and pds directories. for a period, at the sjr datacenter ... where the original relational/sql was implemented (under vm370)
https://www.garlic.com/~lynn/submain.html#systemr

there was vm/cms on a 370/158 and mvs on a 370/168, and all disk controllers had connections to channels on both processors. there were strict operational guidelines that disks for the mvs system weren't allowed to be mounted on "vm" drives (aka disk controllers nominally exclusively reserved for vm/cms use)

there were a couple of incidents when the operators accidentally violated the guideline and mounted an "mvs" 3330 on a vm/cms string. within 5-10 minutes the datacenter was getting calls from irate cms users about severe degradation in cms response.

the issue was that mvs's normal use of disk multi-track search has significant adverse effects on interactive response. users in an MVS environment never experience the with-and-without comparison ... so they just become accustomed to dismal interactive response. the functional features are only a part of the cms interactive experience vis-a-vis MVS ... given the other downsides of interactive operation under MVS, offering CMS capability in an MVS environment is hard to justify.
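
the arithmetic behind the degradation (python; assumes nominal 3330 numbers, 3600rpm and 19 tracks/cylinder):

    rpm, tracks_per_cyl = 3600, 19
    rev = 60.0 / rpm            # ~16.7ms per revolution
    print(f"full-cylinder multi-track search: ~{tracks_per_cyl * rev * 1000:.0f} ms")
    # ~317ms during which the device, the controller AND the channel
    # are all busy ... locking out every other disk sharing them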

misc. other posts about multi-track search
https://www.garlic.com/~lynn/submain.html#dasd

misc. past posts mentioning ispf
https://www.garlic.com/~lynn/2000d.html#17 Where's all the VMers?
https://www.garlic.com/~lynn/2001m.html#33 XEDIT on MVS
https://www.garlic.com/~lynn/2002m.html#52 Microsoft's innovations [was:the rtf format]
https://www.garlic.com/~lynn/2003k.html#0 VSPC
https://www.garlic.com/~lynn/2003o.html#42 misc. dmksnt
https://www.garlic.com/~lynn/2004c.html#26 Moribund TSO/E
https://www.garlic.com/~lynn/2004g.html#43 Sequence Numbbers in Location 73-80
https://www.garlic.com/~lynn/2005j.html#7 TSO replacement?
https://www.garlic.com/~lynn/2005j.html#8 TSO replacement?
https://www.garlic.com/~lynn/2005q.html#15 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005t.html#40 FULIST
https://www.garlic.com/~lynn/2006k.html#50 TSO and more was: PDP-1
https://www.garlic.com/~lynn/2006o.html#21 Source maintenance was Re: SEQUENCE NUMBERS
https://www.garlic.com/~lynn/2006p.html#13 What part of z/OS is the OS?

memory, 360 lcs, 3090 expanded store, etc

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: memory, 360 lcs, 3090 expanded store, etc
Date: Wed, 11 Oct 2006 12:34:54 -0600
Newsgroup: bit.listserv.vmesa-l
"Schuh, Richard" wrote:
Yeah, but 3090 memory was not ferrite core, was it? IIRC, it was much cheaper and more reliable. I wasn't privy to the bean-counting specifics, but the rumored cost of the LCS storage on our 360 class machines was in the neighborhood of $2.5-3M per 2MB unit. And they were real core - you could look through the glass panels and see the individual planes of wires and doughnuts. The stuff was not reliable, so we had 3 boxes in order to always have 1 available for the production ACP system. There was usually 1 in use, 1 being repaired, and 1 just out of repair that was on standby. The care and feeding of those animals was a career.

small spill-over from bit.listserv.ibm-main mentioning expanded store, 3090 memory, 360 memory, 360 LCS and a couple other memory related topics
https://www.garlic.com/~lynn/2006r.html#34 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#35 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#36 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#37 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#39 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#40 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#42 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#43 REAL memory column in SDSF

bandwidth of a swallow (was: Real core)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: bandwidth of a swallow (was: Real core)
Date: Thu, 12 Oct 2006 06:49:06 -0600
Newsgroup: bit.listserv.vmesa-l, alt.folklore.computers
Paul B. Nieman wrote:
In the early 1990's we consolidated a data center from Sydney into Philadelphia. We used SYBACK to do a full dump of specific (most) minidisks to tape and shipped the tapes. We then performed daily incrementals to disk, and sent the incrementals via RSCS, via a 9600 baud line at most. I think we had a 9600 baud line that was shared for RSCS and VTAM traffic, but the telecom part wasn't mine to worry over. Each minidisk intended to move was a separate file and sent via SENDFILE. There were service machines written to send and receive them. I think the first incrementals arrived before the tapes. In any case, we kept track of different day's incrementals for a whole week and applied them as they finished arriving. The line was kept very busy and watched closely, but it was easy to restart if it dropped.

Our actual cutover the following weekend went fairly quickly and met whatever target we had, which I certainly think wasn't enough to allow for backing up, shipping, and applying the tapes.


earlier post in start of this thread:
https://www.garlic.com/~lynn/2006s.html#16 memory, 360 lcs, 3090 expanded store, etc

in the later part of the mid-70s, one of the vm370 based commercial time-sharing services had a datacenter on the east coast and put in a datacenter on the west coast, connected via a 56kbit link.

they had enhanced vm370 to support process migration between loosely-coupled machines in the same datacenter cluster ... for one thing, as they moved to 7x24 worldwide service ... there was no window left for doing preventive maintenance. process migration allowed them to move everything off a complex (that needed to be taken down for maintenance). the claim was that they could even do process migration over the 56kbit link ... modulo most of the file stuff having been replicated (so that there wasn't a lot needing movement in real time).

misc. past posts mentioning vm time-sharing service
https://www.garlic.com/~lynn/submain.html#timeshare

they had also implemented page mapped filesystem capability with lots of advanced bells and whistles ... similar to the cms page mapped filesystem stuff that i had originally done in the early 70s for cp67.
https://www.garlic.com/~lynn/submain.html#mmap

which also included a superset of the memory segment stuff ... a small subset was later released as DCSS
https://www.garlic.com/~lynn/submain.html#adcon

for other drift ... as mentioned before ... the internal network was larger than the arpanet/internet from just about the beginning until sometime around mid-85. misc. posts mentioning the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

one issue was what to do about JES2 nodes on the internal network; relatively trivial changes in JES2 headers between releases would precipitate JES2 (& MVS) system crashes. for that reason (and quite a few others), JES2 nodes were pretty well limited to a few boundary nodes. A library of vnet/rscs line drivers grew up for JES2 that supported a canonical JES2 header format ... and the nearest VNET/RSCS node would have the specific line-driver started that would make sure that all JES2 headers sent to the JES2 system ... met the requirements of that specific JES2 version/release. Sporadically, there were still some (infamous) cases where JES2 systems on one side of the world would precipitate crashes of JES2 systems on the other side of the world (one particularly well known case was JES2 systems in san jose causing JES2/MVS systems in hursley to crash). misc. past posts mentioning hasp &/or jes2
https://www.garlic.com/~lynn/submain.html#hasp

Another scenario: around 1980 there was some work to do load-balancing offload between STL/bld90 and Hursley (since they were offset by a full shift). the test was between two jes2 systems (carefully checked to be at compatible release/version) ... over a double-hop 56kbit satellite link (i.e. up from the west coast to a satellite over the us, down to the east coast, up to a satellite over the atlantic, down to the UK). JES2 couldn't make the connection ... but all error indicators were clean. So finally it was suggested to try the link between two vnet systems. The link came up and ran with no problem.

The person overseeing the operations was extremely sna/vtam indoctrinated. So the first reaction was that whatever caused the problem had gone away. So it was time to switch it back between JES2 ... it didn't work. Several more switches were made ... it always ran between VNET, never between JES2. The person overseeing the operation finally declared that the link actually had severe error problems, but the primitive VNET drivers weren't seeing them ... and only the advanced VTAM error analysis was recognizing that there were lots of errors.

it turned out the problem was the double-hop satellite roundtrip propagation delay (four hops of ~44k miles each: 22k up, 22k down) ... which VNET tolerated and vtam/jes2 didn't (not only was the majority of the internal network not jes2 ... it was also not sna/vtam)
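
the propagation arithmetic (python; assumes geosynchronous altitude of roughly 22,300 miles and propagation at the speed of light):

    leg = 22300                   # miles, each up or down leg
    c = 186000                    # miles/second
    one_way = 4 * leg / c         # double hop one way: up, down, up, down
    print(f"one-way ~{one_way * 1000:.0f} ms, round trip ~{2 * one_way * 1000:.0f} ms")
    # ~480ms one way, ~960ms round trip ... the sort of delay the vnet
    # drivers tolerated and the vtam/jes2 link handshaking did not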

IDC: Virtual machines taking over the world

From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Date: 12 Oct 2006 08:33:07 -0700
Subject: IDC: Virtual machines taking over the world
Virtual machine software market grew 67% in 2005: IDC
http://www.crn-india.com/breakingnews/stories/67448.html

IDC: Virtual machines taking over the world
http://arstechnica.com/news.ars/post/20061011-7966.html

the new, (40yr) old thing from the cambridge science center, 4th flr, 545 technology sq.
https://www.garlic.com/~lynn/subtopic.html#545tech

some more detailed history from Melinda Varian's paper, VM and the VM Community: Past, Present, and Future, at
http://www.leeandmelindavarian.com/Melinda#VMHist

Very slow booting and running and brain-dead OS's?

From: lynn@garlic.com
Subject: Re: Very slow booting and running and brain-dead OS's?
Date: Thu, 12 Oct 2006 16:18:40 -0700
Newsgroups: alt.folklore.computers
Andrew Swallow wrote:
Multi-core CPUs will bring this sort of knowledge back. We are talking multi-user systems with weak memory protection.

most of the recent chip conferences ... and several more recent press releases ... are about how application software isn't capable of taking advantage of the multi-core chips .... and that significant changes in the (especially application) programming paradigm are required in order to adequately exploit the levels of parallelization ... chips with 2, 4, or 8 cores now, with 80 cores being projected in the not too distant future

a few posts this year mentioning multi-core and/or parallelization
https://www.garlic.com/~lynn/2006.html#14 Would multi-core replace SMPs?
https://www.garlic.com/~lynn/2006.html#16 Would multi-core replace SMPs?
https://www.garlic.com/~lynn/2006b.html#22 Would multi-core replace SMPs?
https://www.garlic.com/~lynn/2006b.html#40 another blast from the past ... VAMPS
https://www.garlic.com/~lynn/2006n.html#44 Any resources on VLIW?
https://www.garlic.com/~lynn/2006p.html#0 DASD Response Time (on antique 3390?)

real core

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: real core
Date: Fri, 13 Oct 2006 06:07:33 -0600
Newsgroup: bit.listserv.vmesa-l
Tom Duerbush wrote:
So I guess the question I'm wondering...

How many others have shipped dumps, online, back before high speed Internet connections?


re:
https://www.garlic.com/~lynn/2006s.html#16 memory, 360 lcs, 3090 expanded store, etc
https://www.garlic.com/~lynn/2006s.html#17 bandwidth of a swallow (was: real core)

we had done HSDT (high speed data transport) project
https://www.garlic.com/~lynn/subnetwork.html#hsdt

in the 80s ... with high-speed backbone connected to the internal network.
https://www.garlic.com/~lynn/subnetwork.html#internalnet

in the late 80s, the backbone was used to ship chip designs to high-speed hardware logic simulators/checkers located in the san jose area. this was claimed to have contributed to helping bring in the rios/power chipset a year early. recent post discussing lsm & eve logic simulators (and using the HSDT backbone to transport chip designs):
https://www.garlic.com/~lynn/2006r.html#11 Was FORTRAN buggy?

we were also interested in participating in the nsfnet-1 backbone (which could be considered the operational precursor to the modern internet). we weren't allowed to bid ... but did get a technical review; one of the conclusions was that what we had running and operational was at least five years ahead of all the nsfnet-1 bids (RFP responses) to build something new. slightly related
https://www.garlic.com/~lynn/internet.htm#0

and
https://www.garlic.com/~lynn/2002k.html#12 nsfnet program announcement
https://www.garlic.com/~lynn/2000e.html#10 nsfnet award announcement

for other drift, in the early days of rex (rexx), i wanted to demonstrate that rex wasn't just another batch command processor (exec, exec2) but could be used to implement a very complex application. I chose the vm problem/dump analyzer ... which was a fairly large application written in assembler. i wanted to demonstrate that in 3 months working half-time, i could implement in rex something that had ten times the function and ten times the performance of the existing assembler implementation. the result was dumprx
https://www.garlic.com/~lynn/submain.html#dumprx

which could be used to analyze a dump interactively ... even over the internal network w/pvm (terminal emulation) ... w/o having to actually ship the dump. part of dumprx was a library of automated analysis scripts ... the results could be saved and restored; aka you could run the automated analysis scripts ... which batched the most common sequences of manual analysis processes.

the library of batched analysis routines effectively automated the most common (manual) analysis procedures (examining storage for a broad range of failure signatures).

Very slow booting and running and brain-dead OS's?

From: lynn@garlic.com
Subject: Re: Very slow booting and running and brain-dead OS's?
Date: Fri, 13 Oct 2006 13:39:10 -0700
Newsgroups: alt.folklore.computers
Tim Bradshaw wrote:
It seems to me that there are two obvious commercially-important uses for these systems.

Firstly, there is a significant market for reasonably large SMP machines already, and that market should love multi-core processors as it reduces their costs (or equivalently makes their machines larger).

Secondly virtualisation is very fashionable right now (I know it's hardly a new idea), and I should think it's an application which can make use of as many cores as you want. I suspect most large datacentres are absolutely stuffed with awful old x86 boxes using up 8u of terribly expensive rack space and running some antique version of Windows / Linux / etc but which can't be turned off because they provide some critical service which no-one knows how to reimplement. Virtualising n of those onto a single n+a-few-core multicore / multichip system ought to be a big win (albeit an admission of defeat in software engineering terms).

I guess the issue with both of these is whether an n-core chip can provide enough bandwidth to off-chip resources to keep all the cores busy. I presume they can, or will be able to.


refs:
https://www.garlic.com/~lynn/2006q.html#32 Very slow booting and running and brain-dead OS's?
https://www.garlic.com/~lynn/2006r.html#41 Very slow booting and running and brain-dead OS's?
https://www.garlic.com/~lynn/2006s.html#7 Very slow booting and running and brain-dead OS's?
https://www.garlic.com/~lynn/2006s.html#19 Very slow booting and running and brain-dead OS's?

cp67 virtual machines are nearly 40 years old. the initial implementation was cp40, on a 360/40 modified with custom virtual memory hardware ... this morphed into cp67 in 1967 when the 360/67 became available .... done at the cambridge science center, 545 tech sq
https://www.garlic.com/~lynn/subtopic.html#545tech

when lincoln labs discontinued their 360/67 multiprocessor, cambridge upgraded their single processor 360/67 by calling up the moving company and telling them that instead of taking lincoln's machine back to the factory ... it was to be moved to the science center.

charlie then did a lot of work on fine-grain locking for cp67 multiprocessing support. out of this work, charlie invented the compare&swap instruction (the name chosen because CAS are charlie's initials). this was shipped in 370 processors. initially there was a lot of push back, claiming that 370 didn't need another multiprocessor-specific instruction (test&set carried over from 360 should be good enuf). to get the instruction into 370, additional (non-multiprocessor-specific) uses needed to be defined for the instruction. thus were born the descriptions of various uses by multi-threaded application code (whether or not it happened to be running on a single processor or a multiprocessor). misc. past posts mentioning multiprocessors, compare&swap, etc
https://www.garlic.com/~lynn/subtopic.html#smp
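
a toy model (python) of the multi-threaded usage pattern from those compare&swap programming notes ... the lock below just stands in for the hardware's atomicity guarantee:

    import threading

    _atomic = threading.Lock()     # simulates the instruction's atomicity

    def compare_and_swap(cell, expected, new):
        # atomically store new in cell[0] only if it still holds expected
        with _atomic:
            if cell[0] == expected:
                cell[0] = new
                return True
            return False

    def add(cell, n):
        # fetch, compute, then retry until no other thread intervened
        while True:
            old = cell[0]
            if compare_and_swap(cell, old, old + n):
                return

    counter = [0]
    threads = [threading.Thread(target=add, args=(counter, 1)) for _ in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter[0])              # 8 ... no updates lost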

part of the issue is the ever increasing memory latency ... the absolute memory latency hasn't changed significantly ... but with the declining cycle time, the memory latency measured in number of processor instruction cycles has gone way up.
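
the same point in (round, illustrative) numbers:

    latency_ns = 100                 # dram latency ... roughly flat over time
    for cycle_ns in (100, 10, 1):    # ~10MHz, ~100MHz, ~1GHz processors
        print(f"{cycle_ns:>4}ns cycle: a memory stall costs {latency_ns // cycle_ns} cycles")
    # the same absolute latency goes from 1 lost cycle to 100 lost cycles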

caches have been used to compensate for memory latency ... as has wider memory bus ... moving more data per memory bus cycle.

out-of-order instruction execution has been used to try and keep execution busy when there are instructions stalled waiting on the memory bus. multithreading is also a method for trying to keep the processor execution units busy .... in the face of instruction stalls waiting on memory

i got somewhat peripherally involved with an early multi-threaded effort involving the 370/195 (that never actually shipped). the 370/195 was pipelined and optimally peaked around 10mips. however, branches drained the pipeline (no branch prediction, speculative execution, etc). the frequency of branches in most codes limited the 370/195 to around 5mips. the hyperthreading proposal was to create an emulated smp machine with dual i-streams, dual registers, etc. There was only the standard pipeline and the standard execution units ... instructions and registers in the pipeline would have a one-bit tag added, indicating which instruction stream they were associated with. If standard codes and branch frequency resulted in running the execution units at half capacity .... then possibly dual instruction streams could keep the units operating at near peak capacity.

with a large number of cores sharing some common cache .... one programming paradigm change implies lots of parallel multi-threaded operation (the type of stuff that was raised in the early compare&swap instruction justification) .... attempting to maximize the aggregate mip rate per number of bytes transferred over the memory bus.

Why these original FORTRAN quirks?

From: lynn@garlic.com
Subject: Re: Why these original FORTRAN quirks?
Date: Sat, 14 Oct 2006 05:28:18 -0700
Newsgroups: alt.folklore.computers
i don't remember any bugs ... although beginning fortran was my first programming course some 40 yrs ago ... card decks, compiled/run on a 709.

however, the person involved in doing the 370/145 apl/cms microcode assist at the palo alto science center was also responsible for the internal "fortran q" enhancements ... eventually released in the product as fortran hx.

misc. past references:
https://www.garlic.com/~lynn/2001g.html#20 Golden Era of Compilers
https://www.garlic.com/~lynn/2002g.html#1 WATFOR's Silver Anniversary
https://www.garlic.com/~lynn/2002j.html#5 HONE, xxx#, misc
https://www.garlic.com/~lynn/2003b.html#52 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2004m.html#6 a history question

when we were doing ecps microcode assist originally for 148 (follow-on to 145) ... we had modified the kernel to time-stamp a lot of entry/exits ... which resulted in narrowing initial candidates for kernel machine code migration to microcode
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist

he did a psw/instruction-address sampler microcode modification on the 370/145 ... i.e. a microcode routine that would wake up periodically and use the current instruction address to index a storage use frequency counter (i.e. divide the address by 64, multiply by four, add the base table address to index a fullword counter ... and increment the value at that location).
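
the sampling step reduces to something like the following (a minimal C sketch ... table size and names are illustrative):

#include <stdint.h>

#define GRANULE 64                    /* bytes of storage per counter */

static uint32_t use_count[1 << 18];   /* assumed table size, covers 16mbytes */

/* called periodically with the sampled instruction address */
void sample(uint32_t instr_addr)
{
    /* divide by 64 selects the granule; the "multiply by four"
       in the original is just fullword indexing, which the C
       array subscript does implicitly */
    use_count[instr_addr / GRANULE]++;
}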

later he also did work on pli optimization.

Why magnetic drums was/are worse than disks ?

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Subject: Re: Why magnetic drums was/are worse than disks ?
Date: Sat, 14 Oct 2006 06:58:16 -0700
Newsgroups: comp.arch
Stephen Fuld wrote:
Well, one area that disk packs had (at least at first) that drums didn't was interchangability. That is, you could have more disk packs than drives and load the packs as needed, sort of like having more tape reels/cartridges than tape drives. This could give the illusion of more storage on-line than was really available. Of course, as technology progressed, this idea had to give way to increased density.

i think part of the 2303/2301 drum issue was getting all the heads aligned on the surface. the 2301 was sort of a 2303 but read/wrote four heads in parallel (and therefore had four times the data transfer rate). the 2301 was used on lots of 360/67 cp67 systems for virtual memory paging.

various recent posts mentioning fixed head 2301/2303 drums
https://www.garlic.com/~lynn/2006.html#2 Average Seek times are pretty confusing
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
https://www.garlic.com/~lynn/2006.html#41 Is VIO mandatory?
https://www.garlic.com/~lynn/2006c.html#46 Hercules 3.04 announcement
https://www.garlic.com/~lynn/2006i.html#27 Really BIG disk platters?
https://www.garlic.com/~lynn/2006q.html#32 Very slow booting and running and brain-dead OS's?
https://www.garlic.com/~lynn/2006r.html#23 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#30 50th Anniversary of invention of disk drives

in the 70s, the 2301/2303 was replaced with the 2305 fixed-head disk ... (that had three times the capacity of the 2301 but about the same transfer rate).

Curiousity: CPU % for COBOL program

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Subject: Re: Curiousity: CPU % for COBOL program
Date: Sat, 14 Oct 2006 09:14:55 -0700
Newsgroups: bit.listserv.ibm-main
Larry Bertolini wrote:
I'm looking at this from the "I have never considered..." angle. Times have changed, since your formed that opinion.

Consider two factors: 1. The trend to machines with fewer, faster processors. The typical COBOL program is "single-threaded", and can exploit only one CPU. If you have a 6-way, a single COBOL program is unlikely to use more than 17% of the CPU; if you have a 1-way (presumably 6x faster) it may use >95%, under certain conditions.

2. Faster I/O subsystems. If you have a COBOL program that does VSAM processing, it doesn't use much CPU while it's waiting for disk I/O. But if those waits are very short, the program will run faster, and consume more CPU seconds per wall-clock minute.


although the next stage might be what the risc/cisc chips are running into ... going to multiple cores per chip.

a few years ago, i had to look at an extremely large (several hundred thousand statement) cobol application that represented an extremely significant corporate workload (that ran on a large number of mainframe systems)

in the early to mid-70s at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

we commonly used three techniques

• hot-spot sampling
• modeling
• multiple regression analysis

and found that sometimes one technique would identify things that the other two techniques couldn't

an example of hot-spot sampling is mentioned in this recent post about determining what part of kernel machine language should be dropped into microcode (part of a longer thread that also discusses needing programming paradigm changes to support parallel operation)
https://www.garlic.com/~lynn/2006s.html#21 Very slow booting and running and brain-dead OS's

the modeling work evolved into a performance modeling tool (performance predictor) available on HONE (world-wide support system for marketing, sales, and field people)
https://www.garlic.com/~lynn/subtopic.html#hone

that sales people could use to ask what-if questions about customer workload and configuration (i.e. what happens if the workload changes or what happens if more real storage is added, etc). much of this work was also the basis for capacity planning ... for some drift.
https://www.garlic.com/~lynn/submain.html#bench

in the mid-90s the rights to a much evolved descendent of this modeling tool were acquired by somebody in europe ... who ran it thru an APL->C language converter and was using it in mainframe consulting businesses at the high end

the particular large cobol application had been extensively analyzed with hot-spot analysis and this later generation performance modeling tool, with some amount of success finding areas that could be recoded/restructured for improving thruput. however, it was felt that there must be a lot of opportunity still left (the application had evolved over a couple decades and there was a feeling that it was now significantly slower than when it first started).

i suggested that multiple regression analysis be used on internal application activity counters. the result was that it identified some operation that accounted for something like 20 percent of total activity (but invoked executable code in a very convoluted way that wasn't easily identifiable using the other two mechanisms). with that information, it turned out to be fairly straight-forward to restructure that particular operation and gain another 14 percent improvement (running across a fairly large number of maxed-out mainframe processor configurations)
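
a minimal sketch of the idea (hypothetical data and names): regress per-interval CPU seconds against per-interval activity counters to estimate a per-operation cost, here solving the two-predictor normal equations directly:

#include <stdio.h>

int main(void)
{
    /* hypothetical samples: per-interval counts for two internal
       operations, and the CPU seconds consumed that interval */
    double a[] = { 100, 200, 150, 300, 250 };
    double b[] = { 400, 100, 300, 200, 500 };
    double y[] = { 1.8, 2.2, 2.1, 3.4, 3.5 };
    int i, n = 5;

    /* accumulate the normal equations for y ~= c1*a + c2*b */
    double Saa = 0, Sab = 0, Sbb = 0, Say = 0, Sby = 0;
    for (i = 0; i < n; i++) {
        Saa += a[i] * a[i];
        Sab += a[i] * b[i];
        Sbb += b[i] * b[i];
        Say += a[i] * y[i];
        Sby += b[i] * y[i];
    }

    /* solve the 2x2 system by cramer's rule */
    double det = Saa * Sbb - Sab * Sab;
    printf("est. cpu per op A: %f, per op B: %f\n",
           (Say * Sbb - Sab * Sby) / det,
           (Saa * Sby - Sab * Say) / det);
    return 0;
}

an operation with a large estimated per-operation cost times a large count stands out ... even when the code it invokes is scattered and convoluted.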

however, there has been the issue that many of these large serial batch operations are going to have to be reworked for a much higher degree of parallel operation.

misc. past posts mentioning multiple regression analysis technique
https://www.garlic.com/~lynn/2002l.html#62 Itanium2 performance data from SGI
https://www.garlic.com/~lynn/2005d.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005k.html#17 More on garbage collection
https://www.garlic.com/~lynn/2005n.html#18 Code density and performance?
https://www.garlic.com/~lynn/2006f.html#22 A very basic question
https://www.garlic.com/~lynn/2006g.html#4 The Pankian Metaphor
https://www.garlic.com/~lynn/2006o.html#23 Strobe equivalents

VM SPOOL question

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: VM SPOOL question
Date: Sun, 15 Oct 2006 06:44:14 -0600
Newsgroups: bit.listserv.vmesa-l, alt.folklore.computers
originally cp67 had everything in the kernel, booted and fixed in real memory. there were a pre-allocated, fixed number of save areas for kernel linkage (making it slightly easier to identify everything going on from a dump ... since all save areas were in a contiguous area of storage), and total kernel dynamically allocated working storage (all the kernel control blocks, virtual i/o structure, virtual memory, working storage, etc) could increase by making calls to the paging supervisor

as undergraduate, i did a lot of work on dynamic adaptive scheduling and graceful degradation with increasing work load.

increasing work load ... could exhaust the pre-allocated kernel linkage save areas ... so i had to make a change to allow kernel linkage save areas to dynamically increase (decrease) in much the same way the rest of kernel dynamically allocated storage could increase/decrease. this became part of the distributed cp67 kernel

there was a lot of stuff in the cp67 kernel that was relatively low usage ... like various commands, and with increasing work load, there was a lot more pressure on available real storage ... a 768k 360/67, 64 pages per 256k ... might only have 110-120 pages after fixed kernel requirements; being able to "page-out" low-usage kernel components might pick up 15-20 percent real storage for workload. this didn't ship in the standard kernel until vm370.

there was an issue with the free storage manager doing a generalized call to the paging supervisor for additional working storage. the paging supervisor could sometimes select a modified virtual memory page for replacement ... this would require scheduling a page i/o write for the page and waiting for the i/o to complete before making it available for kernel use (during that period the system would sometimes crash because of lockup and exhausted working storage). i created a special page replacement interface for the kernel storage manager ... that would look for a non-changed/non-modified page for replacement (eliminating any delay in extending kernel working storage by scavenging paging memory). this went out in the standard cp67 product.

for a little drift ... also as an undergraduate i completely redid the page replacement algorithm ... implementing global LRU based on page reference bits ... lots of past postings discussing page replacement algorithms
https://www.garlic.com/~lynn/subtopic.html#wsclock

lots of these factors contribute to what appears to be the amount of real memory that is associated with the kernel ... and therefore the amount of spool space that may be required for containing an image "dump" of all the associated kernel real storage.

for other topic drift ... lots of past postings mentioning problem failure/diagnostic
https://www.garlic.com/~lynn/submain.html#dumprx

much, much later ... i had an issue with the aggregate thruput that a single virtual machine could get out of the spool file interface. part of the issue was that spool buffer transfers were synchronous ... during which time the virtual machine didn't process.

recent posts in this n.g.
https://www.garlic.com/~lynn/2006s.html#17 bandwidth of a swallow
https://www.garlic.com/~lynn/2006s.html#20 real core

mentioning the HSDT effort. part of this was vnet/rscs driving multiple HYPERchannel links ... some channel-speed links between machines in the same machine room (couple mbytes/sec) and a number of full-duplex T1 links (about 300kbytes/sec aggregate per link) ... creating a requirement for several mbytes/sec aggregate thruput.

the synchronous nature of the spool interface limited vnet/rscs to something like 5-10 4k spool buffers/sec (depending on other load on the spool system) ... or 20kbytes/sec under most loads to a possible max of 40kbytes/sec. HSDT needed aggregate spool thruput closer to 100 times that.

so my effort was to move most of the spool function into a virtual address space, completely recode it in vs/pascal, and provide significant additional asynchronous behavior and other thruput enhancements (in order to increase aggregate thruput by two orders of magnitude). recent post discussing the changes
https://www.garlic.com/~lynn/2006s.html#7 Very slow booting and running and brain-dead OS's

part of the implementation running spool function in virtual address space was taking advantage of the page map interface that i had done originally in the early 70s for cp67 cms page mapped filesystem
https://www.garlic.com/~lynn/submain.html#mmap

also part of the HSDT backbone was doing rfc1044 support (which shipped in the product) for the mainframe tcp/ip implementation (also originally written in vs/pascal). the base tcp/ip support would consume nearly a full 3090 processor getting aggregate 44kbytes/sec thruput. in some tuning/testing at cray research, between a 4341 clone and a cray ... rfc1044 was getting sustained channel interface thruput (1mbyte/sec) using only a modest amount of the 4341 clone processor.
https://www.garlic.com/~lynn/subnetwork.html#1044

Why these original FORTRAN quirks?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why these original FORTRAN quirks?
Newsgroups: alt.folklore.computers
Date: Mon, 16 Oct 2006 06:19:13 -0600
jmfbahciv writes:
Yes.

I loved when computed GOTOs showed up. The logic flow became so easy. Programming based on two-branch, yes/no, flow charts could get very awkward.


related comments in another thread:
https://www.garlic.com/~lynn/2006p.html#1 Greatest Software Ever Written?
https://www.garlic.com/~lynn/2006p.html#4 Greatest Software Ever Written?

360 assembler could have a condition-setting instruction followed by one or more branches. the 360 condition code is two bits ... so it could have four possible values. branch condition instructions used a four-bit mask ... so one could set up to take the branch on one or more possible condition code values (or make it a "no-op" instruction)

from my q&d conversion of green card ios3270 to html
https://www.garlic.com/~lynn/gcard.html#2 Extended Branch Mnemonics

... assembler instruction branch mnemonics that generate specific branch "mask" conditions.

and condition code settings mapped to conditional branch mask
https://www.garlic.com/~lynn/gcard.html#8 Condition Codes Settings

i.e.


CC    branch mask
--    -----------
00       1000
01       0100
10       0010
11       0001

for some instructions ... it would be possible to program four different code paths based on the instruction condition code setting (as opposed to a simple two-way branch).
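
the mask test itself reduces to a one-liner (a C sketch of the hardware behavior ... the mask's leftmost bit corresponds to condition code 0):

/* branch taken iff the mask bit selected by the 2-bit condition
   code is on (mask 8 <-> cc 0, 4 <-> cc 1, 2 <-> cc 2, 1 <-> cc 3) */
int branch_taken(unsigned mask, unsigned cc)
{
    return (mask >> (3 - cc)) & 1;
}

branch_taken(15, cc) is the unconditional branch; branch_taken(0, cc) is the no-op.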

Why these original FORTRAN quirks?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why these original FORTRAN quirks?
Newsgroups: alt.folklore.computers
Date: Mon, 16 Oct 2006 06:42:08 -0600
jsavard writes:
There *was* another reason for a three-way branch, though, besides having a CAS instruction.

If one assumes the programmer will ensure the expression in the IF statement doesn't overflow, a three way branch covers all the cases with one form of statement. There is no need to define a family of operators - .EQ. .LT. .NE. .GT. .LE. and .GE. - to cover six different comparison tests.


post about four possible code paths
https://www.garlic.com/~lynn/2006s.html#26 Why these original FORTRAN quirks?

for if/then/else ... one wonders if it was those 2-value, true/false logic/philosophy classes, truth tables, etc ...

for some drift ... 3-value logic posts
https://www.garlic.com/~lynn/2003g.html#40 How to cope with missing values - NULLS?
https://www.garlic.com/~lynn/2003g.html#41 How to cope with missing values - NULLS?
https://www.garlic.com/~lynn/2004l.html#75 NULL
https://www.garlic.com/~lynn/2005.html#15 Amusing acronym
https://www.garlic.com/~lynn/2005i.html#35 The Worth of Verisign's Brand
https://www.garlic.com/~lynn/2005m.html#19 Implementation of boolean types
https://www.garlic.com/~lynn/2005r.html#15 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005t.html#20 So what's null then if it's not nothing?
https://www.garlic.com/~lynn/2005t.html#23 So what's null then if it's not nothing?
https://www.garlic.com/~lynn/2005t.html#33 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2006e.html#34 CJ Date on Missing Information
https://www.garlic.com/~lynn/2006q.html#22 3 value logic. Why is SQL so special?
https://www.garlic.com/~lynn/2006q.html#23 3 value logic. Why is SQL so special?
https://www.garlic.com/~lynn/2006q.html#29 3 value logic. Why is SQL so special?

Storage Philosophy Question

From: Anne & Lynn Wheeler <lynn@garlic.com>
Newsgroups: bit.listserv.ibm-main
Subject: Re: Storage Philosophy Question
Date: Mon, 16 Oct 2006 09:36:21 -0600
R.Skorupka@ibm-main.lst (R.S.) writes:
1. DASD mirroring does not prevent you against errors in data. Errors made by human, software bug, etc. 2. Campus area seems to be too small to talk about serious DR centre. Too short distance. Numerous disaster types could spread both locations. 3. There is no real protection (with excpetions to NORAD etc.) against terrorist attacks. They can attack two locations at the same time, the distance is irrelevant.

in the early 80s ... there were some studies that chose 40 miles as a minimum number ... although you still have to look at common failure modes ... like 40 miles along the same river (flood plain), where flooding would hit both locations. some places have extra redundancy with more than single replication.

when we were doing ha/cmp product
https://www.garlic.com/~lynn/subtopic.html#hacmp

we coined the terms disaster survivability and geographic survivability
https://www.garlic.com/~lynn/submain.html#available

we talked to a number of operations about their experiences.

one operation had carefully chosen a metropolitan datacenter bldg for (physically) diverse routes ... two different water mains on opposite sides of the bldg, two power feeds from different power substations on opposite sides of the bldg, and four telco feeds entering from four physical sides, from four different central offices.

their datacenter went down when they had a transformer blow and the bldg. had to be evacuated because of PCB contamination.

some of this gets into other kinds of threat models ... misc. postings mentioning vulnerabilities, threats, exploits, fraud, etc
https://www.garlic.com/~lynn/subintegrity.html#fraud

and/or assurance
https://www.garlic.com/~lynn/subintegrity.html#assurance

Why these original FORTRAN quirks?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why these original FORTRAN quirks?
Newsgroups: alt.folklore.computers
Date: Mon, 16 Oct 2006 17:18:28 -0600
Peter Flass <Peter_Flass@Yahoo.com> writes:
I used this a few times, but I try to make sure the code is bracketed with "keep together" lines and lots of comments. It seems a little error-prone to me.

re:
https://www.garlic.com/~lynn/2006s.html#26 Why these original FORTRAN quirks?
https://www.garlic.com/~lynn/2006s.html#27 Why these original FORTRAN quirks?

take for example sio/siof (start i/o) instruction
https://www.garlic.com/~lynn/gcard.html#8.4 Condition Codes - Input/Output Instructions

from above


SIO, SIOF    Successful    CSW stored    Busy    Not operational
                 00            01         10           11
                1000          0100       0010         0001

... something like

SIO
BC        8,success            wait for completion
BC        4,cswstored          analyze status
BC        2,busy               requeue and wait to redrive
BC        1,notthere           error, not there
•         can't get here since all possible conditions have been used

...

note that busy/10 was typically subchannel busy. however, in cswstored/01 you could also get a BUSY status.
https://www.garlic.com/~lynn/gcard.html#6 Channel Status Word

from above ... SM+BUSY would indicate control unit busy, BUSY by itself would indicate device busy. however, with the introduction of disk string-switch, it was possible that the string-switch could be busy ... so they had to make BUSY (by itself) serve a dual purpose for both device busy and string-switch busy.

then there was a condition called "short control unit busy" ... i.e. SM+BUSY+CUE ... on initial selection, the control unit was busy, but before the handshaking had completed, the control unit became free. the standard process for short control unit busy was to immediately redrive the operation.

however, when i was trying to drop a bullet-proof i/o supervisor into the disk engineering and product test labs
https://www.garlic.com/~lynn/subtopic.html#disk

... one of the issues was a flakey control unit that always presented SM+BUSY+CUE ... and unless you were prepared ... things could get into an endless loop. sometimes the code might skip the "BC 4,cswstored" instruction and let execution fall thru into the initial csw stored analysis

SIO
BC        8,success            wait for completion
BC        2,busy               requeue and wait to redrive
BC        1,notthere           error, not there
•         falls thru into cswstored analysis

...

"not-there" could be misconfiguration. however, it could also be operators playing around with multi-channel switches on things like tape drives ... resetting switches so that a tape drive would get manually reconfigured to a different processor complex.

Why magnetic drums was/are worse than disks ?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why magnetic drums was/are worse than disks ?
Newsgroups: comp.arch
Date: Tue, 17 Oct 2006 05:37:34 -0600
"Del Cecchi" <delcecchiofthenorth@gmail.com> writes:
When I wrote "two heads per platter", I wasn't trying to suggest that the heads were on the same side of the platter. I was referring to one head on either side of the platter, where both sides of a platter are covered with magnetic film.

re:
https://www.garlic.com/~lynn/2006s.html#23 Why magnetic drums was/are worse than disks ?

360 2303/2301 were nearly identical drums ... with 2301 read/writing four heads in parallel (with four times the transfer rate of 2303)

the 370 replacement for the 2303/2301 was the 2305 fixed-head disk. there were two models ... the larger capacity model had about the same transfer rate as the 2301 and three times the capacity. the other model had half the capacity ... the same number of heads but half the rotational delay ... i.e. instead of one head per track, there were two heads per track offset by 180 degrees (but only one head reading/writing at a time).

a lot of the 2305s were used for virtual memory paging ... with operating systems optimized for doing full-track transfers ... as a result, the larger capacity was more important than cutting the avg. rotational delay in half.

later there were some number of emulated 2305s (referred to as 1655) made from memory chips that had failed some number of standard memory tests. recent post discussing 1655s
https://www.garlic.com/~lynn/2006r.html#36 REAL memory column in SDSF

and from not so long ago and far away ... an early variation on attempting to get higher recording densities and/or transfer rates. in the following, servo feedback tracking controls per arm (head) would provide fine-grain servo positioning of the active head over the platter being read/written.

there had been some discussion of doing parallel read/writes of all heads on a disk ... but as physical dimensions became smaller ... it would require that different heads on different surfaces have simultaneous, independent servo feedback tracking (difficult if they are all on the same physical structure).

Date: Sun, 22 Nov 1987 09:44:56 PST
From: wheeler
Subject: DASD operational characteristics;

Recent article quotes IBM Almaden as saying that they have come up method for increasing DASD density by a factor of 50. Several years ago, DASD engineers were sure that they could go to a factor of 100 over 3380. On 3380, the inter-track gap was 20 track widths ... with servo feedback tracking controls on the arm ... 3380s could go to single track-width gaps (increase by factor of 10). Vertical recording techniques can give a bit density of another factor of 10 over current technologies. This accounts for a factor of 100 over 3380s. Since that time, "double-density" 3380Es have cut the inter-track gap to 10 track-widths.

In my CDROM review, there was at least two or three japanese companies that are offering vertical recording on which they place CDROM-type capacities on 5in floppies (i.e. multi-hundred mbytes).

The increase in DASD capacity by 100 (or 50) has a severe impact on arm loading, access. One way of characterizing data is calculate the number of times there is some access to that data per second. That calculation can be prorated per megabyte. The resulting figure gives a number that says

for "such & such" a configuration/load or "such & such" a number of accessing programs/users ... specific data has an access rate of "N read/writes per second per megabyte".

Such a data profile characterization usually indicates for many classes of data, it is not possible to put more than a hundred megabytes of data on a DASD (regardless of the available space) w/o saturating arm access capacity and degrading performance.

Vertical recording has a couple implications that lead towards requiring full-track I/O with full-track buffers co-located with the drive. Given that the drive is spinning at the same speed, then there is 10* as much data passing under the heads in the same unit time ... i.e. the data transfer rate between head and disk increases by a factor of 10. If a track currently contains 500kbytes and transfers data at 20mbit/sec, vertical recording will make that 5mbytes/track and 200mbit/sec transfer rate.


... snip ... top of post, old email index

and another proposal for significantly optimizing inter-track gaps and servo-tracks per data tracks ... however this has the same physical head (and servo feedback mechanism) read/write 16 tracks in parallel.

Date: Wed, 30 Dec 1987 07:46:14 PST
From: wheeler

xxxxxxx wants me to go to ykt with him week after next. He wants to talk to yyyyyy about doing a disk head with 16+2 r/w surfaces. There would be two r/w surfaces on the outer edges to handle servo track information with 16 data r/w surfaces between them. Data would be transferred in parallel over the 16 r/w data areas. Head would format a surface by writing servo tracks and 16 data tracks at the same time, then move to the next track grouping. Surface would have one servo track for every 16 data tracks (with head nominally overlapping two servo tracks). xxxxxx talks about gigabit transfer rate and possibly gigabyte storage on 5in surface.


... snip ... top of post, old email index

some number of posts mentioning working with the disk engineers in bldg. 14
https://www.garlic.com/~lynn/subtopic.html#disk

Why magnetic drums was/are worse than disks ?

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why magnetic drums was/are worse than disks ?
Newsgroups: comp.arch
Date: Tue, 17 Oct 2006 05:56:35 -0600
"robertwessel2@yahoo.com" <robertwessel2@yahoo.com> writes:
Certain models of IBM 3340 did the same thing.

3330s had 20 surfaces and 20 read/write heads on the arm ... but only 19 "data" heads. the 20th surface was used for encoding positional information. compared to the 360 2311/2314 disks, the "370" 3330s introduced a new command structure called rotational position sensing (RPS).

the normal dasd/disk i/o command sequence positioned the arm and then kept the channel, controller and disk tied up "searching" for the desired record to be read/written. the new RPS sequence allowed careful programming to approximately position the disk rotation before starting the record search operation.
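
schematically (not exact CCW syntax ... in the style of the channel program listings elsewhere in these posts), the RPS-era sequence looked something like ... with the set sector command letting the channel disconnect until the rotation approached the target:

SEEK            position the arm
SET SECTOR      disconnect; reconnect when rotation nears the target sector
SEARCH ID EQ    compare the desired record id
TIC *-8         loop back to the search until equal
READ DATA       transfer the record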

this was left over from the original 360 trade-off ... where (excess) i/o capacity was traded off against the very scarce real memory resource. i/o command programming and all arguments for i/o command programming were stored in processor memory and (constantly) accessed by channel/controller/device. most disk and filesystem format structures were kept on disk ... as opposed to being cached in processor storage. i/o command programming could specify a search argument for some record identification (like a member of a program library) and turn the disk loose to search for it. recent post about multi-track search operations
https://www.garlic.com/~lynn/2006s.html#15 THE on USS?

lots of past posts mentioning DASD operational characteristics and multi-track search trade-off characteristics
https://www.garlic.com/~lynn/submain.html#dasd

i've done a q&d conversion of old green card ios3270 file to html
https://www.garlic.com/~lynn/gcard.html


https://www.garlic.com/~lynn/gcard.html#12 channel command word
https://www.garlic.com/~lynn/gcard.html#26 DASD command codes
https://www.garlic.com/~lynn/gcard.html#26.1 CKD command codes
https://www.garlic.com/~lynn/gcard.html#26.4 RPS sector formulae

Why magnetic drums was/are worse than disks ?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why magnetic drums was/are worse than disks ?
Newsgroups: comp.arch
Date: Wed, 18 Oct 2006 07:04:56 -0600
Benny Amorsen <benny+usenet@amorsen.dk> writes:
What about simply having two entirely independent arms, both capable of reaching all tracks? That could potentially cut rotational latency in half. Yay for a virtual 30000rpm disk...

it was called the 2305-1 ... a fixed-head disk with pairs of heads per track, offset by 180 degrees ... it had the same number of heads as the 2305-2 ... but only half the data capacity.

picture of 2301 drum
http://www.columbia.edu/cu/computinghistory/drum.html

picture of 2305 fixed head disk (announced 28jan70, withdrawn 30jan80):
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_2305.html

note in the above, it mentions that the 2305-1 has half the capacity and half the (rotational delay) access time, 2.5 milliseconds (compared to the 2305-2). it also says that it has twice the data transfer rate (3mbytes/sec), implying that it was transferring data from both heads at once. however, i don't know of any 370 channels that ran at 3mbytes/sec. my understanding was that the 2305-1 just transferred data from whichever head saw the record first.

the later 3380 disks and 3880 disk controller supported 3mbyte/sec "data streaming" transfer ... and there was a really ugly "speed matching buffer" project for 3880 disk controller to allow attachment to 168 (2880 blockmux) channel (running at 1.5mbytes/sec). recent post discussing 3mbyte/sec "data streaming"
https://www.garlic.com/~lynn/2006g.html#0 IBM 3380 and 3880 maintenance docs needed

the 360/370 channels were limited to 200ft aggregate distance and did a synchronous end-to-end handshake for each byte transferred. going to 3mbyte/sec "data streaming" (and up to 400ft aggregate channel distance), they relaxed the end-to-end handshake on every byte.
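
a back-of-envelope sketch (assuming ~1.5ns/ft cable signal propagation ... illustrative, not the actual channel protocol timings) of why a per-byte end-to-end handshake caps the transfer rate as cable length grows:

#include <stdio.h>

int main(void)
{
    double ns_per_ft = 1.5;    /* assumed cable propagation delay */
    double dist_ft[] = { 50, 100, 200, 400 };

    for (int i = 0; i < 4; i++) {
        /* one byte per round trip: request out, response back */
        double ns_per_byte = 2.0 * dist_ft[i] * ns_per_ft;
        printf("%4.0fft: handshake-limited to about %.1f mbytes/sec\n",
               dist_ft[i], 1000.0 / ns_per_byte);
    }
    return 0;
}

at 200ft that works out to well under 2mbytes/sec regardless of device speed ... relaxing the per-byte handshake (data streaming) takes the round trip out of the per-byte path.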

some other drift ... i'm looking for a couple more code names:


??         2301       fixed-head/track (2303 but 4 r/w heads at a time)
??         2303       fixed-head/track r/w 1 head (1/4th rate of 2301)
Corinth    2305-1     fixed-head/track
Zeus       2305-2     fixed-head/track
??         2311
??         2314
??         2321       data-cell "washing machine"
Piccolo    3310       FBA
Merlin     3330-1
Iceberg    3330-11
Winchester 3340-35
??         3340-70
??         3344       (3350 physical drive simulating multiple 3340s)
Madrid     3350
NFP        3370       FBA
Florence   3375       3370 supporting CKD
Coronado   3380 A04, AA4, B04
EvergreenD 3380 AD4, BD4
EvergreenE 3380 AE4, BE4
??         3830       disk controller, horizontal microcode engine
Cybernet   3850       MSS (also Comanche & Oak)
Cutter     3880       disk controller, jib-prime (vertical) mcode engine
Ironwood   3880-11    (4kbyte/page block 8mbyte cache)
Sheriff    3880-13    (full track 8mbyte cache)
Sahara     3880-21    (larger cache for "11")
??         3880-23    (larger cache for "13")

recent posts mentioning (370) 2305 fixed head disks (replacement for the 360 2301/2303 drums)
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
https://www.garlic.com/~lynn/2006c.html#1 Multiple address spaces
https://www.garlic.com/~lynn/2006c.html#8 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006c.html#46 Hercules 3.04 announcement
https://www.garlic.com/~lynn/2006e.html#46 using 3390 mod-9s
https://www.garlic.com/~lynn/2006g.html#0 IBM 3380 and 3880 maintenance docs needed
https://www.garlic.com/~lynn/2006i.html#27 Really BIG disk platters?
https://www.garlic.com/~lynn/2006i.html#41 virtual memory
https://www.garlic.com/~lynn/2006j.html#11 The Pankian Metaphor
https://www.garlic.com/~lynn/2006k.html#57 virtual memory
https://www.garlic.com/~lynn/2006m.html#5 Track capacity?
https://www.garlic.com/~lynn/2006q.html#1 Materiel and graft
https://www.garlic.com/~lynn/2006r.html#23 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#30 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#36 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006s.html#23 Why magnetic drums was/are worse than disks ?
https://www.garlic.com/~lynn/2006s.html#30 Why magnetic drums was/are worse than disks ?

Why magnetic drums was/are worse than disks ?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why magnetic drums was/are worse than disks ?
Newsgroups: comp.arch
Date: Wed, 18 Oct 2006 08:45:56 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
note in the above, it mentions that the 2305-1 has half the capacity and half the (rotational delay) access time, 2.5 milliseconds (compared to the 2305-2). it also says that it has twice the data transfer rate (3mbytes/sec), implying that it was transferring data from both heads at once. however, i don't know of any 370 channels that ran at 3mbytes/sec. my understanding was that the 2305-1 just transferred data from whichever head saw the record first.

re:
https://www.garlic.com/~lynn/2006s.html#32 Why magnetic drums was/are worse than disks ?

however, this journal article from 1972: "Channel and direct access device architecture"
http://www.research.ibm.com/journal/sj/113/ibmsj1103B.pdf

states that the 2880 channel was capable of operating at 3mbyte/sec (but it wasn't the "data streaming" later used for 3380/3880s ... so i suspect that the maximum cable distance for 3mbyte/sec operation was significantly reduced) ... so maybe the 2305-1 was reading/writing from both heads simultaneously (to get 3mbyte/sec)???

you've got two heads offset 180 degrees for the same data. rotational latency is cut in half (compared to the 2305-2), not by spinning twice as fast ... but by using both heads to scan for the same record ... and the first head that encounters the record starts the transfer.

the 2301 (drum) got four times the data transfer rate (compared to the 2303) by read/writing four heads in parallel (however, the record data was synchronized on four tracks in parallel ... so the rotational latency on the 2301 was the same as 2303, as was the capacity).

if you arranged pairs of bytes so that they were on opposing sides of the same track ... then dynamically switched head-pairs between which was the odd-byte head and which was the even-byte head ... i.e. half the track would be odd bytes and the other half of the track would be even bytes ... then either head could encounter the start of the odd bytes while the opposing head encountered the start of the even bytes.

Basic Question

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Basic Question
Newsgroups: talk.politics.crypto
Date: Wed, 18 Oct 2006 11:36:04 -0600
Hagen Ladwig <hal22@t-online.de> writes:
In short, if you use digital signatures for example, they don't confirm 100% that you signed the document, but with an error possibility of 10^-30 (I don't know to what parameters and system this would correspond, it is just a very, very small number).

nominally ... signing a document (as in human signature) implies that you have read, understood, agreed, approved and/or authorized.

i've commented before (we had been called in to help word-smith the cal. electronic signature legislation ... and later the federal electronic signature legislation)
https://www.garlic.com/~lynn/subpubkey.html#signature

that there appears to be some semantic confusion with the word "signature" occurring in both the term "digital signature" and the term "human signature".

nominally a digital signature is used to indicate 1) whether something has been modified and 2) where something originated (authentication)

a secure hash of something is taken and then encoded with a private key ... resulting in a "digital signature"

the relying-party/recipient then takes the "something", recomputes the secure hash, decodes the "digital signature" (with the public key corresponding to the private key used in the original encoding) and compares the two secure hashes. if they are the same, then the relying-party/recipient can assume:

1) the "something" hasn't been modified since the "digital signature" 2) it originated from the party associated with the public/private key pair

this can be assumed to be something you have authentication ... the originating party has (unique) access and/or use of the associated "private key" (i.e. some specific person is in physical possession of a unique hardware token containing the only copy of a private key used for the secure hash encoding).

however, nothing in this process carries the implication that the originating party has read, understood, agrees, approves, and/or authorizes the contents ... just where the contents originated.
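
a toy sketch of the encode/decode flow just described ... textbook RSA with tiny numbers and a trivial stand-in for a secure hash (illustration only ... nothing here is usable crypto):

#include <stdio.h>

/* (base^exp) mod m by square-and-multiply */
static unsigned long modpow(unsigned long base, unsigned long exp,
                            unsigned long m)
{
    unsigned long r = 1;
    base %= m;
    while (exp) {
        if (exp & 1)
            r = r * base % m;
        base = base * base % m;
        exp >>= 1;
    }
    return r;
}

/* stand-in for a secure hash (it isn't one) */
static unsigned long toy_hash(const char *msg)
{
    unsigned long h = 0;
    while (*msg)
        h = (h * 31 + (unsigned char)*msg++) % 3233;
    return h;
}

int main(void)
{
    /* toy key pair: n = 61*53 = 3233, e = 17, d = 2753 */
    unsigned long n = 3233, e = 17, d = 2753;
    const char *something = "wire 100 to account 42";

    /* originator: secure hash of the something, encoded with
       the private key ... the "digital signature" */
    unsigned long sig = modpow(toy_hash(something), d, n);

    /* relying party: recompute the hash, decode the signature
       with the public key, compare the two */
    if (modpow(sig, e, n) == toy_hash(something))
        printf("unmodified, and originated with the key-pair holder\n");
    else
        printf("modified, or some other key\n");
    return 0;
}

note that nothing in the verification step says anything about what the key-pair holder intended ... only that the bits match.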

this confusion has given rise to things like possible dual-use attack scenario.

there are some number of authentication protocols ... where a server generates and transmits a random number. the client digitally signs the random number and returns the signature ... for something you have authentication (random number used as countermeasure to replay attacks). there is no presumption that the person associated with the client is even cognizant of the random number value.

now if the same public/private key pair can also be used in infrastructures that assume some equivalence to "human signature" (but fail to provide any additional assurance as to human intent) .... then you could potentially have an attacker compromising some server ... and transmitting what appears to be a valid transaction/contract in lieu of a random number (as part of an authentication protocol). the digital signature is then automatically generated w/o any implication of a human having read, understood, agreed, approved, and/or authorized.

the countermeasure to the dual-use attack ... is to always require an additional indication as to whether there is any implication of human signature, human intent, and/or read, understood, agrees, approves, and/or authorizes.

another possible countermeasure ... is to NEVER use a private key in any sort of digital signature based authentication protocol, where something might be digitally signed that hadn't been read, understood, agreed, approved, and/or authorized.

this strays into other assurance areas
https://www.garlic.com/~lynn/subintegrity.html#assurance

like the EU FINREAD terminal standard
https://www.garlic.com/~lynn/subintegrity.html#finread

where you have an independent, certified hardware token interface. the high-assurance, certified terminal is provided with both a pin-entry interface and a display as countermeasures to

1) keyloggers that can capture the hardware token pin-entry, which can then be used by compromised software to generate operations for the hardware token to digitally sign ... w/o the associated human's knowledge

2) compromised software that displays one thing but provides the hardware token with something completely different for actual digital signature (display a transaction for $5 but the actual digital signature is on a transaction of $500).

the basic EU FINREAD terminal standard provided some protection to the user against compromised software on their personal PC. However, there is nothing in the basic standard that provides proof to a relying party that such a terminal was used for the operation.

for the x9.59 financial standard, the x9a10 working group had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#x959

the x9.59 financial standard allows for a (hardware token) digital signature (for something you have authentication) as well as a certified terminal digital signature.

the digital signature by the certified terminal isn't so much for authentication ... but evidence (to a relying party) that certain processes had been followed, providing some basis for inferring human intent as the basis for human signature (i.e. read, understood, agrees, approves, and/or authorizes).

the "digital signature" by the hardware token is simple something you have authentication ... but does not imply "human signature". it is the digital signature by the certified terminal that inplies a process that be used to imfer human signature.

a couple recent postings dealing with digital signatures for being able to attest to something
https://www.garlic.com/~lynn/aadsm25.htm#35 signing all outbound email
https://www.garlic.com/~lynn/aadsm25.htm#44 TPM & disk crypto

and various past posts mentioning dual-use attack on digital signature operations:
https://www.garlic.com/~lynn/aadsm17.htm#57 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm17.htm#59 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#1 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#2 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#3 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#56 two-factor authentication problems
https://www.garlic.com/~lynn/aadsm19.htm#41 massive data theft at MasterCard processor
https://www.garlic.com/~lynn/aadsm19.htm#43 massive data theft at MasterCard processor
https://www.garlic.com/~lynn/aadsm20.htm#0 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm21.htm#5 Is there any future for smartcards?
https://www.garlic.com/~lynn/aadsm21.htm#13 Contactless payments and the security challenges
https://www.garlic.com/~lynn/aadsm23.htm#13 Court rules email addresses are not signatures, and signs death warrant for Digital Signatures
https://www.garlic.com/~lynn/2004i.html#17 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004i.html#21 New Method for Authenticated Public Key Exchange without Digital Certificates

Turbo C 1.5 (1987)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Turbo C 1.5 (1987)
Newsgroups: alt.folklore.computers
Date: Wed, 18 Oct 2006 12:00:53 -0600
scott writes:
I am looking for a copy of Turbo C 1.5 from 1987 for some historical research I'm doing into computing from that time period.

Turbo C 1.0 and 1.5 were different from 2.0 -- by the time 2.0 came out, Borland was switching to its "classic" blue window that was seen from that point onward. Version 1 had the black background in Turbo Basic 1.1 and Turbo Prolog 2, which were contemporaries. By the time Turbo C 2 came out, the black background products were gone - the main Turbo C and Turbo Pascal got the "classic" look in their next releases. For historical accuracy in screen grabs and something I'm writing, I would like to be able to do screen captures of 1.5. I have been unable to find it at the Borland museum, Vetusware.com, eMule, etc although almost all other versions are available; I haven't even seen it for sale. (Note: The museum has Turbo *C++* 1.0, which is OFTEN mislabeled as "Turbo C 1.0". The Turbo C labelled as 1.0 at Vetusware.com is actually 2.0.) The only hope I have is if someone has a copy...

I am also not sure if 1.5 implements the then-draft ANSI standard. I am eager to look into the header files and see if it does. 2.0, which was released in 1988, seems to adhere to what was still (I think) a draft standard at the time it came out. The 1.5 release may still have been a K&R compiler. (The 1.0 release is said to be extremely buggy - 1.5 was a release that apparently was more a bug fix than anything else.)


i've been looking for a way to read the 100 or so 5.25in. diskettes that i still have. i've got four (personal) diskettes that say they are backup versions of various versions of turbo pascal. i've also got turbo pascal installation diskettes

v2      ... copyright 1983
v3.01a  ... copyright 1983
(no version number, but says copyright 1987)
v5.5    ... copyright 1987, 1989

i've got three different sets of turbo c installation diskettes (but they don't carry version numbers).

four diskettes copyright 1986 (black label with pink? corner symbol) ide, command line/utilities, header files/libraries/examples, libraries/examples

five diskettes copyright 1986 (black label with blue corner symbol) ide, command line/utilities, header files/libraries, libraries, examples

six diskettes copyright 1987, 1988 (yellow label)

Turbo C 1.5 (1987)

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Turbo C 1.5 (1987)
Newsgroups: alt.folklore.computers
Date: Wed, 18 Oct 2006 13:45:04 -0600
Al Kossow <aek@spies.com> writes:
I would suggest using Imagedisk
http://www.classiccmp.org/dunfield/img/

If you need a drive, I should be able to loan one to you, or if they are of historical interest, I could read the floppies for you at the Museum.


re:
https://www.garlic.com/~lynn/2006s.html#35 Turbo C 1.5 (1987)

the installation diskettes i could donate ... but i would still like to have access to a drive ... to try and read the rest of the diskettes ... i've got several that say dos 1.86 (released internally with lots of bells and whistles) ... which look to be dual-sided 40trk diskettes ... but i believe formatted with 80trks/side and possibly 10 sectors per track (vanilla original density was 9 sectors per track).

i believe the AT high-density 5.25in. diskette drives could be used to read normal density 40trk diskettes (as long as you didn't try writing) ... and with the right software, also read normal density dual-sided diskettes formatted w/80trks. i once had an early (employee purchase plan) pc with a pair of standard diskette drives ... to which i eventually added a pair of half-height teac external drives ... that were capable of normal density 80trk operation.

normal density, single-sided, 40trk, 8 sectors/track was 160kbytes. normal density, single-sided, 40trk, 9 sectors/track was 180kbytes; dual-sided, 40trk, 9 sectors/track was 360kbytes. normal density, dual-sided, 80trk, 10 sectors/track was 800kbytes ... while (PC/AT) high-density, dual-sided, 80trk was 1.2mbytes.
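
the arithmetic behind those numbers (512-byte sectors assumed throughout ... the 1.2mbyte AT high-density format being 15 sectors/track):

#include <stdio.h>

/* sides x tracks x sectors x 512 bytes, reported in kbytes */
static long kbytes(int sides, int tracks, int sectors)
{
    return (long)sides * tracks * sectors * 512 / 1024;
}

int main(void)
{
    printf("1-sided, 40trk,  8 sec/trk: %4ldk\n", kbytes(1, 40, 8));
    printf("1-sided, 40trk,  9 sec/trk: %4ldk\n", kbytes(1, 40, 9));
    printf("2-sided, 40trk,  9 sec/trk: %4ldk\n", kbytes(2, 40, 9));
    printf("2-sided, 80trk, 10 sec/trk: %4ldk\n", kbytes(2, 80, 10));
    printf("2-sided, 80trk, 15 sec/trk: %4ldk\n", kbytes(2, 80, 15));
    return 0;
}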

Turbo C 1.5 (1987)

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Turbo C 1.5 (1987)
Newsgroups: alt.folklore.computers
Date: Wed, 18 Oct 2006 15:36:24 -0600
scott writes:
Also, for people doing research along this line, the Turbo C++ 1.0 or 1.01 image at the Borland Museum has circulated on the Internet through web pages and file sharing and is often mislabeled as Turbo C 1.0. Maybe people don't know the difference, but you will frequently see the C++ compiler mislabeled as Turbo C 1.0. The two are very different compilers released about 4-5 years apart.

i also have a couple different versions of turbo C++ installation diskettes (these are 3.5in.) ... which are different from the turbo C (5.25in.) installation diskettes.

re:
https://www.garlic.com/~lynn/2006s.html#35 Turbo C 1.5 (1987)
https://www.garlic.com/~lynn/2006s.html#36 Turbo C 1.5 (1987)

Design life of S/360 components?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Design life of S/360 components?
Newsgroups: alt.folklore.computers
Date: Wed, 18 Oct 2006 15:45:24 -0600
"Ancient_Hacker" <grg2@comcast.net> writes:
Not that critical a consideration in those days. Labor was cheap, computers were expensive. Plus a IBM engineer had to show up once a week to oil up the card reader and clean the tape drives.

my first student programming job was to do a 360 assembler port of the 1401 MPIO program. the university had a 709 with a 1401 handling unit record .... i.e. card-to-tape and tape-to-printer/punch.

as part of the transition, they got a 360/30 replacement for the 1401. they could run the 360/30 in 1401 hardware emulation mode (performing unit record front end for the 709). my job was to implement the 709 unit record front end functions in 360 assembler.

i got to design and implement my own monitor, interrupt handler, device drivers, storage allocation, etc. normal university operation was to shutdown the datacenter at 8am on saturday and not resume until 8am monday. this meant that i normally could have the whole datacenter to myself from 8am saturday until business resumed at 8am monday (48hr shift and then a quick break/shower before going to monday classes).

i fairly quickly learned, before starting anything else, to first clean the tape heads (something the operators would normally do at the start of each shift) ... and also to take the 2540 reader/punch apart and clean it.

Why these original FORTRAN quirks?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why these original FORTRAN quirks?
Newsgroups: alt.folklore.computers
Date: Wed, 18 Oct 2006 16:53:50 -0600
Peter Flass <Peter_Flass@Yahoo.com> writes:
Now I'm confused. First I agreed with you, on second thought I realized HASP probably translated the control characters, on third thought, I remembered we used ASA control characters with DOS and no spooler, so the 1403 and follow-ons must have processed the ASA control characters in hardware.

from gcard ios3270 converted to html
https://www.garlic.com/~lynn/gcard.html


https://www.garlic.com/~lynn/gcard.html#9 ANSI control characters


Code   Action before printing                      Code  Stacker
blank  Space 1 line                                 V       1
0      Space 2 line                                 W       2
-      Space 3 line                                 X       3
+      Suppress space                               Y       4
1      Skip to channel 1 (line 1 on new page)       Z       5
2      Skip to channel 2
3      Skip to channel 3
4      Skip to channel 4
5      Skip to channel 5
6      Skip to channel 6
7      Skip to channel 7
8      Skip to channel 8
9      Skip to channel 9
A      Skip to channel 10
B      Skip to channel 11
C      Skip to channel 12

... snip ...

i.e. control characters were the first byte of the data.

if the first byte of the data was indicated as a control character, then the printer driver would generate a ccw that did the appropriate control function ... followed by a ccw that wrote the rest of the data to the printer.

and printer (ccw) command codes
https://www.garlic.com/~lynn/gcard.html#24 Printer Control Characters


Action            After Write   Immediate
Space 0 Lines         01          01  (sometimes called write without spacing)
Space 1 Line          09          0B
Space 2 Lines         11          13
Space 3 Lines         19          1B
Skip to Channel 0     -           83  (3211 and 3203-4 only)
Skip to Channel 1     89          8B
Skip to Channel 2     91          93
Skip to Channel 3     99          9B
Skip to Channel 4     A1          A3
Skip to Channel 5     A9          AB
Skip to Channel 6     B1          B3
Skip to Channel 7     B9          BB
Skip to Channel 8     C1          C3
Skip to Channel 9     C9          CB
Skip to Channel 10    D1          D3
Skip to Channel 11    D9          DB
Skip to Channel 12    E1          E3

... snip ...

i.e. machine control characters in the first byte were then used directly as the CCW command code.

the printer CCW command code performed the indicated control operation after writing the associated line ... while the ANSI control characters indicated the operation to be performed before writing the associated line.

the brain-dead printer driver would convert an ANSI control character into a pair of CCWs: first a CCW that performed just the (immediate) control operation, chained to a CCW that wrote the data w/o any motion.

fancier printer driver ... would take the ANSI control operation indicated by the following line ... and use it to modify the CCW for the previous line.

say ANSI had three lines with a blank in the first byte, followed by a line with "1" in the first byte (skip to start of page). the brain-dead operation would be the CCW channel program:


0B    space line
01    write data, no space
0B    space line
01    write data, no space
0B    space line
01    write data, no space
8B    skip to channel 1 (start of page)
01    write data, no space

so a little fancier would be

0B    space line
09    write data, space line
09    write data, space line
89    write data, skip to channel 1 (start of page)
01    write data, no space

........
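
a minimal C sketch of that fancier conversion (command codes taken from the tables above ... only the common controls handled, names illustrative):

#include <stdio.h>

/* after-print write command code for a given ANSI control char */
static unsigned char ansi_to_write(char c)
{
    switch (c) {
    case '+': return 0x01;   /* write, suppress space */
    case ' ': return 0x09;   /* write, space 1 line */
    case '0': return 0x11;   /* write, space 2 lines */
    case '-': return 0x19;   /* write, space 3 lines */
    case '1': return 0x89;   /* write, skip to channel 1 */
    default:  return 0x09;   /* assume single space */
    }
}

/* immediate (no write) command code for a given ANSI control char */
static unsigned char ansi_to_immed(char c)
{
    switch (c) {
    case ' ': return 0x0B;
    case '0': return 0x13;
    case '-': return 0x1B;
    case '1': return 0x8B;
    default:  return 0;      /* '+': no motion needed */
    }
}

/* emit command codes for n print lines, each with its ANSI
   control character in byte 0 */
void build_channel_program(char *lines[], int n)
{
    if (n <= 0)
        return;

    /* honor the first line's own before-print control */
    unsigned char immed = ansi_to_immed(lines[0][0]);
    if (immed)
        printf("CCW %02X  (immediate motion)\n", immed);

    for (int i = 0; i < n; i++) {
        /* the motion for THIS write comes from the NEXT line's
           control byte; the last line just writes w/o spacing */
        unsigned char cmd = (i + 1 < n) ? ansi_to_write(lines[i + 1][0])
                                        : 0x01;
        printf("CCW %02X  write \"%s\"\n", cmd, lines[i] + 1);
    }
}

fed the four example lines above (three blanks and a "1"), it emits the same 0B/09/09/89/01 sequence.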

so standard os/360 used DCB/DD RECFM parameter to indicate whether the first byte was ANSI printer control or machine printer control.

various combinations of the RECFM= parameter:


F       FA      FB      FBA
FBM     FBS     FBSA    FBSM
FM      FS      FSA     FSM
V       VA      VB      VBA
VBM     VM      U       UA
UM

where

F - Fixed length records.
V - Variable length records.
U - Undefined length records.
B - Blocked records.
S - Standard blocks.
A - ANSI printer control characters are in the first byte of each record.
M - Machine printer control characters are in the first byte of each record.

... snip ...

Ranking of non-IBM mainframe builders?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ranking of non-IBM mainframe builders?
Newsgroups: alt.folklore.computers
Date: Thu, 19 Oct 2006 10:26:08 -0600
hancock4 writes:
*Back then machines were so damn expensive we stretched them to the limit. We had a dual 158 with 6 Meg and we probably could've used a 168 with 8 or 10 meg instead. Programmers were encouraged to work odd hours to compile and test when the machine was less busy. Capacity control was a big issue back then. We upgraded (3031?) and had more power.

158 had integrated channels ... the microcode engine was shared between the microcode implementing 370 and the microcode implementing channel (i/o).

for the 303x series ... they packaged a 158 microcode engine w/o the 370 microcode (just the integrated channels) as a "channel director". then all the 303x machines got external channel director boxes ...

the 3031 was a 158 microcode engine with just the 370 microcode, plus a separate 158 microcode engine as a "channel director" (running the integrated channel microcode). the 3031 executed 370 faster than the 158 ... since the engine was dedicated to 370 and not shared with also doing the channel function.

the 3032 was 168-3 repackaged to use "channel director" instead of 2880s.

the 3033 started out being the 168-3 wiring diagram mapped to faster chips. originally it was going to be about 20 percent faster than the 168-3's 3mips. there was some subsequent optimization work, with the 3033 shipping to customers at about 4.5mips (50 percent faster than the 168-3's 3mips).

for other drift ... recent post on relatively recent business/cobol application optimization
https://www.garlic.com/~lynn/2006s.html#24 Curiosity CPU % for COBOL program

as mentioned in the above ... a lot of the performance tuning, optimization, and workload profiling work at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

in the early 70s evolved into things like capacity planning
https://www.garlic.com/~lynn/submain.html#bench

in the 70s & 80s ... an increasing number of business operations started offering online & realtime transactions. however, for various reasons the online/realtime transaction only partially completed the full business process ... with the remainder of the process being left for overnight serial batch operations (numerous settlement and aggregation operations were found to be significantly more efficient done in serial batch processes ... than attempting to coordinate in the online/realtime environment)

in the 90s, an increasing number of these business operations (especially financial institutions) were experiencing severe constraints with their overnight batch window. part of this was the ever increasing number of online/realtime transactions that depended on overnight completion. another part was that as they went national and/or global, the time available for the overnight batch window was shrinking.

numerous efforts were started in the 90s and still continue today (with some interruption for all the y2k remediation) for things like straight-through processing ... i.e. the complete business process is done as part of the online/realtime operation (eliminating the dependency on the overnight batch window).

Ranking of non-IBM mainframe builders?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ranking of non-IBM mainframe builders?
Newsgroups: alt.folklore.computers
Date: Thu, 19 Oct 2006 12:17:13 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
the 3033 started out being 168-3 wiring diagram mapped to faster chips. originally it was going to be about 20percent faster than the 168-3 3mips. there was some subsequent optimization work with 3033 shipping to customers at about 4.5 mips (50 percent faster than 168-3 3mips).

re:
https://www.garlic.com/~lynn/2006s.html#40 Ranking of non-IBM mainframe builders?

the other thing that happened in the 303x time-frame was the appearance of the 148 and then 4341s for large-scale distributed processing ... corporations buying 4341s in quantities of hundreds at a time ... and some amount of the datacenter processing leaking out onto these distributed processors (a larger number of 4341s than vax ... although 4341s also sold into a similar mid-range market segment as vax).

also, as mentioned before, the 4341 was faster/cheaper than the 3031, and clustered 4341s provided higher aggregate thruput than a 3033 at less cost (which provided for some inter-plant contention).

a few misc. recent posts
https://www.garlic.com/~lynn/2006b.html#39 another blast from the past
https://www.garlic.com/~lynn/2006i.html#41 virtual memory
https://www.garlic.com/~lynn/2006l.html#2 virtual memory
https://www.garlic.com/~lynn/2006l.html#4 Google Architecture
https://www.garlic.com/~lynn/2006l.html#17 virtual memory
https://www.garlic.com/~lynn/2006l.html#25 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
https://www.garlic.com/~lynn/2006l.html#40 virtual memory
https://www.garlic.com/~lynn/2006o.html#51 The Fate of VM - was: Re: Baby MVS???
https://www.garlic.com/~lynn/2006p.html#0 DASD Response Time (on antique 3390?)
https://www.garlic.com/~lynn/2006r.html#4 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006r.html#13 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006p.html#34 25th Anniversary of the Personal Computer
https://www.garlic.com/~lynn/2006p.html#36 25th Anniversary of the Personal Computer

then later with PCs ... there was other leakage of datacenter applications out onto PCs, first various forms of terminal emulation
https://www.garlic.com/~lynn/subnetwork.html#emulation

and then client/server. then the larger workstation/PCs in the mid-80s started also to take over the mid-range market segment (4341s, vaxs) ... as well as various kinds of distributed computing ... which was evolving into 3-tier computing and things like middleware
https://www.garlic.com/~lynn/subnetwork.html#3tier

Ranking of non-IBM mainframe builders?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ranking of non-IBM mainframe builders?
Newsgroups: alt.folklore.computers
Date: Thu, 19 Oct 2006 15:43:54 -0600
hancock4 writes:
CPU cycle time was only one factor in performance. I believe by 1980 "wallclock" or "throughput" studies were made to see how a particular mix of jobs flowed.

Simply adding more memory could help reduce virtual paging or allow frequently used CICS modules to remain resident so they wouldn't have to be reloaded over and over again.

In those years communication lines were evolving from analog to digital with higher quality and faster speed; this helped speed up online response time.

I didn't keep up with DASD, but presumably faster disks and I/O channels replaced the 3330.


re:
https://www.garlic.com/~lynn/2006s.html#40 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006s.html#41 Ranking of non-IBM mainframe builders?

168-3 used chips that avg. 4 circuits per chip .... the 3033 was using chips that were faster ... but avg. something like 40 circuits per chip. a straight-forward mapping of the 168-3 wiring diagram would only use about 10 percent of the circuits on the new chips ... and got about a 20 percent MIP gain over the 168-3 (i.e. 3mips to approx 3.6mips). part of the additional performance gain to 4.5mips was optimizing critical logic to do more stuff on the same chip (more operations performed on the same chip before going off-chip).

part of the issue with 4341 vis-a-vis 3033 ... was you could get a cluster of six 4341s ... each about 1mip ... and each with 16mbytes ... for an aggregate of 96mbytes total.

the 3033 was 24bit addressing ... and initially limited to 16mbytes (both real and virtual) ... putting the 3033 at a significant disadvantage vis-a-vis a 4341 cluster. eventually there was a hack done for the 3033 that allowed 32mbytes of real storage (even tho you only had 24bit/16mbyte addressing) ... post going into detail on the 32mbyte hack for 3033:
https://www.garlic.com/~lynn/2006b.html#34 Multiple address spaces

note however, MVS was starting to have a different severe problem. MVS started out with 16mbyte virtual address space ... with 8mbytes mapped to the MVS kernel and supposedly 8mbytes left for application. However, the OS/360 programming convention made extensive/heavy use of pointer-passing APIs. In the OS/360 real memory environment and the VS2/SVS environment, everything resided in the same address space. In the transition from SVS (single virtual storage) to MVS (multiple virtual storage) ... several non-kernel system subservices were placed in their own virtual address space. An example is JES2 ... (the HASP descendant) which had difficulty when it was passed an address pointer (from an application virtual address space) and needed to directly access the data in the original virtual address space. for a little drift, numerous collected hasp/jes2 posts
https://www.garlic.com/~lynn/submain.html#hasp

the MVS hack for this started out reserving part of the "application" 8mbytes as common or shared storage in every application address space (where data could be moved and then accessed by pointer parameter from any address space) ... i.e. the "common segment". The issue for large installations with several non-kernel "subsystems" was that the common segment area frequently had grown to 5mbytes ... leaving only 3mbytes for actual application use. to try and get around part of this problem, "dual-address" and "cross-memory" services were introduced for the 3033 ... semi-privileged instructions that allowed subsystems to use pointers to access memory in different virtual address spaces (a loose modern analogy is sketched after the following post references). some recent posts discussing dual-address space
https://www.garlic.com/~lynn/2006.html#39 What happens if CR's are directly changed?
https://www.garlic.com/~lynn/2006b.html#25 Multiple address spaces
https://www.garlic.com/~lynn/2006b.html#28 Multiple address spaces
https://www.garlic.com/~lynn/2006r.html#32 MIPS architecture question - Supervisor mode & who is using it?
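for a loose modern analogy (my sketch, not from the above posts): a pointer is only meaningful to code that has the referenced storage mapped, so pointer-passing APIs across address spaces need an area that is mapped in common ... which is what the common segment provided. a minimal POSIX C sketch, assuming mmap/fork shared-mapping semantics:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* an anonymous shared mapping plays the role of the "common
       segment": present in both address spaces, so a pointer into
       it means the same thing on both sides */
    char *common = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (common == MAP_FAILED)
        return 1;

    /* the "application" moves its parameter data into the common
       area; a pointer into its own private storage would be useless
       to the other address space */
    strcpy(common, "parameter data moved to common storage");

    if (fork() == 0) {              /* child plays the "subsystem" */
        printf("subsystem reads via pointer: %s\n", common);
        _exit(0);
    }
    wait(NULL);
    munmap(common, 4096);
    return 0;
}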

late 70s most installations were still heavily using 3330-11 (200mbyte disks). 3350s had been introduced ... but i'm not sure there was extremely heavy migration from 3330-11 to 3350s. the new 3380 drives weren't announced until 11jun80 with first customer ship 16oct81
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3380.html

from above:
The new film head technology combined with a more compact design enabled IBM engineers to reduce power consumption on the 3380 by up to 70 percent, floor space by up to 65 percent and heat generation by up to 75 percent when compared to equivalent storage in IBM 3350 DASDs. Read and write heads, disks and two actuators were integrated into two head/disk assemblies to improve reliability and efficiency.

... snip ...

note that the above article also talks about being able to attach to 303x processors (actually 303x "channel director") that supported 3mbyte/sec data streaming. however, it then turns around and says that the "speed matching buffer" allowed attaching to 1.5mbyte/sec channels on 303x processors, as well as 158s & 168s.

for a little drift, some discussion of whether 2305-1 operated at 3mbyte/sec (w/o data streaming).
https://www.garlic.com/~lynn/2006s.html#32 Why magnetic drums was/are worse than disks ?
https://www.garlic.com/~lynn/2006s.html#33 Why magnetic drums was/are worse than disks ?

however, as per my frequent posts ... disk/dasd technology was increasing in performance at a lower rate than other system components; disk/dasd relative system performance has been significantly declining ... to compensate there was increasing use of electronic memory for various kinds of caching (to minimize disk use). some recent posts on the subject:
https://www.garlic.com/~lynn/2006e.html#45 using 3390 mod-9s
https://www.garlic.com/~lynn/2006r.html#31 50th Anniversary of invention of disk drives

and for other drift ... other posts on thin film heads:
https://www.garlic.com/~lynn/2002h.html#28 backup hard drive
https://www.garlic.com/~lynn/2002o.html#74 They Got Mail: Not-So-Fond Farewells
https://www.garlic.com/~lynn/2004b.html#15 harddisk in space
https://www.garlic.com/~lynn/2004o.html#25 CKD Disks?

one of the stories involves using the 3033 in bldg. 15 (disk engineering product test lab) for some of the modeling of the thin film heads ... including "air bearing" simulation (getting much better turn-around than the simulation work had been getting from the 370/195 in bldg. 28). misc. other posts mentioning work with bldg. 14 (disk engineering) and bldg. 15 (disk product test)
https://www.garlic.com/~lynn/subtopic.html#disk

another extract from the referenced 3380 history article:
First customer shipments for all 3380 models were initially scheduled to begin in the first quarter of 1981. In March of that year, however, IBM reported that initial deliveries would be delayed because of a technical problem identified during product testing prior to customer shipment. Six months later, the problem was corrected and the 3380 was operating in IBM laboratories and customer test locations with outstanding performance and excellent overall reliability. The first customer shipment of a 3380 from the IBM General Products Division plant in San Jose, Calif., took place on October 16, 1981.

... snip ...

I've told some stories in the past about being involved in some problems with 3880 disk controller prior to first customer ship ... which were different than the problem referred to regarding the 3380 disk drives.

late 70s ... was still pretty much business as usual for terminal communication ... some transition from "BSC" 9600 baud (or 4800 baud) to "SDLC" 9600 baud ... however it wasn't that big a change.

In the 1980 time-frame, we got involved with STL (bldg. 90; compilers, databases, misc. other stuff) exceeding bldg. capacity ... and they were going to move 300 people from the IMS (database) group to an offsite (leased) building ... with remote access back to the datacenter in bldg. 90. They found the available SDLC 9600 baud remote terminal solution totally unacceptable (compared to what they had been used to with local channel connectivity and "sub-second response" ... actually quarter second or less). we put together a channel extender solution (using HYPERchannel) over a T1 (1.5mbit/sec) link ... that provided them with the responsiveness they had been used to. misc. collected postings mentioning HSDT (high-speed data transport) project
https://www.garlic.com/~lynn/subnetwork.html#hsdt

so another story i've repeated a number of times ... that occurred in the mid-80s (five years or so after the project to remotely relocate 300 people from the STL IMS group)
https://www.garlic.com/~lynn/2006e.html#36 The Pankian Metaphor

the friday before I was to leave for business trip to the far east (contracting for some custom HSDT hardware) ... somebody in the communication group announced a new online discussion group on communication ... the announcement included the following definitions


low-speed               <9.6kbits
medium-speed            19.2kbits
high-speed              56kbits
very high-speed         1.5mbits

the following monday on the wall of a conference room in the far east

low-speed               <20mbits
medium-speed            100mbits
high-speed              200-300mbits
very high-speed         >600mbits

... other discussion about 3272/3277 controller/terminal from the early/mid 70s having better response than the 3274/3278 controller/terminal introduced in the late 70s:
https://www.garlic.com/~lynn/2001m.html#19 3270 protocol

... i.e. i was doing systems in the late 70s with .11 second system response for the 90th percentile ... the following compares 3272/3277 and 3274/3278 hardware response, combined with typical TSO 1 second system response, typical CMS quarter second response, and the .11 second response of the systems I was doing


             hardware     TSO 1sec.    CMS .25sec.    CMS .11sec.
3272/3277      .086         1.086          .336           .196
3274/3278      .530         1.530          .780           .640

(total observed response = terminal/controller hardware overhead + system response)

... the above is for direct "channel" attached controllers ... as opposed to remote controllers connected with 9600 baud links (9600 baud links were significantly worse).

Ranking of non-IBM mainframe builders?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ranking of non-IBM mainframe builders?
Newsgroups: alt.folklore.computers
Date: Thu, 19 Oct 2006 16:07:53 -0600
hancock4 writes:
Simply adding more memory could help reduce virtual paging or allow frequently used CICS modules to remain resident so they wouldn't have to be reloaded over and over again.

re:
https://www.garlic.com/~lynn/2006s.html#40 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006s.html#41 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?

and for some totally different drift. i got to play with a lot of stuff at the univ. as an undergraduate. three guys from the cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech

had come out to install cp67 (virtual machine operating system) the last week in jan68 ... and as a result I got to rewrite a lot of the kernel ... for one thing or another.

however, there was another project at the univ. library which had gotten a digital library grant from ONR ... and was selected to be one of the CICS betatest sites. CICS had originally been developed at a customer installation ... and then was picked up to ship as a product. Before general availability, some number of sites were selected to betatest the product (the univ. being one).

i got involved in doing some of the CICS support and debugging. Turned out that pieces of CICS had been done specific to the original customer environment ... which ran into various kinds of problems and failures trying to move it to totally different environments. One bug I remember shooting had to do with CICS having been tested with one flavor of BDAM ... but the library was using another flavor of BDAM ... precipitating a CICS problem processing OPEN.

misc. recent posts mentioning cics
https://www.garlic.com/~lynn/2006i.html#0 The Pankian Metaphor
https://www.garlic.com/~lynn/2006i.html#12 Mainframe near history (IBM 3380 and 3880 docs)
https://www.garlic.com/~lynn/2006l.html#31 Google Architecture
https://www.garlic.com/~lynn/2006o.html#29 oops, cics

collected posts mentioning bdam and/or cics
https://www.garlic.com/~lynn/submain.html#bdam

Universal constants

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Universal constants
Newsgroups: alt.folklore.computers
Date: Thu, 19 Oct 2006 16:48:25 -0600
wclodius writes:
I remember being taught, several decades ago, about Japanese intervention in Manchuria then China proper, with even briefer references to the rape of Nanking, but it was glossed over compared to Europe. I don't remember being taught much, if anything, about our pre Pearl Harbor relations with Japan, or the Japanese takeover of Indochina and Indonesia, except pointers to maps saying this is what they had before Pearl Harbor, and this is what they had by Midway.

my wife's dad was in an engineering combat group ... and towards the end, he was a number of times the ranking officer going into enemy territory and acquired various kinds of officer daggers as part of surrenders.

after the end of hostilities ... he was posted to nanking as an adviser and took along his family. my wife has stories about being picked up in the american compound and being driven to the british compound to play with some of the diplomats' daughters there. she also has some stories about kids sneaking out of the compound and wandering around nanking.

my wife's mother relates a story about the family being evacuated in an army cargo plane on three hrs notice (when the city was surrounded) to the airfield at tsingtao (arriving after dark, they had a collection of vehicles lighting the airfield with headlights). we have an acquaintance who tells of being evacuated later (in an army cargo plane) from nanking when the airfield was surrounded.

they then lived on the repose in tsingtao harbor for a couple months before being moved state-side. few past posts mentioning nanking, tsingtao, and/or repose:
https://www.garlic.com/~lynn/2004e.html#19 Message To America's Students: The War, The Draft, Your Future
https://www.garlic.com/~lynn/2005r.html#3 The 8008
https://www.garlic.com/~lynn/2006b.html#27 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006b.html#33 IBM 610 workstation computer

I recently scanned quite a few letters my wife's mother had written her mother from the far east.

Why magnetic drums was/are worse than disks ?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why magnetic drums was/are worse than disks ?
Newsgroups: comp.arch
Date: Thu, 19 Oct 2006 19:24:24 -0600
"Dennis Ritchie" <dmr@bell-labs.com> writes:
Some models of the Eagle had one moving head per surface, plus a multi-head, non-movable region.

re:
https://www.garlic.com/~lynn/2006s.html#23 Why magnetic drums was/are worse than disks ?
https://www.garlic.com/~lynn/2006s.html#30 Why magnetic drums was/are worse than disks ?
https://www.garlic.com/~lynn/2006s.html#31 Why magnetic drums was/are worse than disks ?
https://www.garlic.com/~lynn/2006s.html#32 Why magnetic drums was/are worse than disks ?
https://www.garlic.com/~lynn/2006s.html#33 Why magnetic drums was/are worse than disks ?

3350 was normal moving arm disk ... but there was an option where you could get a few cylinders with fixed-heads.
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3350.html

description from above:
The 3350 Models A2F and B2F provided 1,144,140 bytes of zero seek time storage per spindle (2,288,280 per unit) when operating in 3350 native mode.

... snip ...

there was an issue that the 3350 only had a single device address. as a result, if you were doing an arm motion operation ... and a request arrived for the fixed head area ... it would be stalled until the arm motion operation completed.

by comparison, the fixed head disk 2305s had eight separate logical device addresses ("exposures")
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_2305.html

the effect of these "exposures" was that eight separate i/o requests could be "queued" (one per exposure) ... and then the controller would choose to perform the operations in optimal order (based on things like rotational position and records rotating under the heads).
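a toy sketch (mine, all numbers invented, transfer time ignored) of what that queuing buys: with several requests outstanding, the controller can serve them in rotational-position order within a single revolution, instead of paying an average half-revolution of latency per request in arrival order:

#include <stdio.h>
#include <stdlib.h>

#define NREQ 5

/* angular position (fraction of a revolution) of each queued record */
static double want[NREQ] = { 0.7, 0.1, 0.9, 0.3, 0.5 };

static int cmp(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    double cur = 0.4;       /* current rotational position */
    double delay[NREQ];

    /* rotational delay until each record comes under the heads */
    for (int i = 0; i < NREQ; i++) {
        double d = want[i] - cur;
        delay[i] = d < 0 ? d + 1.0 : d;
    }

    /* with eight exposures the controller sees all the requests and
       can serve them as they come around (sorted by delay); with a
       single exposure it would see them one at a time, in arrival
       order */
    qsort(delay, NREQ, sizeof delay[0], cmp);
    for (int i = 0; i < NREQ; i++)
        printf("request served at %.2f revolutions\n", delay[i]);
    return 0;
}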

i tried to get a hardware modification to support similar multiple exposure for 3350 fixed-head area ... allowing transfers to/from the fixed-head area (primarily for paging operations) overlapped with any arm motion operation.

this got me into some corporate politics ... there was another group that thot it was going to be producing a product targeted for virtual memory paging ... and took exception to my trying to improve the 3350 fixed-head option as possibly being in competition with what they were doing. that got my 3350 multiple exposure effort shelved ... however, their product (code name "vulcan") eventually was killed before it ever shipped

also from the above 3350 page:


Average seek time (ms):         25
Average rotational delay (ms):  8.4
Data Rate (KB/sec.):            1198
Bytes per track:                19,069
Tracks per logical cylinder:    30
Logical cylinders per drive:    555
Capacity per drive (MB) approx. 317.5

Features

• Rotational position sensing, which permitted improved block
  multiplexer channel utilization.

• Error correction of single data error bursts of up to four bits.

• Command retry, which enabled the storage control to recover from
  certain subsystem errors without recourse to system error
  recovery procedures.

• Read only switch, gave increased data security by providing for
  each drive the means to protect data from being overwritten or
  erased.

... snip ...

Why these original FORTRAN quirks?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why these original FORTRAN quirks?
Newsgroups: alt.folklore.computers
Date: Fri, 20 Oct 2006 07:39:13 -0600
Brian Inglis <Brian.Inglis@SystematicSW.Invalid> writes:
There is a very unsurprising correspondence between COBOL printer carriage control statements and IBM printer CCW command code modifier bits; IBM spoolers supported ASA carriage control as an option, often RECFM=FBA and FBM: converted ASA to appropriate machine carriage control (printer CCW command codes).

See Lynn's HTML conversion of the IOS3270 Green Card under CCWs/Write and Printer/Command Codes.

Only ever used machine carriage control in printer CCWs from product exits in jobs running under POWER spooling on DOS/VSE.

Exits were the only places I ever used CCWs directly: you couldn't rely on any logical I/O devices being available or datasets being open or available so had to do direct device I/O with CCWs (and get it right every time!)


ref:
https://www.garlic.com/~lynn/2006s.html#39 Why these original FORTRAN quirks?

note that while both RECFM=M and RECFM=A stripped the first byte for printer "carriage control" ... and RECFM=M was directly the equivalent channel program CCW "op code" ... the semantics were different. RECFM=M moved the carriage after writing the data, RECFM=A moved the carriage before writing the data. The conversion of RECFM=A to the equivalent channel program CCW "op code" was not only a code conversion ... but also a semantics conversion.

one might argue that RECFM=A semantics would make it easier to process in hardware as stream of data ... since the control information was at the start of the line ... any hardware implementation could slurp the leading control byte and immediately perform the operation before printing the rest of the line (while RECFM=M resulted in the operation being performed after the line was written).
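for illustration (my sketch, assuming the usual 1403-style write op codes: 01 no space, 09 space 1, 11 space 2, 19 space 3, 89 skip to channel 1): the code conversion is just a table lookup, while the semantics conversion shifts the action by one record ... since the ANSI byte acts before its own line and the op code acts after the line it is attached to:

#include <stdio.h>

/* 1403-style write op codes: carriage action AFTER printing */
#define WR_NOSPACE 0x01
#define WR_SPACE1  0x09
#define WR_SPACE2  0x11
#define WR_SPACE3  0x19
#define WR_SKIP1   0x89

/* code conversion: ANSI character -> op code for the same action */
static int ansi_to_op(char c)
{
    switch (c) {
    case '+': return WR_NOSPACE;    /* suppress space (overprint) */
    case ' ': return WR_SPACE1;
    case '0': return WR_SPACE2;
    case '-': return WR_SPACE3;
    case '1': return WR_SKIP1;      /* skip to channel 1 */
    default:  return WR_SPACE1;
    }
}

int main(void)
{
    /* semantics conversion: record N's write op comes from record
       N+1's ANSI byte (the first record's own ANSI action would need
       one leading immediate control CCW, omitted here) */
    const char *recs[] = { "1heading", " detail one", "0detail two" };
    int n = 3;
    for (int i = 0; i < n; i++) {
        int op = (i + 1 < n) ? ansi_to_op(recs[i + 1][0]) : WR_SPACE1;
        printf("op=%02X data=\"%s\"\n", op, recs[i] + 1);
    }
    return 0;
}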

5692 and 6SN7 vs 5963's for computer use

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 5692 and 6SN7 vs 5963's for computer use
Newsgroups: alt.folklore.computers
Date: Fri, 20 Oct 2006 10:11:33 -0600
"Ancient_Hacker" <grg2@comcast.net> writes:
Are you sure about this? The actual circuitry, with discrete transistors, of oscillators, counters, gates, and bus drivers shouldnt have taken more than a few square feet of PC board. Maybe a 5-inch high rack-mount PCB cage with a dozen small PC boards..

Now maybe it came mounted in a whole separate rack, but a whole rack filled with circuitry for this seems a bit improbable.


CTSS/7094 system may have had some sort of external clock box ... because it appears as if something similar was carried forward for cp67 (on the science center 360/67) ... and even implemented as a virtual device ... for a long time, cp67 virtual machine configurations tended to have the pseudo timer device at address x'0FF'.

...
GH20-0856-0

Control Program-67/Cambridge Monitor System (CP-67/CMS) Version 3 Program Number 360D-05.2.005 CP-67 Operator's Guide

CP-67 is multiaccess system which allows multiple System/360 operating systems to run under it concurrently in a time-shared mode. These operating systems run in the same manner as they do on a dedicated System/360 computer. Some systems that have run under CP-67 are CMS, CMS Batch, DOS, APL\360, RAX, and CP-67.


... and on pg. 52


Figure 4. Example of a Virtual CP-67 Directory

...

OPERATOR USER   CSC     ,A6230    ,A,5
         CORE   256K
         UNIT   009,1052
         UNIT   00C,2540R
         UNIT   00D,2540P
         UNIT   00E,1403
         UNIT   0FF,TIMR                 <---------
         UNIT   190,2314,CMS190,000,053,RDONLY
         UNIT   191,2314,SRG001,020,022
         UNIT   19A,2314,19ASYS,000,053,RDONLY
*EOU*

... snip ...

.... trivia question ... what was special about 2314 minidisk with 54 cylinders?

misc. disk storage reference
https://en.wikipedia.org/wiki/Early_IBM_disk_storage

although above fails to mention 3350s
https://www.garlic.com/~lynn/2006s.html#45 Why magnetic drums was/are worse than disks ?

... above does mention the sales of disk operation to Hitachi in 2002
https://www.garlic.com/~lynn/2006r.html#14 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#15 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#20 50th Anniversary of invention of disk drives

....

in any case, back to trivia ... hint ... 2314 capacity was 29.2mbytes (raw, unformatted) in 200 cylinders ... 54 cylinders would be 7.9mbytes (raw, unformatted). is there a device listed in the wikipedia reference that is approx. 7-something mbytes (raw, unformatted) in capacity?

5692 and 6SN7 vs 5963's for computer use

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 5692 and 6SN7 vs 5963's for computer use
Newsgroups: alt.folklore.computers
Date: Fri, 20 Oct 2006 10:59:13 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
in any case, back to trivia ... *hint* ... 2314 capacity was 29.2mbytes (raw, unformatted) in 200 cylinders ... 54 cylinders would be 7.9mbytes (raw, unformatted). is there a device listed in the wikipedia reference that is approx. 7-something mbytes (raw, unformatted) in capacity?

re:
https://www.garlic.com/~lynn/2006s.html#47 5692 and 6SN7 vs 5963's for computer use

and for other 2314 drift at
https://en.wikipedia.org/wiki/Early_IBM_disk_storage

from above:
The original Model 1 consisted of the 2314 control unit, a 2312 single drive module, and two 2313 four drive modules for a total of 9 disk drives. Only eight drives of the nine were available to the user at any one time. The ninth drive was there for a spare for the user and could also be worked on 'offline' by a Field Engineer while the other drives were in use by the customer. Each of the nine drives were mounted in individual drawers that were unlatched and pulled out to access the Disk Pack. Because of their appearance they picked up the nickname of 'Pizza Ovens'

...

so this has a picture of the 9 disk drive cabinet
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_PH2314A.html

the 2314 disk packs were removable and there was frequent swapping of disk packs. the 9th drive wasn't so much a spare ... but was used for staging 2314 disk packs being mounted.

the operator would take the disk pack to be mounted and place it on top of the cabinet and open the empty/idle drive drawer, then screw the disk pack into the drive, close the drawer and hit the power-up switch.

another picture of 2314
http://www.staff.ncl.ac.uk/roger.broughton/DASD/200426.htm

the above shows a drawer being opened.

when the system was ready to switch packs ... the operator then could just pop out the address plug (from the drive with the pack being removed) and pop it into the drive that had the pack just loaded. then the other drive could be powered down, and its pack removed.

above each pair of drive drawers can be seen the panel with stop/start switch, ready light, and address plug for each drawer. the round address plug was a couple inches long and would be removed from one drive and plugged into any other drive panel.

(similar description is given at the above URL).

and some comparison between 2314s, 3330, 2301, 2305-1 and 2305-2
http://www.research.ibm.com/journal/rd/161/ibmrd1601C.pdf

Why these original FORTRAN quirks?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why these original FORTRAN quirks?
Newsgroups: alt.folklore.computers
Date: Fri, 20 Oct 2006 12:54:35 -0600
"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
It sounds good in theory. But in actual fact the hardware was built to print first, then space, from a single command. The spacing took place without any further program intervention - your program could be setting up the next line while the printer was spacing. To implement spacing before printing, your program would have to issue two commands: one to do the spacing, then one to print the line. This was not only inefficient, but slower. The Univac 9300's bar printer was an extreme case: since a print cycle could start only when the bar was at one end of its travel, space-then-print would cut printing down to half speed. We very quickly learned not to do this.

A clever spooler (or line printer driver, for that matter) could convert space-then-print to print-then-space to recover lost speed.


ref:
https://www.garlic.com/~lynn/2006s.html#46 Why these original FORTRAN quirks?

so the earlier post referenced in the above
https://www.garlic.com/~lynn/2006s.html#39 Why these original FORTRAN quirks?

discusses the conversion technique in some detail ... not only the "brain-dead" scenario with separate CCWs ... but also collapsing the paired CCWs into the inverse ... aka, from separate print/control operations to combined control after print operation.
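a sketch of that collapsing (mine; it assumes the usual 1403 command codes, where the immediate carriage op for a given action is the corresponding write op plus 2, e.g. 09/0B space 1, 11/13 space 2, 89/8B skip to channel 1):

#include <stdio.h>

/* write ops end in binary 01, immediate carriage ops in binary 11 */
struct ccw { unsigned char op; const char *data; };

static int is_write(unsigned char op) { return (op & 0x03) == 0x01; }
static int is_ctl(unsigned char op)   { return (op & 0x03) == 0x03; }

/* fold space-then-print into print-then-space: when an immediate
   carriage op follows a write-no-space, attach its action to that
   previous write instead */
static int collapse(const struct ccw *in, int n, struct ccw *out)
{
    int m = 0;
    for (int i = 0; i < n; i++) {
        if (is_ctl(in[i].op) && m > 0
            && is_write(out[m - 1].op) && out[m - 1].op == 0x01)
            out[m - 1].op = in[i].op - 2;  /* write, then carriage action */
        else
            out[m++] = in[i];
    }
    return m;
}

int main(void)
{
    /* "brain-dead" conversion output: immediate op before each write */
    const struct ccw prog[] = {
        { 0x8B, 0 },          /* skip to channel 1, immediate */
        { 0x01, "line 1" },   /* write, no space */
        { 0x0B, 0 },          /* space 1 line, immediate */
        { 0x01, "line 2" },
    };
    struct ccw out[4];
    int m = collapse(prog, 4, out);
    for (int i = 0; i < m; i++)
        printf("op=%02X data=%s\n", out[i].op,
               out[i].data ? out[i].data : "-");
    return 0;
}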

Ranking of non-IBM mainframe builders?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ranking of non-IBM mainframe builders?
Newsgroups: alt.folklore.computers
Date: Fri, 20 Oct 2006 13:19:31 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
the friday before I was to leave for business trip to the far east (contracting for some custom HSDT hardware) ... somebody in the communication group announced a new online discussion group on communication ... the announcement included the following definitions

low-speed               <9.6kbits
medium-speed            19.2kbits
high-speed              56kbits
very high-speed         1.5mbits

the following monday on the wall of a conference room in the far east

low-speed               <20mbits
medium-speed            100mbits
high-speed              200-300mbits
very high-speed         >600mbits

re:
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?

following email was after the NSFNET backbone program announcement ... a copy can be found here (28mar86):
https://www.garlic.com/~lynn/2002k.html#12

RFP awarded 24nov87 and RFP kickoff meeting
https://www.garlic.com/~lynn/2000e.html#email880104 Kickoff Meeting 7Jan1988

where NSFNET backbone basically became the operational precursor to modern internetworking

Erich Bloch was director of the National Science Foundation for much of the 80s ... and Gordon Bell was doing a stint at NSF (for Erich?)

from long ago and far away ...
Date: 15 May 1987, 14:04:05 PDT
From: wheeler

Erich Bloch & Gorden Bell at NSF are pushing strongly for the HSDT technology ... either directly from IBM or getting IBM to release it to some other corporation (possibly even a start-up). A new wrinkle turned up this week. NSF is setting up a new program to involve & fund industry in leading edge research. IBM has said that they wish to participate activiely in the new program. Bloch & Bell are pushing the appropriate IBM people that HSDT may be one of the few things that IBM could do to get immediately started in this area. As a result there may be another round of IBM reviews (except it is not the first time for any of the reviewERS or the reviewEES).


... snip ... top of post, old email index, NSFNET email

note that it was external visibility like this that got HSDT into political problems with the communication group ... in part, because there was such a huge gap between what they were doing and what we were doing.
https://www.garlic.com/~lynn/subnetwork.html#hsdt

and from a year earlier ... this was in the time-frame of the release of the NSFNET backbone program announcement (referencing letter from NSF to three top corporate executives)
Date: 04/17/86 09:09:01
From: wheeler

apparently Erich Bloch finally dropped his bomb shell. Apparently he did it with a letter to xxxxx, xxxxx and xxxxx (I haven't seen it yet). It supposedly goes on at great length about HSDT being the most leading edge & best technology anywhere in existance and that the NSF and the country just has to have it ... then for a closing sentance it says something about ***REDACTED***


... snip ... top of post, old email index, NSFNET email

misc. other postings mentioning the communication group's new discussion group announcement:
https://www.garlic.com/~lynn/94.html#33b High Speed Data Transport (HSDT)
https://www.garlic.com/~lynn/2000b.html#69 oddly portable machines
https://www.garlic.com/~lynn/2000e.html#45 IBM's Workplace OS (Was: .. Pink)
https://www.garlic.com/~lynn/2003m.html#59 SR 15,15
https://www.garlic.com/~lynn/2004g.html#12 network history
https://www.garlic.com/~lynn/2005j.html#58 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005j.html#59 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005n.html#25 Data communications over telegraph circuits
https://www.garlic.com/~lynn/2005r.html#9 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2006e.html#36 The Pankian Metaphor
https://www.garlic.com/~lynn/2006l.html#4 Google Architecture

Ranking of non-IBM mainframe builders?

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ranking of non-IBM mainframe builders?
Newsgroups: alt.folklore.computers
Date: Fri, 20 Oct 2006 14:12:53 -0600
ref
https://www.garlic.com/~lynn/2006s.html#50 Ranking of non-IBM mainframe builders?
and
https://www.garlic.com/~lynn/subnetwork.html#hsdt

... oh, misc. other posts mentioning HSDT & NSFnet
https://www.garlic.com/~lynn/98.html#49 Edsger Dijkstra: the blackest week of his professional life
https://www.garlic.com/~lynn/99.html#33 why is there an "@" key?
https://www.garlic.com/~lynn/internet.htm#0 Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#37a Internet and/or ARPANET?
https://www.garlic.com/~lynn/2001d.html#42 IBM was/is: Imitation...
https://www.garlic.com/~lynn/2001h.html#44 Wired News :The Grid: The Next-Gen Internet?
https://www.garlic.com/~lynn/2001k.html#5 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2002.html#32 Buffer overflow
https://www.garlic.com/~lynn/2002f.html#5 Blade architectures
https://www.garlic.com/~lynn/2002i.html#45 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002k.html#56 Moore law
https://www.garlic.com/~lynn/2003c.html#46 difference between itanium and alpha
https://www.garlic.com/~lynn/2003d.html#59 unix
https://www.garlic.com/~lynn/2003g.html#36 netscape firebird contraversy
https://www.garlic.com/~lynn/2003j.html#1 FAST - Shame On You Caltech!!!
https://www.garlic.com/~lynn/2004g.html#12 network history
https://www.garlic.com/~lynn/2004q.html#58 CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE)
https://www.garlic.com/~lynn/2005d.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#10 Cerf and Kahn receive Turing award
https://www.garlic.com/~lynn/2005j.html#30 IBM Plugs Big Iron to the College Crowd
https://www.garlic.com/~lynn/2005j.html#58 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005l.html#16 Newsgroups (Was Another OS/390 to z/OS 1.4 migration
https://www.garlic.com/~lynn/2005n.html#28 Data communications over telegraph circuits
https://www.garlic.com/~lynn/2005p.html#16 DUMP Datasets and SMS
https://www.garlic.com/~lynn/2005q.html#6 What are the latest topic in TCP/IP
https://www.garlic.com/~lynn/2005s.html#28 MVCIN instruction
https://www.garlic.com/~lynn/2005u.html#53 OSI model and an interview
https://www.garlic.com/~lynn/2006e.html#36 The Pankian Metaphor
https://www.garlic.com/~lynn/2006f.html#12 Barbaras (mini-)rant
https://www.garlic.com/~lynn/2006g.html#18 TOD Clock the same as the BIOS clock in PCs?
https://www.garlic.com/~lynn/2006i.html#21 blast from the past on reliable communication
https://www.garlic.com/~lynn/2006j.html#34 Arpa address
https://www.garlic.com/~lynn/2006r.html#6 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006s.html#20 real core

5692 and 6SN7 vs 5963's for computer use

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 5692 and 6SN7 vs 5963's for computer use
Newsgroups: alt.folklore.computers
Date: Sat, 21 Oct 2006 08:28:23 -0600
"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
2311? (200 cylinders, 10 tracks per cylinder, 3625 bytes per track, 7.25 megabytes per pack)

trivia question
https://www.garlic.com/~lynn/2006s.html#47 5692 and 6SN7 vs 5963's for computer use
https://www.garlic.com/~lynn/2006s.html#48 5692 and 6SN7 vs 5963's for computer use

correct ... there was some round-off in the way cms formatted the tracks with 800-byte blocks (the number of 800-byte blocks you could get on a 2311 track vis-a-vis a 2314 track)
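the arithmetic, as a quick sanity check (the 2314 figures of 20 tracks/cylinder and 7294 bytes/track are the usual numbers, stated here as assumptions; the 2311 figures are from the quoted post):

#include <stdio.h>

int main(void)
{
    /* raw (unformatted) capacities */
    long b2311 = 3625L * 10 * 200;   /* full 2311 pack */
    long b2314 = 7294L * 20 * 200;   /* full 2314 pack */
    long b54   = 7294L * 20 * 54;    /* 54-cylinder 2314 minidisk */

    printf("2311 pack:            %.2f mbytes\n", b2311 / 1e6);  /* 7.25  */
    printf("2314 pack:            %.2f mbytes\n", b2314 / 1e6);  /* 29.18 */
    printf("54-cyl 2314 minidisk: %.2f mbytes\n", b54 / 1e6);    /* 7.88  */
    return 0;
}

i.e. a 54-cylinder 2314 minidisk comes out within round-off of a full 2311 pack.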

Is the teaching of non-reentrant HLASM coding practices ever defensible?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is the teaching of non-reentrant HLASM coding practices ever defensible?
Newsgroups: alt.folklore.computers, bit.listserv.ibm-main
Date: Sat, 21 Oct 2006 09:16:47 -0600
jbaker314@COMPORIUM.NET (John P Baker) writes:
Reentrancy may be preferred, but it is not always reasonable or even possible. Each situation must be evaluated on its own merits.

so possibly not just simple reentrancy ... but also thread/multitasking *safe* ... including use of compare&swap semantics

slightly related posts from a.f.c.
https://www.garlic.com/~lynn/2006s.html#21 Very slow booting and running and brain-dead OS's?
https://www.garlic.com/~lynn/2006s.html#40 Ranking of non-IBM mainframe builders?

including needing a new programming paradigm to leverage parallel thruput on the increasingly multiprocessor machines.
https://www.garlic.com/~lynn/2006s.html#19 Very slow booting and running and brain-dead OS's?

as i've mentioned before, charlie had been doing a lot of work on multiprocessor fine-grain locking on cp67 kernel at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

and invented the compare&swap instruction (the original mnemonic chosen because CAS are charlie's initials). the initial attempt at getting compare&swap into 370 architecture was rebuffed by the (370 architecture) redbook owners. they effectively said that the mainstream operating systems (i.e. os/360 derivatives) were doing just fine in their multiprocessor support carrying over the test&set instruction from 360 (and extremely coarse-grain locking).

the challenge was that to get the compare&swap instruction into 370 ... it needed to have uses that were non-multiprocessor specific. thus was born the compare&swap description for multithreaded/multitasking use ... originally included in the programming notes for compare&swap ... since moved to the appendix of principles of operation

A.6 Multiprogramming and Multiprocessing Examples
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/A.6?SHELF=DZ9ZBK03&DT=20040504121320

from above:
When two or more programs sharing common storage locations are being executed concurrently in a multiprogramming or multiprocessing environment, one program may, for example, set a flag bit in the common-storage area for testing by another program. It should be noted that the instructions AND (NI or NC), EXCLUSIVE OR (XI or XC), and OR (OI or OC) could be used to set flag bits in a multiprogramming environment; but the same instructions may cause program logic errors in a multiprocessing configuration where two or more CPUs can fetch, modify, and store data in the same storage locations simultaneously.

Subtopics:

• A.6.1 Example of a Program Failure Using OR Immediate
• A.6.2 Conditional Swapping Instructions (CS, CDS)
• A.6.3 Bypassing Post and Wait
• A.6.4 Lock/Unlock
• A.6.5 Free-Pool Manipulation
• A.6.6 PERFORM LOCKED OPERATION (PLO)


... snip ...
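in modern terms the A.6.1 failure and the compare&swap fix look something like the following (an illustrative C sketch using atomics and pthreads, not the 370 instructions themselves; the broken version's lost update is timing dependent, so it may take many runs to observe):

#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

atomic_uint flags;

/* BROKEN on a multiprocessor: separate load and store -- the same
   window as the OR-immediate failure described in A.6.1 */
void set_bit_broken(unsigned bit)
{
    unsigned old = atomic_load(&flags);
    atomic_store(&flags, old | bit);  /* can overwrite the other CPU's bit */
}

/* the CS technique: retry until the update applies to an unchanged value */
void set_bit_cas(unsigned bit)
{
    unsigned old = atomic_load(&flags);
    while (!atomic_compare_exchange_weak(&flags, &old, old | bit))
        ;   /* on failure 'old' is refreshed with the current value */
}

void *worker(void *arg)
{
    set_bit_cas((unsigned)(uintptr_t)arg);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, (void *)(uintptr_t)1);
    pthread_create(&b, NULL, worker, (void *)(uintptr_t)2);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("flags = %u (always 3 via compare&swap)\n", atomic_load(&flags));
    return 0;
}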

misc. collected posts related to multiprocessor support
https://www.garlic.com/~lynn/subtopic.html#smp

and some other recent threads touching on programming techniques (including mention of a PLI program that I had written in the early 70s that analyzed 360 assembler listings ... control flow, register usage, etc ... and then attempted to generate a higher level program representation)
https://www.garlic.com/~lynn/2006e.html#32 transputers again was: The demise of Commodore
https://www.garlic.com/~lynn/2006p.html#1 Greatest Software Ever Written?
https://www.garlic.com/~lynn/2006p.html#4 Greatest Software Ever Written?
https://www.garlic.com/~lynn/2006r.html#24 A Day For Surprises (Astounding Itanium Tricks)
https://www.garlic.com/~lynn/2006s.html#27 Why these original FORTRAN quirks?

Why these original FORTRAN quirks?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why these original FORTRAN quirks?
Newsgroups: alt.folklore.computers
Date: Sat, 21 Oct 2006 09:43:14 -0600
Joe Morris <jcmorris@mitre.org> writes:
Don't forget that we're talking about batched processing, on hideously expensive machines. Even if it's just "funny money" each run had a visible economic cost, which provided an incentive to reduce the number of runs wasted because of an error that could/should have been caught in desk debugging.

as well as possibly significant turn-around latency for trivial mistakes. lost time/productivity could also represent a significant business cost.

i'm glad i got exposed to cp67 timesharing and online terminals fairly early ... i first got a home terminal with online access in mar70 and have essentially had home online access continuously since then.
https://www.garlic.com/~lynn/submain.html#timeshare

Is the teaching of non-reentrant HLASM coding practices ever defensible?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is the teaching of non-reentrant HLASM coding practices ever defensible?
Newsgroups: alt.folklore.computers, bit.listserv.ibm-main
Date: Sat, 21 Oct 2006 12:08:29 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
A.6 Multiprogramming and Multiprocessing Examples
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/A.6?SHELF=DZ9ZBK03&DT=20040504121320

from above:

When two or more programs sharing common storage locations are being executed concurrently in a multiprogramming or multiprocessing environment, one program may, for example, set a flag bit in the common-storage area for testing by another program. It should be noted that the instructions AND (NI or NC), EXCLUSIVE OR (XI or XC), and OR (OI or OC) could be used to set flag bits in a multiprogramming environment; but the same instructions may cause program logic errors in a multiprocessing configuration where two or more CPUs can fetch, modify, and store data in the same storage locations simultaneously.

Subtopics:

• A.6.1 Example of a Program Failure Using OR Immediate
• A.6.2 Conditional Swapping Instructions (CS, CDS)
• A.6.3 Bypassing Post and Wait
• A.6.4 Lock/Unlock
• A.6.5 Free-Pool Manipulation
• A.6.6 PERFORM LOCKED OPERATION (PLO)
... snip ...


re:
https://www.garlic.com/~lynn/2006s.html#53 Is the teaching of non-reentrant HLASM coding practices ever defensible?

... as to having people well versed with "A.6.1" problems ... there was a rumor in the late 70s that the hardware engineers were approached by the MVS group to make the immediate instructions atomic on multiprocessor machines. MVS was moving the kernel from the old-style os/360 single global kernel/supervisor lock to higher degrees of kernel parallelism and was having a devil of a time converting all the non-atomic immediate instruction coding for parallel operation.

... having re-entrant programming supporting multiple concurrent operations ... including multiple concurrent operations in the same address space (aka threading/multitasking) ... for highly parallel operation.

Turbo C 1.5 (1987)

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Turbo C 1.5 (1987)
Newsgroups: alt.folklore.computers
Date: Sat, 21 Oct 2006 16:52:29 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
i've been looking for a way to read the 100 or so 5.25in diskettes that I still have. I've got four diskettes that say they are backup versions of various versions of turbo pascal. I've got turbo pascal installation diskettes

v2 ... copyright 1983
v3.01a ... copyright 1983
no version number, but says copyright 1987
v5.5 ... copyright 1987, 1989

i've got three different sets of turbo c installation diskettes (but they don't carry version numbers).

four diskettes copyright 1986 (black label with pink? corner symbol) ide, command line/utilities, header files/libraries/examples, libraries/examples

five diskettes copyright 1986 (black label with blue corner symbol) ide, command line/utilities, header files/libraries, libraries, examples

six diskettes copyright 1987, 1988 (yellow label)


re:
https://www.garlic.com/~lynn/2006s.html#35 Turbo C 1.5 (1987)

got my hands on a 5.25in diskette drive.

black label w/pink corner diskettes ... all files are dated 6-3-87; comes up but doesn't give a version number (also some of the help says something about how to run under dos3.2)

black label w/blue corner diskettes ... all files dated 1-25-88; comes up and says that it is C 1.5

yellow label diskettes ... all files dated 8-29-88; comes up and says that it is C 2.0

... i installed the diskette drive in a really old machine that had an empty bay with a diskette drive cable that had an edge connector ... but the hard disk had died ... i've got to do some fiddling to get a hard disk formatted and working in the machine.

3.5in diskette dos4 and dos6 boot fine on the machine ... and recognize the 1.2mb diskette drive. i've tried booting live knoppix cdrom ... it recognizes the 1st floppy but not the 2nd diskette drive.

Turbo C 1.5 (1987)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Turbo C 1.5 (1987)
Newsgroups: alt.folklore.computers
Date: Sat, 21 Oct 2006 20:44:38 -0600
re:
https://www.garlic.com/~lynn/2006s.html#56 Turbo C 1.5 (1987)

two spare hard drives i have are both bad. i used (live cdrom) knoppix to read the 360kbyte floppies ... zip the files (one zip file per floppy) and move them to another machine (knoppix doesn't recognize the enet card in the machine)

i've got numerous more (40 track, 9 sector/track) 360kbyte diskettes ... but also a lot formatted 80 track, 10 sector/track, 800kbyte ... which is a non-standard size for knoppix and it isn't able to read them correctly (the closest it has is 80 track, 11 sector/track, 880kbyte, which doesn't work). however, dos6 booted from a 3.5in diskette does read the 80 track diskettes correctly

some of the files i have at the moment. everything seemed to go ok ... except there were two read errors on the second turbo c 1.5 diskette involving grep.exe, setargv.asm, tcconfig.exe and touch.com (disk 2 is command line & utilities)


164464 Oct 21 16:53 tc1d1.zip
144158 Oct 21 16:54 tc1d2.zip
143287 Oct 21 16:55 tc1d3.zip
143108 Oct 21 16:55 tc1d4.zip
171452 Oct 21 16:50 tc15d1.zip
144032 Oct 21 16:49 tc15d2.zip
153429 Oct 21 16:50 tc15d3.zip
154775 Oct 21 16:51 tc15d4.zip
156456 Oct 21 16:52 tc15d5.zip
180397 Oct 21 17:49 tc2d1.zip
185546 Oct 21 17:49 tc2d2.zip
171704 Oct 21 17:49 tc2d3.zip
165754 Oct 21 17:49 tc2d4.zip
144501 Oct 21 17:49 tc2d5.zip
250997 Oct 21 17:49 tc2d6.zip
204976 Oct 21 17:11 tp1987d1.zip
117238 Oct 21 17:12 tp1987d2.zip
123612 Oct 21 17:13 tp1987d3.zip
119945 Oct 21 17:14 tpv301.zip

IA64 and emulator performance

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IA64 and emulator performance
Newsgroups: comp.arch
Date: Sun, 22 Oct 2006 08:15:37 -0600
jgd@cix.co.uk (John Dallman) writes:
Well, I have the impression that the price of Itanium chips is only 3-4 times the price of x86, for vastly smaller production volumes - but can;t find the Itanium prices online at present, so the difference could be bigger. If that's true, I feel reasonably confident that Intel are, overall, loosing money on it. The cutbacks in development will have ameliorated that problem somewhat, but mean that Itanium is gradually losing performance competitiveness.

One strongly doubts that HP are paying Intel /more/ than the rate at which anyone can buy Itaniums.


other than the upfront design costs ... the costs are per wafer ... modulo some stuff about the number of steps/layers. if the chips share the same process/line ... and the cost of the line is covered ... then the wafer price is relatively the same as long as you have at least a minimum-sized wafer lot run.

so a first-level approximation for comparison can be the number of chips per wafer.

Why magnetic drums was/are worse than disks ?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why magnetic drums was/are worse than disks ?
Newsgroups: comp.arch,alt.folklore.computers
Date: Sun, 22 Oct 2006 08:41:42 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
i tried to get a hardware modification to support similar multiple exposure for 3350 fixed-head area ... allowing transfers to/from the fixed-head area (primarily for paging operations) overlapped with any arm motion operation.

this got me into some corporate politics ... there was another group that thot it was going to be producing a product targeted at virtual memory paging ... and took exception to my trying to improve the 3350 fixed-head option as possibly being in competition with what they were doing. that got my 3350 multiple exposure effort shelved ... however, their product (code name "vulcan") eventually was killed before it was ever shipped


re:
https://www.garlic.com/~lynn/2006s.html#45 Why magnetic drums was/are worse than disks ?

when i was an undergraduate ... I had rewritten large parts of the cp67 kernel ... including much of the paging infrastructure (page replacement, i/o scheduling, allocation). this included being able to "move" active pages from lower performance to higher performance devices ("migrate up"). some of this carried forward in the morph to vm370.

later for vm370, i released the vm370 "resource manager" as a separate product. while this was primarily focused on resource scheduling, it also had a lot of structural changes (that were in part multiprocessor oriented), a bunch of reliability stuff (including a new internal kernel serialization mechanism that eliminated all known causes of zombie/hung processes) ... and ... "page migration" ... which moved inactive pages from high performance devices to lower performance devices.

when i was trying to get the 3350 hardware change to support multiple exposures, i also redid the internal control block structure for page allocation on disk ... from being purely device oriented to arbitrary device sub-areas ... with any organization. the default structure was all paging areas on a device ... following the previous purely device "speed" organization. however, it also allowed organizing the 3350 fixed head area on an equivalent round-robin level with other fixed-head areas (like 2305 fixed-head disks). the combination of the 3350 hardware change for multiple exposures, the various page migration pieces, and the redo of the allocation control block structure (allowing arbitrary storage allocation policies) ... made the 3350 fixed head area significantly more useful.

a large installation might have three 2305-2 for paging @12mbyte ... giving a total fixed-head paging allocation of 36mbytes.

a large 3350 installation might have four to six strings of 3350s (with eight 3350 drives per string ... and each string on a different i/o channel). with 1mbyte per 3350 fixed-head area ... that would yield 8mbytes fixed-head paging per string ... or 32mbytes to 48mbytes total fixed-head for a large 3350 installation of four to six 3350 strings.

misc. postings mentioning resource manager, scheduling, etc
https://www.garlic.com/~lynn/subtopic.html#fairshare
and postings mentioning page replacement
https://www.garlic.com/~lynn/subtopic.html#wsclock

one of the other characteristics of the resource manager was that it was the guinea pig for charging for kernel software. the 23jun69 unbundling announcement started charging for application software; however kernel software was still free ... on the excuse that it was required for the operation of the hardware. a number of factors during the 70s contributed to the move to also start charging for kernel software ... and my resource manager was chosen to be the guinea pig ... and i had to spend a lot of time with the business people on policies and practices for kernel software pricing. misc. posts mentioning the transition from free software to starting to charge for software
https://www.garlic.com/~lynn/submain.html#unbundle

other posts in this thread
https://www.garlic.com/~lynn/2006s.html#23 Why magnetic drums was/are worse than disks ?
https://www.garlic.com/~lynn/2006s.html#30 Why magnetic drums was/are worse than disks ?
https://www.garlic.com/~lynn/2006s.html#31 Why magnetic drums was/are worse than disks ?
https://www.garlic.com/~lynn/2006s.html#32 Why magnetic drums was/are worse than disks ?
https://www.garlic.com/~lynn/2006s.html#33 Why magnetic drums was/are worse than disks ?

IA64 and emulator performance

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IA64 and emulator performance
Newsgroups: comp.arch
Date: Sun, 22 Oct 2006 08:49:18 -0600
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Assuming a very low failure rate. If the failure rate is high (as it is reported to be for the Itanium), the proportion of failures is likely to be pro rata to the size, so it is the square of that.

Whatever. I don't thing that the production costs are a big deal, in the overall scheme of that CPU.


re:
https://www.garlic.com/~lynn/2006s.html#58 IA64 and emulator performance

so i should have mentioned working chips per wafer ... however, if volume is sufficiently low, amortizing the upfront design and other ancillary costs across the chips will dominate the per-chip production costs.
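as a toy model (all numbers invented purely for illustration): per-chip cost is roughly wafer cost divided by working chips per wafer, plus upfront costs divided by total volume ... at low volume the second term swamps the first:

#include <stdio.h>

int main(void)
{
    double wafer_cost      = 5000.0;   /* per processed wafer */
    double chips_per_wafer = 200.0;
    double yield           = 0.6;      /* fraction of working chips */
    double upfront         = 500e6;    /* design + ancillary costs */

    double production = wafer_cost / (chips_per_wafer * yield);

    for (double volume = 1e5; volume <= 1e8; volume *= 10)
        printf("volume %8.0e: production %6.2f + amortized %8.2f per chip\n",
               volume, production, upfront / volume);
    return 0;
}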

Is the teaching of non-reentrant HLASM coding practices ever defensible?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is the teaching of non-reentrant HLASM coding practices ever defensible?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 22 Oct 2006 09:39:11 -0600
edjaffe@PHOENIXSOFTWARE.COM (Edward Jaffe) writes:
Write-protected subpools?! No such thing!

I mentioned the CsvRentSp252 DIAG trap earlier in this thread.. What that does is put RENT code into subpool 252, which is key zero storage. Therefore, programs running in PSW key zero can modify SP 252 storage.

To get complete protection of all RENT modules, you must use the CsvRentProtect DIAG trap. That uses PGSER PROTECT to protect the modules once they're loaded. I don't recommend that setting on systems older than z/OS V1R8 because there are several popular IBM programs, residing in SP 252, that "legitimately" update themselves and whose module names don't appear in the exception table until that release.

[Disclaimer: DIAG traps are not intended for use on production systems.]


the inverse of this is sort of one of the problems left over from the 370/165 falling behind schedule with 370 virtual memory hardware ... and picking up six months by dropping a bunch of stuff that was in the original 370 virtual memory architecture (with the "mainstream" pok operating system people stating they couldn't really see any use for it anyway). this then meant that the other 370 models (many of which had already finished their 370 virtual memory architecture implementation) had to be retrofitted to conform to what the 370/165 was implementing.

one of the things that got dropped was (read-only) segment protect. one of the nice things about having read-only protection at the segment table level was that some address spaces could have protection turned on, and other address spaces could be w/o protection ... for the same area.

in cp67, shared pages were offered for cms (and other) virtual address spaces by playing games with the storage protect keys ... and making sure that no virtual address space PSW was ever really dispatched with key zero.

with the appearance of segment protect in the 370 virtual memory architecture, cms was restructured (along with appropriate vm370 changes) to align r/o protected areas on segment boundaries. then came the 370/165 bombshell for 370 virtual memory architecture ... and the whole vm370/cms strategy had to be refitted to implement the cp67 storage protect key scheme.
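
a much-simplified model (not the actual 370 segment/page table formats) of why segment-level protection was attractive ... the protect bit lives in each address space's own segment table, so the very same shared segment can be r/o in one address space and r/w in another:

/* simplified model -- not the actual 370 segment/page table formats,
   just the idea: per-address-space protection of a shared segment */
#include <stdbool.h>

struct segment_entry {
    int  shared_id;   /* which real (shared) segment this slot maps */
    bool protect;     /* read-only in *this* address space only */
};

struct address_space {
    struct segment_entry seg[16];   /* one entry per segment */
};

/* a store is allowed only if this address space maps the segment r/w;
   another address space mapping the same shared_id may differ */
bool store_allowed(const struct address_space *as, int segno)
{
    return !as->seg[segno].protect;
}

with 360 storage keys, by contrast, protection is a property of the real page plus the PSW key in effect ... not of the address-space mapping ... which is why the cp67-style hack also had to guarantee no virtual machine PSW was ever really dispatched in key zero (key zero bypasses store protection).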

I had done a cms paged mapped filesystem originally on cp67 ... and ported it to vm370 ...
https://www.garlic.com/~lynn/submain.html#mmap

i introduced a lot of segment-based operations that could be done across different address spaces. one was that disk-resident executables could be mapped directly to segment-protected execution shared across multiple different address spaces ... and the segments could appear at arbitrary different virtual addresses in different address spaces. since a lot of cms application code was built with os/360-derived compilers ... it was heavily loaded with the os/360 relocatable address constant convention. now, this is one of the things that tss/360 had done right ... execution images on disk could be treated as strictly read-only (the image on disk and the executing image were identical) and still execute at an arbitrary virtual address (executable images contained no embedded address constants that had to be swizzled as part of loading for execution)
https://www.garlic.com/~lynn/submain.html#adcon
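
an illustrative c contrast (nothing to do with the actual os/360 load-module format): an embedded absolute address constant forces the loader to swizzle the image, while an image-relative offset needs no fixup ... so the exact disk image can execute read-only at any virtual address.

/* illustrative contrast -- hypothetical structures, not os/360 formats */
struct with_adcon  { void *entry; };         /* absolute: loader must swizzle,
                                                so disk and memory images differ */
struct with_offset { long  entry_offset; };  /* image-relative: no fixup needed */

/* with offsets, the address is computed at use time from wherever the
   read-only image happens to be mapped, so disk and memory images match
   and the same image can appear at different addresses in different
   address spaces */
void *resolve(struct with_offset *image_base)
{
    return (char *)image_base + image_base->entry_offset;
}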

this also caused some perturbation in the original relational/sql implementation (all done as a vm370-based implementation).
https://www.garlic.com/~lynn/submain.html#systemr

where some (virtual address space) processes were going to have r/w access to the data ... but the design also had applications with access to some of the same data that had to be unable to change it. this was ideally suited to the original 370 virtual memory architecture segment protection; however, the implementation then required some amount of fiddling for release as sql/ds.

for some trivia ... one of the people in the following meeting claimed to have been the primary person handling the sql/ds technology transfer from endicott back to stl for db2
https://www.garlic.com/~lynn/95.html#13

misc. past posts mentioning 370/165 hardware schedule problems implementing 370 virtual memory
https://www.garlic.com/~lynn/2004p.html#8 vm/370 smp support and shared segment protection hack
https://www.garlic.com/~lynn/2005f.html#45 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005h.html#10 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005h.html#18 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2006.html#13 VM maclib reference
https://www.garlic.com/~lynn/2006i.html#9 Hadware Support for Protection Bits: what does it really mean?
https://www.garlic.com/~lynn/2006j.html#5 virtual memory
https://www.garlic.com/~lynn/2006l.html#22 Virtual Virtualizers
https://www.garlic.com/~lynn/2006m.html#26 Mainframe Limericks

Microsoft to design its own CPUs - Next Xbox In Development

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft to design its own CPUs - Next Xbox In Development
Newsgroups: comp.sys.super,comp.arch,comp.sys.intel,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.chips
Date: Sun, 22 Oct 2006 09:53:24 -0600
"Del Cecchi" <delcecchiofthenorth@gmail.com> writes:
Explain to me why Microsoft would want to take on the grunt work of chip design? If they have an opinion as to the high level design, surely the folks they choose to partner with would listen. After all there is the Golden Rule.

they are heavily into it with the game machines ... and they've already hired some number of people in the area.

the current state of the art in trying to achieve real-time realism requires enormous software tricks to leverage the existing hardware implementations ... something akin to the speculation about the new POWER6 chip possibly having highly skilled people doing a large amount of very customized circuit optimization.

there is some speculation that by sufficiently orienting the hardware design to the task ... advanced, realistic game implementations could be accomplished by a much larger percentage of the software programming population.

Microsoft to design its own CPUs - Next Xbox In Development

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft to design its own CPUs - Next Xbox In Development
Newsgroups: comp.sys.super,comp.arch,comp.sys.intel,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.chips
Date: Sun, 22 Oct 2006 12:57:15 -0600
krw <krw@att.bizzzz> writes:
The point is: why buy the cow if the milk is free? Chip design is expensive work. Chip fab even more so. Let others take the risks.

re:
https://www.garlic.com/~lynn/2006s.html#62 Microsoft to design its own CPUs - Next Xbox In Development

there are a fair number of fabless chip operations (spun-off and/or outsourced). the capital investment risk in the actual fabs is why some number of chip operations have spun them off.

there were some number of operations in the 70s/80s that went thru this transition period from custom/proprietary to cots (commercial off the shelf) ... where the cost trade-off was to use less-expensive off-the-shelf chips and devote relatively scarce talent to more critical parts of their business.

some of this could be considered in the light of the commerce dept. meetings on hdtv and competitiveness in the late 80s and early 90s ... supposedly whoever owned the custom hdtv chip business would be producing so many of these chips that they would become the commodity standard.

so given a relatively high-volume activity ... are there marginal, incremental activities that you can invest in that would still show a positive ROI? part of this may be that you have already invested in all the obvious stuff ... so it isn't directly a trade-off decision about where to apply scarce funding and skills; you've already done all the more obvious things (both $$$ and skills). so now can you show incremental return-on-investment by moving into other parts of the activity; possibly considered as a form of vertical integration.

so the question is whether there is enuf revenue ROI related to some specific target markets to justify moving into a more vertically integrated operation (and whether there is sufficient custom chip volume that it becomes somewhat akin to what numerous people feared was going to happen with hdtv).

Is the teaching of non-reentrant HLASM coding practices ever defensible?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is the teaching of non-reentrant HLASM coding practices ever defensible?
Newsgroups: bit.listserv.ibm-main
Date: Sun, 22 Oct 2006 12:39:03 -0600
gilmap@ibm-main.lst (Paul Gilmartin) writes:
Some non-IBM systems can mark segments as I-fetch only and D-fetch only. Does z/Series have this capability? It instantly traps on wild-branch-into-data. Might also provide a guideline for cache management.

the stack smashing and buffer overflow exploits (highly correlated with numerous c language programming environments) somewhat recently led to a d-fetch-only hardware feature ... aka a countermeasure to various attacks hiding instructions inside incoming data. d-fetch only wouldn't fix programming problems with allowing long data/string structures to overlay things they shouldn't ... but it would at least prevent the execution of any hidden instructions.

various flavors of i-fetch (& execute) only hardware have been around for somewhat longer (execute-only, as opposed to no-execute, which is the later countermeasure to the various vulnerabilities that have significantly higher occurrence in c programming environments)
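
the modern unix-ish analogue can be sketched with posix mmap protection flags (a hypothetical demo, expected to die with SIGSEGV on hardware/OS enforcing no-execute): data pages mapped without PROT_EXEC mean that instructions smuggled in as data fault rather than execute.

/* hypothetical demo of the no-execute idea: data pages mapped
   without PROT_EXEC -- jumping into them faults instead of
   executing any smuggled instructions */
#include <sys/mman.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    size_t len = 4096;
    /* read/write but *not* executable -- the no-execute case */
    unsigned char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    memset(buf, 0xc3, len);   /* x86 'ret' opcodes arriving as hostile data */

    /* the data-to-function-pointer cast is itself implementation-defined;
       the call is expected to fault (SIGSEGV) under no-execute */
    void (*fn)(void) = (void (*)(void))buf;
    fn();
    return 0;
}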

old post discussing 360 key fetch/store protection and emerging d-fetch only (no-execute) ...
https://www.garlic.com/~lynn/2005.html#5 [Lit.] Buffer overruns

a few of the other no-execute posts (countermeasure for stack smashing & buffer overrun vulnerabilities)
https://www.garlic.com/~lynn/2004q.html#82 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#0 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#1 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#3 [Lit.] Buffer overruns

misc. posts with any mention of buffer overflow
https://www.garlic.com/~lynn/subintegrity.html#overflow

previous posts in this thread:
https://www.garlic.com/~lynn/2006s.html#53 Is the teaching of non-reentrant HLASM coding practices ever defensible?
https://www.garlic.com/~lynn/2006s.html#55 Is the teaching of non-reentrant HLASM coding practices ever defensible?
https://www.garlic.com/~lynn/2006s.html#61 Is the teaching of non-reentrant HLASM coding practices ever defensible?

Paranoia..Paranoia..Am I on the right track?.. any help please?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Paranoia..Paranoia..Am I on the right track?.. any help please?
Newsgroups: alt.computer.security
Date: Sun, 22 Oct 2006 15:29:37 -0600
tomas <tomas@kasdre.com> writes:
When I am ready to start again, I bring a clone of the original back into the container.

virtual machines are the new 40yr old thing ... starting with cp40 at the cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech

with a custom-modified 360/40 with virtual memory hardware ... and then, when the standard 360/67 (w/virtual memory) became available in 1967, cp40 morphed into cp67.

the term commonly used in the 60s and 70s for this technique was padded cell (for isolating any possible bad behavior).

some of the padded cell terminology shows up periodically in the vmshare archives ...
http://vm.marist.edu/~vmshare/

online computer conferencing provided by tymshare to the SHARE organization starting in the mid-70s ... on their virtual-machine-based commercial timesharing platform
https://www.garlic.com/~lynn/submain.html#timeshare

Why these original FORTRAN quirks?; Now : Programming practices

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why these original FORTRAN quirks?;  Now : Programming practices
Newsgroups: alt.folklore.computers
Date: Sun, 22 Oct 2006 18:50:36 -0600
Larry__Weiss <lfw@airmail.net> writes:
You know that sounds now like such an obvious thing to do, but I don't think I ever did it.

I wonder if there was a machine around just to generate listings of card-decks? There were the behemoth card-sorters, and card deck duplicators, but as far as I knew, none made just to get a hardcopy listing.


the university student keypunch room had a card sorter, a collator, and a 407 ... these were used for some sporadic tab card stuff the administration hadn't "computerized" .... however, most of the time the 407 had a plug board set up for straight 80x80 card listing ... that could be used by anybody.
